GPT-5

OpenAI unveils GPT-4.5 'Orion,' its largest AI model yet

OpenAI announced on Thursday it is launching GPT-4.5, the much-anticipated AI model code-named Orion. GPT-4.5 is OpenAI's largest model to date, trained using more computing power and data than any of the company's previous releases. As for other ChatGPT users, customers signed up for ChatGPT Plus and ChatGPT Team should get the model sometime next week, an OpenAI spokesperson told TechCrunch.

Show HN: AIDictation – zero data retention dictation app

Hi HN,I built AIDictation.com, a voice to text app written in Swift. It sends audio to my own backend, runs it through a Whisper-based pipeline, and returns a transcription you can then send straight into an AI chat like ChatGPT or Claude.I’ve been building full‑stack apps for ~20 years, but this is my first Swift application. I leaned heavily on AI coding tools to get from zero Swift to a working app and backend in a couple of weeks.What it doesRecords audio and sends it to my server. The backe

Show HN: Guide – to help me get moving and keep the momentum going

Hi HN, I built Guide (leaning almost exclusively on Gemini Pro 2.5) for myself initially to get over the blank-page-hurdle while starting on a goal. As well as to reduce the overwhelm from huge lists of to-do when I got into it.It's Local. Single file HTML+CSS+JS onlyYou can see the code and try it out here: https://github.com/nextfiveinc/guide (just save to your phone / desktop and start playing;)What it does: Guide is a task management app based on CBT principles

Tell HN: ChatGPT has 13B revenue but can't make a working website

People talk a lot about AI being really great. But can OpenAI use it to make their own website work?The ChatGPT performance is terrible, and has been for months. I use ChatGPT on both a Mac M3 w/32GB RAM, and a AMD Ryzen 7 PRO 7840U (w/ Radeon 780M) & 32GB RAM. After I have been using a single ChatGPT page/thread/whatever to do research for a day or two, it becomes insanely slow, locking up the browser for minutes at a time. The only recourse is to stop the page, kill the

Show HN: Changelog-bot – Generate CHANGELOG.md from Git and release notes

Hi HN! I built `changelog-bot`, a small CLI/GitHub Action that generates CHANGELOG.md entries from your git history + release notes with AI.I wanted something that: - doesn’t require labels or strict conventions - works in CI without scripting - can optionally use LLMs (OpenAI/Anthropic) but never requires them - falls back to a deterministic git/PR-based summary when no API key is setUsage example: pnpm dlx @nyaomaru/changelog-bot --release-tag HEAD --dry-run Or as a GitHu

OpenAI Releases Its ‘More Conversational’ GPT-5.1

OpenAI dropped a mini-update to its model, releasing GPT-5.1 to the public. According to the company, the update will make its chatbot, ChatGPT, “smarter” and “more conversational,” introducing new ...

OpenAI's GPT-5 is here and free for all ChatGPT users

At long last, GPT-5 has arrived — and it's free to all ChatGPT users. On Thursday, OpenAI launched what it describes as its smartest and fastest model yet during a livestream event. In a press ...

GPT-5 is here and your network needs to catch up

Security has to keep pace as well. GPT-5 interacts with sensitive data, often pulling from live internal systems like ...

Who Owns Claude AI (And Is It Amazon?)

Amazon's huge investment in Anthropic sparked speculation over Claude AI's ownership. We break down the funding, partnerships ...

Show HN: I built a local fuzzing tool to red-team LLM agents (Python, SQLite)

I spent the last week building a local-first security tool because I was tired of paying $500/mo for enterprise SaaS just to test my AI agents for basic vulnerabilities.The tool is called Agent Exam Pro. It's a Python-based fuzzer that runs locally on your machine (no cloud data leaks).How it works:The Engine: Takes a base test case and runs it through 16 mutation strategies (Base64, Roleplay, Token Smuggling) to generate 1,000+ variations.The Payloads: I curated 280+ real-world exploi

Show HN: LLM-models – a CLI tool to list available LLM models across providers

I built a simple CLI tool to solve a problem I kept running into: which exact model names are actually available through OpenAI, Anthropic, Google, and xAI APIs at any given time?The APIs themselves provide this info, but I got tired of checking docs or writing one-off scripts. Now I can just run:$ llm-models -p Anthropicand get the current list with human-readable names.Installation: macOS: brew tap ljbuturovic/tap && brew install llm-models Linux: pipx install llm-models Wind

Show HN: Kodaii generated a 20K-line FastAPI back end from one prompt

We’ve been working on the Kodaii engine, aimed at generating complete backends that stay coherent across models, routes, workflows, and tests — not just isolated snippets.To get a sense of how well the engine handles a real project, we asked it to build a Calendly-style booking system from a single prompt. It ran the whole process — planning, code generation, tests, infra, and deployment — in about 8 hours.What it generated: - ~20K lines of Python (FastAPI, async)- Postgres schema (6 tables)- Se

Tell HN: OpenAI Security Incident with PII

Today I got the following email from OpenAI:Subject: Third-party security incidentFrom: OpenAI <noreply@email.openai.com>Transparency is important to us, so we want to inform you about a recent security incident at Mixpanel, a data analytics provider that OpenAI used for web analytics on the frontend interface for our API product (platform.openai.com). The incident occurred within Mixpanel’s systems and involved limited analytics data related to your API account.This was not a breach of Op

Show HN: Superglue – OSS integration tool that understands your legacy systems

If you've ever worked in a large company, you've probably encountered "shadow infrastructure": scripts nobody understands or custom connectors written once and never touched again. This glue layer isn't documented, isn't owned by anyone, and tends to break when systems are upgraded or someone leaves. It's also the part everybody dreads working on, because it's hard to understand, painful to work with, and full of unknown unknowns.We built superglue so that

OpenAI's new GPT‑5.1-Codex-Max — all about the agentic coding model that can work for long hours

Max, a new coding model designed for detailed and long-running software development tasks. Here is an overview of the model ...

What to be thankful for in AI in 2025

Liquid AI spent 2025 pushing its Liquid Foundation Models (LFM2) and LFM2-VL vision-language variants, designed from day one for low-latency, device-aware deployments — edge boxes, robots, and ...

OpenAI debuts GPT‑5.1-Codex-Max coding model and it already completed a 24-hour task internally

Max, a new frontier agentic coding model now available in its Codex developer environment. The release marks a significant step forward in AI-assisted software engineering, offering improved ...

How OpenAI Ships New Products With Lightning Speed

OpenAI has shipped new products at a relentless clip in the second half of 2025. Not only has the company released several ...

ChatGPT 5.1 Codex Max : AI Coder Handles Massive PRs, Reviews & Debugging at Scale

OpenAI’s GPT 5.1 Codex Max runs 24-hour workflows, handles multifile refactors, reaches 80% accuracy, and uses 30% fewer tokens to reduce costs

I built an open-weights memory system that reaches 80.1% on the LoCoMo benchmark

I’ve been experimenting with long-term memory architectures for agent systems and wanted to share some technical results that might be useful to others working on retrieval pipelines.Benchmark: LoCoMo (10 runs × 10 conversation sets) Average accuracy: 80.1% Setup: full isolation across all 10 conv groups (no cross-contamination, no shared memory between runs)Architecture (all open weights except answer generation)1. Dense retrievalBGE-large-en-v1.5 (1024d)FAISS IndexFlatIPStandard BGE instructio