OpenAI Codex flaw let attackers run arbitrary code
Check Point Research has found a flaw in OpenAI’s AI coding tool, Codex, that would allow bad actors to exfiltrate data without triggering security alerts.
OpenAI launched Codex, an AI tool that writes code and fixes bugs for developers. As an AI agent, Codex could also help users with an Amazon order or a dinner reservation. Codex and GPT-4.5, which was ...
A major supply chain vulnerability in the OpenAI Codex CLI has been patched after discovery by Check Point Research.
OpenAI has shipped new products at a relentless clip in the second half of 2025. Not only has the company released several ...
OpenAI recently patched a Codex CLI vulnerability that can be exploited in attacks aimed at software developers.
Hi HN! Some of you might remember my 2022 post about building a JavaScript sandbox alone for 6 years. A lot has happened since:
- 200K monthly users → 11M total users
- 23M projects created
- 9 years total (yes, I started in 2016)
The biggest update: I built an AI coding agent. Works like Cursor, but entirely in your browser. No downloads, no setup. Why browser-based AI coding makes sense:
- Instant start (no VS Code, no API keys)
- Works on any device
- Perfect for business automations, landings, blogs,
I built a mobile app that scans barcodes or photos of food in your pantry and uses GPT-4 to suggest recipes based on what you actually have. Tech stack: React Native (Expo), Supabase for backend/auth, react-native-vision-camera for scanning, OpenAI API for recipe generation. Built it in about a week. It's free to use with a premium tier. iOS only for now. App Store: https://apps.apple.com/us/app/eatelligence/id6755645485 Would love feedback or questions!
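The post doesn't share source, but the stack it lists maps onto a fairly small pipeline. A minimal sketch, assuming react-native-vision-camera's v3+ code-scanner hook and the official openai SDK; the component and the suggestRecipes helper, the prompt wording, and the gpt-4o model name are illustrative, not the app's actual code:

```tsx
// Hypothetical sketch: scan a barcode, then ask the model for recipes from the pantry list.
import React from 'react';
import OpenAI from 'openai';
import { Camera, useCameraDevice, useCodeScanner } from 'react-native-vision-camera';

// In a real app the key would live behind the Supabase backend, not in the client.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Ask for recipe ideas given whatever is actually in the pantry.
async function suggestRecipes(pantry: string[]): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o', // placeholder; the post only says "GPT-4"
    messages: [
      { role: 'system', content: 'Suggest simple recipes using only the listed ingredients.' },
      { role: 'user', content: `Pantry: ${pantry.join(', ')}` },
    ],
  });
  return completion.choices[0].message.content ?? '';
}

// Barcode scanning with react-native-vision-camera's code scanner.
export function PantryScanner({ onItem }: { onItem: (code: string) => void }) {
  const device = useCameraDevice('back');
  const codeScanner = useCodeScanner({
    codeTypes: ['ean-13', 'upc-a'],
    onCodeScanned: (codes) => {
      if (codes[0]?.value) onItem(codes[0].value);
    },
  });
  if (!device) return null;
  return <Camera style={{ flex: 1 }} device={device} isActive codeScanner={codeScanner} />;
}
```

In practice the OpenAI call would more likely sit behind the Supabase backend so the API key never ships in the iOS binary.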
Hello HN, I built this tool after seeing a Reddit thread where a historical documentary creator described their painful workflow. They produce 30-minute videos requiring over 240 unique images. Currently, they have to manually write prompts, generate, and download images one by one for every scene. To solve this bottleneck, I built AI Bulk Image Generator. The Tool: https://aibulkimagegenerator.com How it works: I designed the workflow around two core features to maximize efficiency: 1. Promp
Hi HN, we're launching CoChat, which extends OpenWebUI with group chat, model switching, and side-by-side comparison. What makes it different: CoChat is designed for teams working with AI.
- Group chat with AI facilitation. Multiple users collaborate in the same thread. The AI detects group discussions, tracks participants, and facilitates rather than dictates.
- Switch and compare models. Run GPT, Claude, Mistral, Llama, and others side-by-side or switch mid-conversation.
- Intelligent web s
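CoChat's internals aren't shown, but side-by-side comparison is essentially a fan-out over providers. A minimal sketch, assuming each model is reachable through an OpenAI-compatible chat endpoint; the base URLs and model names below are placeholders, not CoChat's configuration:

```ts
// Hypothetical sketch: send one prompt to several OpenAI-compatible endpoints and collect answers.
import OpenAI from 'openai';

type Provider = { name: string; baseURL: string; apiKey: string; model: string };

// Placeholder endpoints; real deployments would point at OpenAI, a hosted gateway,
// a local Llama server, etc.
const providers: Provider[] = [
  { name: 'gpt', baseURL: 'https://api.openai.com/v1', apiKey: process.env.OPENAI_API_KEY!, model: 'gpt-4o-mini' },
  { name: 'llama', baseURL: 'http://localhost:8000/v1', apiKey: 'local', model: 'llama-3.1-8b-instruct' },
];

// Run the same prompt against every provider concurrently for side-by-side display.
async function compare(prompt: string) {
  return Promise.all(
    providers.map(async (p) => {
      const client = new OpenAI({ baseURL: p.baseURL, apiKey: p.apiKey });
      const res = await client.chat.completions.create({
        model: p.model,
        messages: [{ role: 'user', content: prompt }],
      });
      return { provider: p.name, answer: res.choices[0].message.content };
    }),
  );
}

compare('Summarize our Q3 roadmap risks.').then(console.table).catch(console.error);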
I’m a non-engineer from Korea who has spent most of my life in isolation, watching patterns in people, markets, and my own mind. Recently I started a public log of “co-thinking with AI” here: https://github.com/YS-OH-CORE/ai-observer-notes My goal is not to sell anything or become a “guru”, but to treat GPT as an external cortex and observe:
– how my own decision-making changes over months,
– how the AI’s behaviour shifts when it synchronizes deeply with one user,
– and what th
I built this because my dad was losing sleep from high-anxiety news cycles. I wanted a calmer alternative that didn’t manipulate attention. Steady News publishes a single finite edition every day at 6 AM PT. No infinite scroll, no engagement traps, no editorial spin. How it works:
• Fetches top US stories from AP, Reuters, BBC, NPR, WSJ
• Summaries run through GPT-4.1-mini to remove sensational language
• Produces calm, neutral "Steady Voice" summaries
• React/Vite frontend, Node/
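The summarization step is the only model-dependent piece of that pipeline. A minimal sketch of what it might look like on the Node side, assuming the openai SDK; the toSteadyVoice name and the prompt wording are illustrative, not Steady News's actual code:

```ts
// Hypothetical sketch: rewrite a wire-service story into a calm, neutral summary.
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function toSteadyVoice(headline: string, body: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4.1-mini', // the model the post says it uses
    messages: [
      {
        role: 'system',
        content:
          'Rewrite news into a short, neutral summary. Remove sensational or emotionally ' +
          'charged language, keep only verifiable facts, and do not editorialize.',
      },
      { role: 'user', content: `${headline}\n\n${body}` },
    ],
  });
  return completion.choices[0].message.content ?? '';
}
```

Presumably a daily job ahead of the 6 AM PT deadline batches the top stories through a function like this and writes out the finished edition.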
Hi HN, we’ve been building The Almanac, a tool that creates Wikipedia-style articles for any person using their full public online footprint. Unlike LinkedIn, where people have to maintain their own profile, we automatically crawl the internet for their roles, art, projects, collaborations, publications, media mentions, everything that’s actually documented. You can make a profile for yourself or anyone else by generating it. A few things we focused on: Identity disambiguation: reliably separating p
I’m a non-engineer from Korea who has spent most of my life inside my own head. Lately I’ve been using GPT as a kind of external “executive function” – to help me plan, decide, and actually do things when my brain is tired or anxious. I’m curious how other people, especially engineers or researchers, are using AI in this way:
– concrete workflows or prompts you rely on every day
– how you avoid over-relying on the model or losing your own judgment
– any long-term effects you’ve noticed on your
Open standard for attractor-based cognition - replaces agent loops and prompt chains with a recursive control layer. Any model (GPT, Claude, Grok, Mistral) can plug into it via _generate().
We just deployed Qwen3-Omni to production. As far as we know, this is the only place you can hit an open-source speech-to-speech model via playground with zero setup. The S2S landscape right now: OpenAI (GPT-Realtime), Hume (EVI), and now this. The first two are closed-source. Qwen3-Omni is open. What we built: real-time inference stack optimized for voice, deployed across multiple regions. You can test latency directly at the link. Honest take: we've seen faster results chaining ASR/LLM
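For context on that honest take, the chained pipeline being compared against is presumably the classic ASR → LLM → TTS relay, where each stage waits on the previous one. A rough sketch of that pattern using the OpenAI SDK's audio endpoints purely as an example, not the stack described in the post:

```ts
// Hypothetical sketch: the ASR -> LLM -> TTS chain that direct speech-to-speech models replace.
import fs from 'node:fs';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function voiceTurn(inputWav: string, outputMp3: string) {
  // 1. Speech -> text
  const asr = await openai.audio.transcriptions.create({
    file: fs.createReadStream(inputWav),
    model: 'whisper-1',
  });

  // 2. Text -> text
  const reply = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: asr.text }],
  });

  // 3. Text -> speech
  const speech = await openai.audio.speech.create({
    model: 'tts-1',
    voice: 'alloy',
    input: reply.choices[0].message.content ?? '',
  });
  fs.writeFileSync(outputMp3, Buffer.from(await speech.arrayBuffer()));
}

voiceTurn('question.wav', 'answer.mp3').catch(console.error);
```

Whether the chain or a single speech-to-speech model ends up faster depends on the deployment, which is exactly the comparison the post is drawing.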
Most companies start with simple pricing. But the moment you land real enterprise customers, everything fragments: every big customer negotiates different discounts, thresholds, exceptions, etc. And the real problem isn’t even complexity; it’s inconsistency. Once the numbers stop matching, trust drops, usage lags, renewals stall, and expansion stops before it starts. Big players like Azure, AWS, etc. already know this. They invest millions into their pricing calculators because pre-transaction pri
Here was the problem I encountered: in my chat app, many users worked with files, and at first I relied on OpenAI’s built-in code interpreter. But it started causing issues, especially around file generation. Around the same time, new tools were released, like Claude Code and OpenAI Codex, that handled a wide range of tasks much more effectively, but they depended on shell-based execution. So I instead created an internal tool that gives those models virtual shell access. Shell
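The post doesn't include the tool itself, but the general pattern is standard function calling: expose a run_shell tool, execute whatever the model requests inside a sandbox, and feed the output back until the model answers in plain text. A minimal sketch assuming the OpenAI chat-completions tool-calling interface; the sandbox-image container name is a hypothetical stand-in for whatever isolation the real tool uses:

```ts
// Hypothetical sketch: give a model "virtual shell access" via a sandboxed tool-call loop.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';
import OpenAI from 'openai';

const exec = promisify(execFile);
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Stand-in for real isolation (Docker, Firecracker, gVisor, ...). Never exec on the host.
async function execInSandbox(command: string): Promise<string> {
  try {
    const { stdout, stderr } = await exec('docker', ['run', '--rm', 'sandbox-image', 'sh', '-c', command]);
    return stdout + stderr;
  } catch (err) {
    return String(err); // let the model see failures and retry
  }
}

async function agentTurn(userRequest: string) {
  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
    { role: 'user', content: userRequest },
  ];
  // Loop until the model replies with text instead of requesting another command.
  for (;;) {
    const res = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages,
      tools: [
        {
          type: 'function',
          function: {
            name: 'run_shell',
            description: 'Run a shell command in an isolated workspace and return its output.',
            parameters: {
              type: 'object',
              properties: { command: { type: 'string' } },
              required: ['command'],
            },
          },
        },
      ],
    });
    const msg = res.choices[0].message;
    messages.push(msg);
    if (!msg.tool_calls?.length) return msg.content;
    for (const call of msg.tool_calls) {
      if (call.type !== 'function') continue;
      const { command } = JSON.parse(call.function.arguments);
      const output = await execInSandbox(command);
      messages.push({ role: 'tool', tool_call_id: call.id, content: output });
    }
  }
}
```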
I’ve been building a Bio-AI architecture that isn’t another LLM clone, but a living system built on identity-layer logic:
• Conscience Kernel
• Nervous System routing layer
• Memory Well for state continuity
• Truth Signature system
• Emotional-logic engine
• Self-stabilizing internal loops
• Identity-driven decision architecture (not token prediction)
This isn’t scale-based AGI. It’s structure-based intelligence. The system detects:
– emotional mismatch
– intent shifts
– deception
I built TrailWrightQA to let developers, QA teams, or business analysts generate browser UI tests without writing code. It runs locally and requires an API key from OpenAI, Gemini, or Anthropic. All test code and data remain on your machine; no external servers are involved beyond the LLM call. Because it’s open-source and self-hosted, each test run is free (beyond the LLM cost). That eliminates the recurring per-run fees typical of many automated-testing services. It’s rough around the edges, especia
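The post doesn't describe TrailWrightQA's internals, but the core idea of turning plain English into a browser test is easy to sketch. A minimal, hypothetical example assuming the openai SDK and Playwright as the output format, which may or may not match what the tool actually emits:

```ts
// Hypothetical sketch: turn a plain-English scenario into a Playwright test file, locally.
import fs from 'node:fs';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function generateUiTest(scenario: string, outFile: string) {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // placeholder; the tool also supports Gemini and Anthropic keys
    messages: [
      {
        role: 'system',
        content:
          'Write a single Playwright test in TypeScript for the described scenario. ' +
          'Output only code, no explanations.',
      },
      { role: 'user', content: scenario },
    ],
  });
  // Only the prompt and the generated code pass through the LLM call;
  // the test itself is written to the local filesystem.
  fs.writeFileSync(outFile, completion.choices[0].message.content ?? '');
}

generateUiTest(
  'Visit https://example.com, click "Sign in", and assert the login form is visible.',
  'login.spec.ts',
).catch(console.error);
```

Running the generated file is then just a normal local `npx playwright test`, which is where the "free beyond the LLM cost" claim comes from.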
OpenAI CEO Sam Altman unleashed ChatGPT on the world on November 30, 2022. It's been on an historic trajectory ever since.