AI agents embedded in CI/CD pipelines can be tricked into executing high-privilege instructions hidden in crafted GitHub issues or pull request text.
Researchers at Aikido Security traced the issue back to workflows that pair GitHub Actions or GitLab CI/CD with AI tools such as Gemini CLI, Claude Code Actions, OpenAI Codex Actions, or GitHub AI Inference. They found that unsanitized user-supplied strings, such as issue bodies, pull request descriptions, or commit messages, can be fed directly into prompts for AI agents, in an attack they are calling PromptPwnd.
Depending on what the workflow lets the AI do, this can result in unintended edits to repository content, disclosure of secrets, or other high-impact actions.
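A minimal sketch of the vulnerable pattern helps make this concrete. The action name `some-vendor/ai-review-action` and its `prompt` input below are hypothetical placeholders, not the API of any specific tool named above; what matters is that attacker-controlled issue text is interpolated straight into the agent's prompt in a workflow that also holds write permissions and a secret.

```yaml
# Illustrative only: the AI action and its inputs are hypothetical.
name: ai-issue-triage
on:
  issues:
    types: [opened]

permissions:
  contents: write        # broad permissions make a successful injection high impact
  issues: write

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Ask the AI agent to triage the issue
        uses: some-vendor/ai-review-action@v1   # hypothetical AI agent action
        with:
          # Vulnerable pattern: the attacker-controlled issue body is interpolated
          # directly into the prompt, so instructions hidden in the issue text
          # become part of what the agent is told to do.
          prompt: |
            Summarize and label the following GitHub issue:
            ${{ github.event.issue.body }}
        env:
          API_KEY: ${{ secrets.AI_API_KEY }}    # a secret an injected prompt could try to expose
```

The usual hardening, as a general principle rather than a vendor-specific fix, is to pass untrusted text to the agent as data (for example via an environment variable the prompt merely references) instead of splicing it into the prompt template, and to scope the workflow's permissions and secrets to the minimum the agent actually needs.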







