How pyscn-bot Works
Static analysis meets AI agent. Accurate reviews you can trust.
Code Quality in the AI Coding Era
AI tools like Copilot and Claude are generating more code than ever. Developers are shipping faster, but who's watching the codebase?
Technical debt accumulates silently. Complex functions, dead code, and duplicated logic slip through PR reviews and pile up over time.
You need a system that autonomously maintains your code. Not just reviewing PRs, but continuously monitoring your entire codebase.
pyscn-bot: Autonomous Code Maintenance
pyscn-bot doesn't wait for pull requests. It proactively scans your entire codebase on a schedule, finds architectural issues, and reports them before they become critical.
Code Audit
Weekly to Daily
Scans your entire repository on a schedule. Identifies complexity hotspots, dead code, duplications, and architectural issues. Creates a GitHub Issue with a health score and prioritized recommendations.
PR Code Review
On Changes
Automatically reviews pull requests. Catches problems in new code before they're merged. Complements the scheduled audit with real-time feedback.
Built for Python
pyscn-bot uses pyscn, a static analysis tool designed specifically for Python. It parses the AST to understand your code deeply—not just pattern matching.
Python-specific issues
Mutable default arguments, circular imports, unused instance variables (see the example below)
Accurate analysis
AST-based parsing tracks variable scope and function calls precisely
Concrete metrics
"Complexity 18" instead of "looks complex". Numbers you can act on
Example Reviews
High Complexity Function
validate_and_process() has cyclomatic complexity of 18 (threshold: 15). Complex functions are harder to test and more prone to bugs.
Suggestion: Split into validate_schema(), validate_constraints(), and process_validated().
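A sketch of what that split could look like. Only the function names come from the suggestion above; the bodies here are hypothetical stand-ins for the real logic:

    # Hypothetical sketch of the suggested refactor: each helper owns one
    # concern, so each stays simple and can be tested on its own.
    def validate_schema(payload):
        required = {"id", "email"}
        missing = required - payload.keys()
        if missing:
            raise ValueError(f"missing fields: {sorted(missing)}")

    def validate_constraints(payload):
        if "@" not in payload["email"]:
            raise ValueError("invalid email address")

    def process_validated(payload):
        return {"id": payload["id"], "email": payload["email"].lower()}

    def validate_and_process(payload):
        validate_schema(payload)
        validate_constraints(payload)
        return process_validated(payload)

    print(validate_and_process({"id": 1, "email": "User@Example.com"}))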
Same code in two places
Lines 45-62 in user.py and lines 23-40 in admin.py contain nearly identical code (87% match).
Suggestion: Move the shared logic to utils/validation.py. You won't have to fix the same bug twice.
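One way the extraction might look. The module path utils/validation.py comes from the suggestion; the function name and checks are illustrative:

    # utils/validation.py -- hypothetical shared module; user.py and
    # admin.py import this instead of keeping their own copies.
    def validate_account_fields(data):
        errors = []
        if not data.get("email"):
            errors.append("email is required")
        if len(data.get("name", "")) > 100:
            errors.append("name must be 100 characters or fewer")
        return errors

    # Both callers then use:
    #     from utils.validation import validate_account_fields
    #     errors = validate_account_fields(form_data)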
How the AI Agent Analyzes Code
Deciding which tool to use next based on results
Unlike simple LLM wrappers, pyscn-bot's AI agent thinks before acting. It examines results, decides what to investigate next, and calls the right tool—just like a human reviewer.
Example: Analyzing a complex function
"Let me check the complexity first"
pyscn complexity ./src → utils.py has a function with complexity 18
"Complexity is high. Let me read the code"
read_file("src/utils.py") → validate_and_process() is 80 lines long
"This looks like duplicated logic. Let me check"
pyscn clones ./src → Same code exists in admin.py (87% match)
"Found the issues. Time to write suggestions"
Review comment generated
validate_and_process() has complexity 18 and duplicates code in admin.py. Extract shared logic to utils/validation.py.
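In code, that investigation loop looks roughly like this. This is a simplified sketch, not pyscn-bot's implementation; the hard-coded decide_next_step() stands in for the AI agent's tool-choosing step:

    # Simplified sketch of the agent loop (illustrative, not pyscn-bot's actual code):
    # run a tool, look at the result, decide the next step, repeat until
    # there is enough evidence to write the review.
    def run_tool(name, target):
        # Stand-in for real calls like `pyscn complexity ./src`,
        # read_file("src/utils.py"), or `pyscn clones ./src`.
        return f"{name}({target}) -> result"

    def decide_next_step(findings):
        # In pyscn-bot the AI agent makes this decision from the results so far;
        # this fixed sequence just mirrors the walkthrough above.
        script = ["complexity", "read_file", "clones", "report"]
        return script[len(findings)]

    def review(repo_path):
        findings = []
        while True:
            tool = decide_next_step(findings)
            if tool == "report":
                return "\n".join(findings)  # the review comment is written from all findings
            findings.append(run_tool(tool, repo_path))

    print(review("./src"))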
Why this approach?
The AI agent doesn't stop at one analysis. It looks at results, thinks about what else might be wrong, and keeps investigating.
Just like a human code reviewer, it reads code and digs deeper when something looks off.
pyscn Analysis Tools
Complexity
Measures cyclomatic complexity of functions
Dead Code
Finds unused functions, variables, imports
Code Clones
Detects copy-pasted code blocks
Coupling
Analyzes dependencies between modules
Architecture
Visualizes module dependencies and detects circular imports
Your Code is Safe
We understand security concerns. Here's exactly what happens to your code:
Sent to Claude API
Your code is sent to Anthropic's Claude API for analysis. Anthropic does not use API data to train models.
Not Stored
We don't store your code. Once the review is generated and posted to GitHub, the code is discarded.
GitHub Permissions
pyscn-bot only requests permissions it needs: read code, write PR comments, create issues for audit reports.
Try It on Your Repository
Install pyscn-bot and create a PR. See the difference in your first review.
Install Free