Tier 4
Series 304
304C: Utility Theory
"Binary logic (Yes/No) is robotic: 'If hungry, eat.' But what if you are hungry AND under fire? A Navigator uses Utility Theory to weigh options and choose the *best* action, not just the *first* one."
The Concept: Utility AI
Instead of `if/else`, every possible action is given a **Score** (0 to 1).
* **Factors:** Health, Distance, Ammo, Threat.
* **Weighting:** `Hunger * 0.8 + Safety * 0.2`
* **Selection:** The AI picks the action with the highest total score.
Red Flag Detected
The AI Trap: "The Priority List"
You ask the AI: "Decide whether to heal or shoot."
```csharp
// AI-Generated Code: Rigid If-Else
if (health < 30)
    Heal();
else if (canSeePlayer)
    Shoot();

// Audit Fail: What if health is 29 but the enemy is 1 HP away from dying?
// The AI will stupidly heal instead of finishing the fight.
```
This is "Threshold Blindness." Rigid numbers prevent the AI from making contextual, smart decisions.
Elite Telemetry
Research shows "Elite" teams achieve 15% faster lead times by keeping AI on a "very tight leash."
- **Small Batches:** Solving one problem at a time prevents logic drift.
- **Modular Design:** Localizing the "blast radius" of AI changes.
- **Tight Loops:** Rapid iteration with constant code review.
The Navigator's Correction
Corrective Protocol
```csharp
// Corrected: Fuzzy Logic (weighted scoring)
float healScore = (1 - healthPct) * 0.8f;
float killScore = (1 - enemyHealthPct) * 1.2f;

if (killScore > healScore)
    Shoot();
else
    Heal();
```
Your Pilot Command
> A skilled Navigator directs the AI to use Scoring.