Cortex AI · Live track record
When we say 70%, do we actually win 70% of the time?
Most tipsters tell you their win rate. We show you whether our confidence numbers are honest. Calibration is the question every serious gambler should ask: each dot in the chart should sit on the diagonal. Above the line means we under-promised; below means we over-promised. ECE is the average distance from honesty, in percentage points. Lower is better; zero would be perfect.
| Confidence bucket | Realised win rate | n |
| --- | --- | --- |
| 50–54% | 50% | 14 |
| 55–59% | 50% | 32 |
| 60–64% | 68% | 31 |
| 65–69% | 38% | 13 |
| 70–74% | 25% | 4 |
| 75–79% | 100% | 1 |
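The bucket table above is enough to reproduce an ECE-style number yourself. A minimal sketch, assuming each bucket's predicted probability is its midpoint (the live chart may use the mean predicted confidence per bucket instead) and applying the stated ≥10-pick cutoff:

```python
# Sketch only: the bucket data comes from the table above; the
# midpoint-as-prediction choice and the variable/function names
# are assumptions, not the site's actual code.

# (low %, high %, realised win rate %, n) per confidence bucket
buckets = [
    (50, 54, 50, 14),
    (55, 59, 50, 32),
    (60, 64, 68, 31),
    (65, 69, 38, 13),
    (70, 74, 25, 4),   # n < 10: excluded from ECE
    (75, 79, 100, 1),  # n < 10: excluded from ECE
]

def ece(buckets, min_n=10):
    """Pick-weighted average |predicted - realised| gap, in percentage points."""
    kept = [b for b in buckets if b[3] >= min_n]
    total = sum(n for *_, n in kept)
    return sum(n * abs((lo + hi) / 2 - rate) for lo, hi, rate, n in kept) / total

print(f"ECE ≈ {ece(buckets):.2f} pp")
```

Note the result depends on the midpoint assumption; a model that predicts near the bottom of each bucket would score differently against the same realised rates.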
| Metric | Value |
| --- | --- |
| Win rate (all picks) | 54% |
| Settled (30d) | 105 |
| Wins | 51 |
| Losses | 44 |
What you’re looking at
- ECE (Expected Calibration Error) — weighted average gap between our predicted % and the realised hit rate. Computed on buckets with at least 10 picks so a tiny tail doesn’t skew the metric. Target: < 4%.
- Brier score — mean squared error between confidence and outcome. 0 is perfect, 0.25 is a coin flip. Lower is better.
- Wilson 95% CI — the vertical bar on each dot is the confidence interval for that bucket's realised rate. Bars tighten as a bucket accumulates samples; small buckets render as hollow dots.
- 50–54% bucket — these are essentially coin-flip picks. We grade them honestly so the headline average isn't inflated, but we don't sell them as edge.
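The Brier score and Wilson interval above are both standard formulas and easy to check by hand. A minimal sketch (function names and the 21-of-31 example are illustrative assumptions, chosen to match the 68% shown for the 60–64% bucket):

```python
import math

def brier(confidences, outcomes):
    """Mean squared error between predicted probability and 0/1 outcome.
    0.0 is perfect; a constant 0.5 forecast scores 0.25 (a coin flip)."""
    return sum((p, o) == () or (p - o) ** 2 for p, o in zip(confidences, outcomes)) / len(outcomes)

def wilson_ci(wins, n, z=1.96):
    """Wilson 95% score interval for a bucket's realised win rate.
    Unlike the naive normal interval, it stays inside [0, 1] even for tiny n."""
    p = wins / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# e.g. a bucket with 21 wins in 31 picks (~68% realised rate)
lo, hi = wilson_ci(21, 31)
```

The Wilson interval is the right choice here because several buckets are small (n=1, n=4); a plain normal approximation would spill outside [0, 1] for them.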