Tag: Model

  • Round 5: 6 from 9, Season Sits at 31/45 (69%)

    Gather Round delivered the usual chaos. Six from nine is fine. The three high-confidence calls all went in, which is what you want from the model — hold firm on the certainties and accept that the coin-flips will go 50/50 over time. This week the coin-flips went badly.

    Round 5 Results

    Adelaide 114 def Carlton 86
    Predicted: Adelaide (61%)

    Adelaide at home against Carlton and the model had them at medium confidence. They won by 28. Carlton have now lost to Melbourne, North Melbourne, and Adelaide in consecutive weeks — a team that was supposed to be pushing for finals is looking very ordinary. Adelaide are quietly building something; a 28-point win in Gather Round week is not nothing.

    Fremantle 45 def Collingwood 39
    Predicted: Fremantle (51%)

    The model called it a coin-flip, and a coin-flip is exactly what it was. 84 total points across 120 minutes of football — one of the lowest-scoring games in recent memory. Fremantle scraped through by 6. Both teams looked like they’d rather be anywhere else. The model gets credit for the right team, no credit for insight.

    Brisbane Lions 92 def North Melbourne 66
    Predicted: Brisbane Lions (76%)

    The first of three high-confidence hits. Brisbane at 76% over North Melbourne in Barossa Park, and they won by 26. North’s strong run — three wins from four entering Round 5 — was always going to face a tougher test eventually. Brisbane are a genuine contender and the model is starting to show it. North still look good relative to where they were in 2025, just not at this level yet.

    Essendon 113 def Melbourne 68
    Predicted: Melbourne (51%)

    The model called this one a coin-flip, edging Melbourne by a hair. Essendon won by 45. That’s not a coin-flip result — that’s a team that was significantly better on the day. This was an Anzac eve game in Adelaide and Essendon were completely dominant. Melbourne, meanwhile, are in a strange place: they’ve been both better and worse than expected on different weeks. Their form is genuinely hard to read.

    Sydney 100 def Gold Coast 68
    Predicted: Sydney (58%)

    Low confidence, right result. Sydney continue to look like the best team in the competition. Four wins from five, and this was one of the more comfortable victories despite the modest probability. Gold Coast had a good Opening Round but have now lost three of their last four. The model’s view on Gold Coast is dropping and will drop further with this data feeding in.

    Hawthorn 104 def Western Bulldogs 64
    Predicted: Western Bulldogs (51%)

    The model had this as a 50/50, leaning fractionally to the Bulldogs. Hawthorn won by 40. That’s a hammering. The Bulldogs’ high-confidence win over Essendon in Round 4 is harder to read now that Essendon have just put 45 points on Melbourne and the Bulldogs have been thumped by 40. Hawthorn are three wins from five and the model hasn’t fully processed how good this team is — the 2025 data is pulling the number down. Sound familiar? Same issue as Melbourne on the other side. Some teams have changed more than the rolling window can capture.

    Geelong 122 def West Coast 76
    Predicted: Geelong (96%)

    The highest-confidence call of the round landed easily. Geelong by 46. West Coast are not a functional AFL team at this point — they’ve now been beaten by 128 points (Round 4 vs Sydney) and 46 points in consecutive away games. The model doesn’t need to know much to call these correctly. West Coast are quietly propping up the model’s high-confidence accuracy.

    Greater Western Sydney 131 def Richmond 75
    Predicted: GWS (96%)

    The second 96% call of the round, equally correct. GWS by 56 in Barossa Park. Richmond are in a similar situation to West Coast — a rebuilding side who the model correctly identifies as heavy underdogs every week. GWS had lost two in a row before their Round 4 bye; this was the emphatic response of a genuine premiership contender.

    St Kilda 81 def Port Adelaide 67
    Predicted: Port Adelaide (58%)

    Port Adelaide at low confidence and they lost at home — well, at Adelaide Oval — to St Kilda by 14. St Kilda are now two wins from their last three after a rough start to the year. Port were coming off a dominant win over Richmond and expected to handle St Kilda. They didn’t. The model had this as a low-confidence call and it missed; that’s acceptable. What’s less acceptable is that this is the second time Port have lost a game the model tipped them to win — the first was the 2-point loss to West Coast in Round 3, where they were medium-confidence favourites. Something to watch.


    Season to Date: 31/45 (69%)

    Round           Correct  Total  Accuracy
    Opening Round         3      5       60%
    Round 1               7      9       78%
    Round 2               6      7       86%
    Round 3               4      7       57%
    Round 4               5      8       63%
    Round 5               6      9       67%
    Season               31     45       69%

    The season sits at 69% — it has barely moved in three weeks. The high-confidence accuracy is the steadier number: three from three this round, and the overall high-confidence record remains strong at 82% across the season. The model’s problem is in the low-confidence band, where the coin-flips are running below 50%.


    Model Observations

    Hawthorn are better than the model thinks. Two statement wins in two weeks — the 1-point thriller over Geelong in Round 4 and a 40-point thumping of the Bulldogs in Round 5. The model keeps pricing their games as 50/50 or marking Hawthorn slight underdogs. The 2025 season data is dragging the number down. Until the window clears, treat Hawthorn predictions with scepticism and add a mental nudge upward.

    Melbourne’s form is genuinely volatile. They beat Carlton and Gold Coast (both upsets), were thumped by Fremantle in Round 2, and then lost by 45 to Essendon in Round 5. The model doesn’t know what to make of them because the data doesn’t either. This is a team in transition — the 2025 rolling average reflects a bad year, but the 2026 version is inconsistent rather than simply improved. Wide error bars on any Melbourne prediction.

    Essendon are emerging. The model called the Melbourne game a low-confidence coin-flip and they won it by 45. This is a team improving faster than the data is catching up. Back them at medium confidence until the numbers say otherwise.

    Port Adelaide’s losses are a pattern now. West Coast by 2 in Round 3. St Kilda by 14 in Round 5. Both were games the model expected Port to win. Their dominant wins (Richmond in Round 4, Essendon in Round 2) are real, but so are these failures. They’re a boom-or-bust team and the model treats them as consistent. Worth knowing.

    Round 6 tips are live. Nine games, with Hawthorn at home against Port Adelaide as the high-confidence call. The model is watching Melbourne, Essendon, and Hawthorn closely — the data will start catching up over the next few weeks.

  • Under the Hood: Why I Rebuilt the Prediction Model from Scratch

    The Original Model Was Good. Not Good Enough.

    When I launched Footy Science at the start of 2026, the prediction engine was a logistic regression model — a well-understood statistical technique that takes a bunch of numbers about how two teams have been playing and spits out a probability of who wins.

    It was trained on more than a decade of AFL data. It was calibrated carefully so that when it said “65% chance the home team wins,” that was actually true about 65% of the time. It achieved around 69% accuracy on matches it had never seen before. That’s a solid baseline.
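
    Calibration is easy to check when you keep your prediction history: bucket the tips by predicted probability and compare each bucket’s average against its actual hit rate. A minimal sketch of that check (bin width and variable names are illustrative, not the site’s actual code):

    ```python
    import numpy as np

    def calibration_table(pred_probs, outcomes, n_bins=5):
        """Compare predicted win probability with observed win rate, per bin."""
        pred_probs, outcomes = np.asarray(pred_probs), np.asarray(outcomes)
        bins = np.linspace(0.5, 1.0, n_bins + 1)  # tipped-team probabilities live in [0.5, 1.0]
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (pred_probs >= lo) & (pred_probs < hi)
            if mask.any():
                print(f"{lo:.0%}-{hi:.0%}: said {pred_probs[mask].mean():.0%}, "
                      f"got {outcomes[mask].mean():.0%} (n={mask.sum()})")
    ```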

    But it had a fundamental limitation I couldn’t shake: it was a single number in, single number out machine. Every match got reduced to one probability. No sense of how a team was winning. No ability to explain why it liked one team over another in any meaningful way. Just: “we think Carlton wins, 61%.”

    After five rounds of the 2026 season I rebuilt it. This post explains what changed and why.


    The Problem with One Big Number

    The old model worked roughly like this. Take about a dozen statistics — things like disposal rate, metres gained, recent wins, travel disadvantage — and combine them into a single score. Higher score means higher chance of winning.

    The stats get squashed together into a weighted average. A team that’s been moving the ball brilliantly but losing contested possessions looks identical to a team that’s winning the ball but going nowhere with it, as long as their combined score is the same.
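
    Here’s that failure mode in miniature, as a toy Python example (the numbers are invented): two statistically opposite teams become indistinguishable the moment everything is squashed into one sum.

    ```python
    # Toy example: one weighted sum cannot tell these two teams apart.
    weights = {"ball_movement": 0.5, "contested_ball": 0.5}

    slick   = {"ball_movement": 0.9, "contested_ball": 0.1}  # moves it brilliantly, loses the contest
    bullish = {"ball_movement": 0.1, "contested_ball": 0.9}  # wins the ball, goes nowhere with it

    def combined_score(team):
        return sum(weights[stat] * team[stat] for stat in weights)

    print(combined_score(slick), combined_score(bullish))  # 0.5 0.5 -- identical to the old model
    ```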

    That’s not how football works.

    A game of AFL has distinct phases. There’s the contest for the ball — who wins it, how efficiently, how often under pressure. There’s field position — who’s pushing the play forward into dangerous territory. There’s chance creation — how many shots are being generated. There’s conversion — how clinical each team is when they get their opportunities. And there’s momentum — recent form, experience, structural advantages like travel.

    These phases are related but they’re not the same thing. A team can dominate possession and lose. A team can get outplayed in the midfield but kick straight and win. The old model couldn’t see any of that structure. It just saw the average.


    What the New Model Does Differently

    The new model is a neural network — specifically, what I’ve called a Phase Model.

    Rather than throwing all the statistics into one blender, it divides them into five groups based on what part of the game they measure. Disposals, efficiency, contested possessions, and clearances go into the Accumulation phase. Metres gained and inside 50s go into Territory. Score involvements and scoring shots go into Chance Creation. Shot accuracy and average winning margin go into Conversion. Recent wins, experience, and travel go into Momentum.

    Each phase has its own small sub-model that processes only those statistics. It produces a score for each phase — a number that reflects which team has the advantage in that dimension of the game. Then all five phase scores feed into a final layer that produces the prediction.

    The result isn’t just a win probability. It’s also a predicted winning margin and an estimate of how uncertain that margin is. And because each phase produces its own score, you can see where the model thinks the game will be won or lost.
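
    For readers who prefer shape to prose, here is a minimal sketch of that architecture in PyTorch. The five phase groupings come straight from the description above; the layer sizes, names, and three-headed output are my illustration of the idea, not the production code.

    ```python
    import torch
    import torch.nn as nn

    # Feature groups as described above (names are placeholders for the
    # rolling home-vs-away differentials the model actually consumes).
    PHASES = {
        "accumulation":    ["disposals", "efficiency", "contested_possessions", "clearances"],
        "territory":       ["metres_gained", "inside_50s"],
        "chance_creation": ["score_involvements", "scoring_shots"],
        "conversion":      ["shot_accuracy", "avg_winning_margin"],
        "momentum":        ["recent_wins", "experience", "travel"],
    }

    class PhaseModel(nn.Module):
        def __init__(self, hidden: int = 8):
            super().__init__()
            # One small sub-network per phase, each emitting a single phase score.
            self.subnets = nn.ModuleDict({
                phase: nn.Sequential(
                    nn.Linear(len(feats), hidden), nn.ReLU(), nn.Linear(hidden, 1)
                )
                for phase, feats in PHASES.items()
            })
            # The final layer turns five phase scores into three outputs:
            # win logit, predicted margin, and the log of the margin's spread.
            self.head = nn.Linear(len(PHASES), 3)

        def forward(self, inputs):
            # inputs: {phase name: tensor of shape (batch, n_phase_features)}
            phase_scores = torch.cat(
                [self.subnets[phase](inputs[phase]) for phase in PHASES], dim=-1
            )
            win_logit, margin, log_sigma = self.head(phase_scores).unbind(dim=-1)
            return torch.sigmoid(win_logit), margin, log_sigma.exp(), phase_scores
    ```

    The structural point is that each sub-network only ever sees its own phase’s statistics, so a territory edge can’t be averaged away against an accumulation deficit before the final layer weighs them separately.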


    Predicted Margin — What That Actually Means

    The old model told you: “we think Geelong wins, 72%.”

    The new model tells you: “we think Geelong wins by around 18 points, give or take 32.”

    That second form is much more useful. A 72% win probability with a margin of +18 ± 32 describes a game that could genuinely go either way on the day — the model is confident about the direction but honest about the noise. That’s different from a 72% win probability with a margin of +22 ± 14, which describes a game the model thinks Geelong probably controls.

    The uncertainty figure (the ±) comes from the model learning not just who wins but how predictable the match is. Some matchups are structurally lopsided. Others look close on paper but have high variance — weather, a single contested free kick, a freakish goal. The model tries to capture that spread.
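
    One way to build intuition for the ± figure: treat the predicted margin as roughly normal (a simplifying assumption of mine, not necessarily how the model combines its outputs) and ask how often the favourite actually finishes in front.

    ```python
    from scipy.stats import norm

    # P(margin > 0) under a Normal(mean, sigma) reading of "mean ± sigma".
    for mean, sigma in [(18, 32), (18, 14)]:
        print(f"+{mean} ± {sigma}: favourite wins {norm.sf(0, loc=mean, scale=sigma):.0%} of the time")
    ```

    Same predicted margin, very different games: the wider spread drags the favourite back toward a coin flip (roughly 71% versus 90% here).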


    Player Influence

    One thing I wanted to be able to show, and couldn’t with the old model, is which players are actually driving the prediction.

    The Phase Model attributes each team’s phase scores back to individual players, based on how much each player’s rolling statistics contribute to that phase and how sensitive the model is to those statistics right now. The result is a relative influence ranking — you can see which players are moving the needle most in each phase of the game.

    This is an approximation, not a precise measurement. It works best for the four statistical phases (Accumulation, Territory, Chance Creation, Conversion) and is excluded for Momentum, which is a team-level concept. But it gives you something the old model could never produce: a human-readable story about why the model thinks what it thinks.
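
    For the technically curious, here is a sketch of the general idea: the gradient of a phase score with respect to each statistic, multiplied by each player’s share of that statistic. This is my loose reconstruction of “contribution times sensitivity”, not a readout of the site’s code.

    ```python
    import torch

    def player_influence(phase_subnet, team_features, player_shares):
        """Approximate each player's pull on one phase score.

        team_features: tensor (n_feats,) of the team's rolling phase statistics.
        player_shares: {player: tensor (n_feats,)}, each player's share of each statistic.
        """
        x = team_features.clone().requires_grad_(True)
        phase_score = phase_subnet(x.unsqueeze(0)).squeeze()
        phase_score.backward()  # d(phase score) / d(each statistic), at current form
        return {name: float((x.grad * share).sum()) for name, share in player_shares.items()}
    ```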


    Does It Actually Predict Better?

    On historical test data, the new model achieves around 68% accuracy — roughly the same as the old one.

    So why bother?

    Two reasons.

    First, test accuracy on historical data is a noisy measure. Both models were evaluated on matches from 2012 to 2025, where the feature quality is consistently good. The Phase Model’s architecture — processing phases independently — should make it more robust to the messier, partial data you get during a live season, particularly early in the year when rolling averages haven’t settled.

    Second, the Phase Model’s predictions are better calibrated to close games. The old model had a tendency to treat tight matchups as coin flips. The Phase Model produces predicted margins with uncertainty estimates, which means it can express “this looks close, but the structure of the game favours the home team” in a way the old model simply couldn’t.

    Early 2026 results back this up. Through five rounds the Phase Model retrospective accuracy is around 72%, compared to 69% for the logistic model on the same games.


    What Stayed the Same

    The underlying data hasn’t changed. The same rolling player and team statistics that powered the old model power the new one — disposals, fantasy scores, metres gained, score involvements, and so on. The same FootyWire scraping pipeline. The same weekly retrain after each round.

    The historical predictions on the site — every round from 2013 to 2025 — are still the original logistic model’s output. Those were genuine pre-round predictions and I’m not going to retroactively replace them with a model that didn’t exist at the time.

    For the 2026 season, Opening Round through Round 5 also show the original logistic predictions, for the same reason. From Round 6 onwards, everything is the Phase Model.


    The Honest Caveats

    Neural networks are less interpretable than logistic regression. With the old model I could look at the coefficients and tell you exactly which features mattered most. With the Phase Model, the relationship between inputs and outputs is more complex.

    I’ve mitigated this with the phase scores and player influence panels — but those are approximations of the model’s reasoning, not a direct readout of it. There’s a real trade-off between predictive power and interpretability, and I’ve moved slightly in the direction of power.

    The model also still doesn’t know about late team changes, weather, or anything that happens after Thursday night selections. That limitation is unchanged. The Phase Model is smarter about what it knows; it’s still blind to what it doesn’t.


    What This Means for You

    If you’re using the predictions to tip, the main practical change is that you now have more to go on than a single probability. The margin estimate and uncertainty figure tell you how confident the model really is — a 65% prediction with ±15 points uncertainty is a very different bet from a 65% prediction with ±40 points uncertainty.

    The phase breakdown tells you where the model expects each team to have the advantage. If you think the Territory phase is wrong — maybe you know one team’s key forward is injured — that’s a concrete reason to override the model’s prediction rather than just a vague feeling.

    The player influence panel shows you who the model is leaning on. If one of those players is actually listed as a late out, that’s a signal the prediction might shift significantly once lineups are updated.

    All of that is new. None of it was available with the old model.


    One Last Thing

    The old model did its job. It was honest, it was well-calibrated, and for what it was, it worked.

    The new one is a genuine upgrade — not because it’s more complicated, but because football is more complicated than a single weighted average, and the model now reflects that.

    Every round is a new test.

  • Round 4: 5 from 8, Season Sits at 25/36 (69%)

    Round 4 felt about right, which is another way of saying it was frustrating. Five from eight is a competent week, not a good one. The misses were spread across confidence bands, and one of the two high-confidence calls went wrong. The season tally slips to 69%.

    Round 4 Results

    Brisbane Lions 119 def Collingwood 65
    Predicted: Brisbane Lions (58%)

    The model had Brisbane as narrow low-confidence favourites, and they won by 54. Collingwood looked flat — their scoring has been inconsistent all year and Brisbane exposed it. A comfortable result that looks better than the prediction implied. Brisbane are building into genuine form.

    North Melbourne 96 def Carlton 86
    Predicted: North Melbourne (61%)

    North Melbourne again. A 10-point win over Carlton at Marvel Stadium, their third win from four games. The model had them at medium confidence this time, which is a shift from where things stood a few weeks ago — 2026 results are starting to feed in and North’s improved form is showing up. Carlton continue to look brittle when the pressure goes on.

    Fremantle 78 def Adelaide 76
    Predicted: Fremantle (54%)

    A 2-point win on the road at Adelaide Oval. The model had Fremantle at low confidence and they scraped through by the barest margin. This is the kind of result that goes either way — credit to Fremantle for holding on, but don’t read too much into it as a signal about either team’s quality.

    Port Adelaide 90 def Richmond 48
    Predicted: Richmond (61%)

    The biggest miss of the round. The model had Richmond as medium-confidence home favourites at the MCG, and Port Adelaide won by 42. Richmond are in rebuild and the model is perhaps overweighting MCG home advantage for a side with limited quality. Port Adelaide, after their 2-point loss to West Coast in Round 3, bounced back hard. This was the wrong tip, and the margin made it look worse.

    Sydney 163 def West Coast 35
    Predicted: Sydney (63%)

    The model had Sydney at medium confidence. The actual margin — 128 points — was one of the biggest in recent memory. West Coast were uncompetitive from the first quarter. Hard to know what to do with a result like this from a modelling standpoint; the tip was right but the scale of the win suggests something was badly off with West Coast on the day. Sydney look like a serious team.

    Melbourne 109 def Gold Coast 89
    Predicted: Gold Coast (76%)

    The high-confidence miss. The model had Gold Coast at 76% and Melbourne won by 20 at the MCG. Melbourne have now beaten Carlton and Gold Coast in back-to-back weeks — both times the model expected them to lose. Something is happening with Melbourne that the rolling averages haven’t caught yet. Their 2025 finish was poor and that’s dragging the prediction down, but the 2026 version of this team looks different. Worth watching closely.

    Western Bulldogs 99 def Essendon 65
    Predicted: Western Bulldogs (91%)

    The high-confidence hit. Western Bulldogs at home against Essendon at 91% is a big call, and it paid off easily. A 34-point win in a game that was never close. The Bulldogs are the model’s most trusted team right now, and they’re backing it up on the field.

    Hawthorn 92 def Geelong 91
    Predicted: Geelong (51%)

    One point. The model had Geelong at 51% — a complete coin flip — and Hawthorn won by 1. There is nothing more to say about this one. Both calls were equally defensible, and one team kicked a point more. Over a long season these average out. In Round 4, it went against us.


    Season to Date: 25/36 (69%)

    Round           Correct  Total  Accuracy
    Opening Round         3      5       60%
    Round 1               7      9       78%
    Round 2               6      7       86%
    Round 3               4      7       57%
    Round 4               5      8       63%
    Season               25     36       69%

    The season accuracy has drifted from a peak of 76% after Round 2 down to 69% now. The model had a strong start, took a hit in Round 3, and delivered a solid-but-not-great Round 4. Nothing is broken — 69% over 36 games is a respectable baseline — but the early-season optimism has been replaced by something more measured.


    Model Observations

    Melbourne are outperforming expectations. Two straight upsets against teams the model expected to beat them. Melbourne’s 2025 rolling data is a drag on their predicted probability, and it’s increasingly clear that data doesn’t represent what the 2026 team looks like. This will self-correct as more games accumulate, but in the meantime, treat Melbourne tips with some scepticism.

    High-confidence accuracy is under pressure. The model has had three high-confidence calls across the last two rounds: Fremantle in Round 3 (✓), Western Bulldogs in Round 4 (✓), and Gold Coast in Round 4 (✗). One miss from three isn’t alarming, but the Gold Coast miss was large — Melbourne won by 20 — which suggests the model wasn’t just unlucky, it was structurally wrong on that matchup.

    North Melbourne are a real team in 2026. Three wins from four. The model is catching up but still probably underrates them at the margins. Back them at medium confidence until the data says otherwise.

    The 1-point Hawthorn result is noise. Hawthorn-Geelong at 51% is the model saying it doesn’t know. Hawthorn won by 1. File that under the category of results that will go either way half the time and move on.

    Round 5 tips are up. Nine games next week — the model will have Round 4 data feeding in now.

  • Round 3: 4 from 7, and the Model Takes a Hit

    Round 3 was a reality check. After back-to-back solid rounds, the model went 4 from 7 — its worst week of the season. The misses weren’t flukes or high-confidence blowouts; they were medium-confidence calls that fell just the wrong side of the line.

    Round 3 Results

    Geelong 68 def Adelaide 60
    Predicted: Geelong (62%)

    A low-scoring, scrappy game that went with the tip. Geelong won by 8 in a game that could have gone either way — the model had it right, but there wasn’t much to be confident about. Adelaide continue to look inconsistent, and their underlying metrics aren’t convincing.

    Collingwood 87 def Greater Western Sydney 54
    Predicted: Collingwood (62%)

    Comfortable for Collingwood in the end, despite the model only having them at medium confidence. A 33-point win is a stronger result than the tip implied. GWS have now lost two in a row after their Round 2 heartbreaker against St Kilda — their form metrics may be starting to catch up with their actual results.

    Brisbane Lions 113 def St Kilda 80
    Predicted: Brisbane Lions (51%)

    The model barely had an opinion here — 51% is as close to a coin flip as it gets. Brisbane won comfortably by 33, which makes it look more decisive than the prediction suggested. St Kilda played well enough in patches but couldn’t stay with Brisbane in the final quarter.

    Fremantle 103 def Richmond 43
    Predicted: Fremantle (91%)

    The model’s most confident call of the round, and it delivered. Fremantle at home against a Richmond side in rebuild is about as reliable as AFL tipping gets right now. A 60-point margin made it look easy.

    North Melbourne 81 def Essendon 69
    Predicted: Essendon (58%)

    The first miss, and an uncomfortable one. The model had Essendon as low-confidence favourites, but North Melbourne won convincingly by 12. North are now 2-1 after wins over Port Adelaide and Essendon — they’re a better team than last year’s data suggests, and the model is slow to pick that up. This will self-correct as 2026 results accumulate.

    West Coast 92 def Port Adelaide 90
    Predicted: Port Adelaide (66%)

    The cruellest result of the round. Port Adelaide were medium-confidence favourites and lost by 2 points. A margin of 2 is basically a coin flip at the final siren, and the model can’t be blamed for not seeing that coming. West Coast’s ability to grind out close results at home is real, and a 2-point loss is the kind of outcome that happens regardless of model quality. Still stings.

    Melbourne 100 def Carlton 77
    Predicted: Carlton (62%)

    The miss that stings most, because Carlton were at home and the model had them as medium-confidence favourites. Melbourne won by 23 — that’s not a close game. Melbourne’s form has been building quietly, and the model’s Carlton lean may reflect 2025 data more than the current state of either team. Carlton have looked brittle in patches this season, and the model hasn’t fully registered that yet.


    Season to Date: 20/28 (71%)

    Round           Correct  Total  Accuracy
    Opening Round         3      5       60%
    Round 1               7      9       78%
    Round 2               6      7       86%
    Round 3               4      7       57%
    Season               20     28       71%

    The season tally drops from 76% to 71% after Round 3. The trajectory that looked encouraging after Round 2 has flattened. High-confidence accuracy remains strong at 83% — the Fremantle tip was the only high-confidence call this week, and it landed easily. The problem is the medium-confidence band: two from five in that bracket this week, and all three of the round’s misses came from it.


    Model Observations

    Three medium-confidence misses in one round is a problem. The model had Essendon, Port Adelaide, and Carlton all at 58–66% — confident enough to tip, but not dominant. All three lost. That’s not a catastrophic failure — at 62%, the model is effectively saying it will be wrong roughly 4 times in 10 — but losing all three in the same round hurts the weekly tally badly.

    North Melbourne are outperforming their history. This is the clearest structural issue emerging. North Melbourne beat Port Adelaide in Round 1 and Essendon in Round 3. The model is leaning on rolling averages that include a poor 2025 season, and those aren’t representative of what North look like in 2026. The more results that come in, the faster the model will correct — but for now, North are a team to treat with more caution than the model currently does.

    West Coast at home in close games. The Port Adelaide loss was a 2-point result. The model can’t predict margins that tight, and over a long season those will even out. It’s worth noting though that West Coast have now won both their home games this year — and neither was dominant on paper.

    Melbourne’s form is real. A 23-point win over Carlton at home is not a fluke. Melbourne have looked like a different team to the one that ended 2025, and the model’s reliance on 2025 rolling data may be underselling them. Keep an eye on this.

    Round 4 tips are up. Eight games next week, with the model’s updated Round 3 data now feeding in. A better week ahead — hopefully.

  • Round 2: 6 from 7, and the Model is Finding Its Feet

    Round 2 is done, and the model had its best week of the season so far: 6 from 7, with only a tight St Kilda upset at ENGIE Stadium escaping the net.

    Round 2 Results

    Hawthorn 99 def Sydney 82
    Predicted: Hawthorn (54%)

    A low-confidence tip that landed. The model flagged Sydney’s travel disadvantage as the primary factor, and with a 17-point margin, it was right to. Sydney had the stronger underlying metrics — better metres gained, more scoring shots — but couldn’t overcome the cross-country trip. Worth watching how Sydney travel for the rest of the season.

    Western Bulldogs 94 def Adelaide 88
    Predicted: Western Bulldogs (60%)

    A tight game that went with the tip. Adelaide were the home side but the model wasn’t convinced by their form, and the Bulldogs held on by 6. Adelaide’s form metrics have been inconsistent — they’re a team the model isn’t reading with much confidence yet.

    Gold Coast 128 def Richmond 60
    Predicted: Gold Coast (63%)

    Gold Coast are making a habit of big wins. 68 points is an emphatic result, and the model had them as medium-confidence favourites. Richmond look like they’re going to be a consistent source of easy tips in 2026.

    St Kilda 78 def Greater Western Sydney 74
    Predicted: GWS (88%)

    The miss of the round, and worth dwelling on. The model was highly confident here — 88% for GWS — and got it badly wrong. St Kilda won by 4 in what was the model’s most confident incorrect prediction of the season. GWS’s metrics coming in were strong across the board; St Kilda’s were not. A reminder that high confidence doesn’t mean certainty, and that St Kilda under new management may be a harder team to read than last year’s data suggests.

    Fremantle 118 def Melbourne 70
    Predicted: Fremantle (59%)

    Fremantle at Optus Stadium is a reliable proposition, and the model had them as low-confidence favourites. A 48-point win made it look easier than the prediction suggested. Melbourne’s travel disadvantage was a noted factor, and they were well beaten.

    Port Adelaide 133 def Essendon 70
    Predicted: Port Adelaide (66%)

    Port Adelaide are looking like a genuine force. A 63-point demolition of Essendon was the result of a team operating at a different level. The model had them at medium confidence — the margin blew that well away.

    West Coast 111 def North Melbourne 94
    Predicted: West Coast (59%)

    A low-confidence home tip that held. West Coast won by 17, which is more comfortable than the prediction implied. North Melbourne had better form metrics on paper but couldn’t back it up away from home.


    Season to Date: 16/21 (76%)

    Round           Correct  Total  Accuracy
    Opening Round         3      5       60%
    Round 1               7      9       78%
    Round 2               6      7       86%
    Season               16     21       76%

    The trend is encouraging — each round has been an improvement on the last. Opening Round was a rough start (the Gold Coast demolition of Geelong was the standout miss), but the model has found better footing since.


    Model Observations

    A few things worth noting after three rounds.

    Travel is doing real work. The travel disadvantage feature has been a factor in several correct predictions — Sydney to Melbourne, Melbourne to Perth, Brisbane to Sydney. The model assigns a binary penalty for cross-country trips, and it’s holding up. Teams that travel interstate are losing at a higher rate than the model expected coming in, which suggests it may even be underweighting this factor slightly.
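
    To be concrete about how blunt that feature is, it amounts to a single boolean per match. A sketch under my own assumptions about the state mapping (not the site’s code):

    ```python
    # Cross-country travel as a single boolean, roughly as described above.
    HOME_STATE = {
        "Sydney": "NSW", "GWS": "NSW", "Brisbane Lions": "QLD", "Gold Coast": "QLD",
        "West Coast": "WA", "Fremantle": "WA", "Adelaide": "SA", "Port Adelaide": "SA",
    }  # the ten Victorian clubs default to "VIC"

    def travel_penalty(team, venue_state):
        return 1 if HOME_STATE.get(team, "VIC") != venue_state else 0
    ```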

    High-confidence misses are the problem. Both significant misses so far — the Gold Coast/Geelong blow-out in Opening Round and the GWS/St Kilda result in Round 2 — came from predictions where the model was confident. That’s the worst kind of wrong. The model’s calibration at the high-confidence end warrants watching; if it continues to miss badly on 80%+ predictions, that’s a structural issue with how it’s reading dominant teams.

    New-look teams are hard to read. St Kilda, Adelaide, and Richmond are all in transition, and the model is leaning heavily on 2025 rolling averages for those squads. That data may not reflect what these teams actually are in 2026. This is an inherent limitation — the model needs to see the new version of a team before it can properly weight it.

    The model likes home ground. Five of the six correct Round 2 tips were home team wins — the Bulldogs’ win in Adelaide was the exception. The model isn’t blindly picking home teams — it had GWS at home as its biggest miss — but home ground advantage is baked in via venue travel metrics, and it’s generally holding.

    Round 3 tips are up. Seven games, with Fremantle vs Richmond shaping as the most lopsided matchup on the card.

  • Flying Blind: Why This Model Ignores the Betting Markets

    The Dirty Secret of Most Prediction Models

    Here’s something that doesn’t get talked about enough: a lot of “prediction models” you see floating around aren’t really predicting anything. They’re laundering the betting market’s opinion through a spreadsheet and calling it analysis.

    The market opens on Monday morning. Punters pile in. The odds shift. The model ingests those odds as a feature. The model now “predicts” something very close to what the market already said. Everyone goes home feeling clever.

    This model doesn’t do that.


    What the Markets Know (A Lot)

    To be clear: the betting markets are genuinely impressive. They aggregate information from thousands of punters, professional analysts, injury tipsters, and people who probably have a guy inside the club. The line moves fast when a key player gets a late scratch. It adjusts for weather, travel, and the specific psychological state of teams coming off a big win or a demoralising loss.

    Markets are efficient in the way that makes economists happy. They’re not perfect, but they’re rarely stupid.

    So yes — ignoring them has a cost. A model that uses market odds as an input will almost certainly be more accurate than one that doesn’t, all else being equal. The market knows things this model doesn’t. That’s just true, and pretending otherwise would be dishonest.


    So Why Not Just Use Them?

    Because that’s not the question I’m trying to answer.

    If I wanted to know who was most likely to win a given game, I’d look at the odds. Done. No model required. The markets have already done the hard work, pooled vast amounts of information, and distilled it into a probability. It’s probably pretty close.

    What I’m actually interested in is: what does the football tell us? Not what the market thinks, not what the punting public thinks — what do the underlying statistics of how teams have actually been playing suggest about what’s likely to happen next?

    That’s a different question, and it requires a model that’s genuinely blind to market opinion.


    The Philosophy (Bear With Me)

    Think of it this way. You can build a model that predicts temperature using a weather forecast as an input. It will be highly accurate. It will also be completely useless as a scientific instrument, because all it’s doing is repeating someone else’s forecast back at you.

    Or you can build a model that predicts temperature using atmospheric pressure, humidity, historical patterns, and satellite imagery — no forecast allowed. It might be less accurate in the short run. But it’s actually doing something. It’s making independent claims about the world based on underlying signals, not just echoing consensus.

    This model is the second kind. It looks at how teams have been moving the ball, how their players have been performing, who’s travelling interstate, who’s got an inexperienced lineup. It doesn’t know what Tab.com.au thinks about any of it.

    That means when it disagrees with the market, it’s a genuine disagreement — not noise.


    The Advantage: Finding the Market’s Blind Spots

    Because this model forms its opinions independently, it occasionally sees things the market doesn’t weight heavily enough.

    Travel is a good example. The market prices in some interstate disadvantage, but it’s a blunt adjustment. This model calculates actual travel distances and applies a penalty based on historical evidence. Sometimes that surfaces something the odds haven’t fully captured.
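
    “Actual travel distances” can be as simple as the great-circle distance between home cities. The haversine formula below is the standard way to compute that; the coordinates and the Perth-to-Melbourne example are mine.

    ```python
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points on Earth, in kilometres."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    print(haversine_km(-31.95, 115.86, -37.81, 144.96))  # Perth to Melbourne: ~2,700 km
    ```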

    Player experience is another. When a team’s lineup is unusually young — a lot of players under 50 games — the model applies an inexperience penalty based on rolling individual game counts. Markets probably notice the obvious cases (a team suddenly fielding four rookies), but the subtler version of this effect tends to get washed out in the noise.
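
    As a sketch of what that penalty might look like (the 50-game threshold comes from the paragraph above; the linear shape and the weight are my guesses):

    ```python
    def inexperience_penalty(career_games, threshold=50, weight=0.1):
        """Penalty scales with the share of the selected side under `threshold` career games."""
        share = sum(g < threshold for g in career_games) / len(career_games)
        return weight * share  # subtracted from the team's rating

    # A named 22 fielding six players under 50 games:
    print(inexperience_penalty([12, 30, 45, 8, 21, 49] + [100] * 16))  # ~0.027
    ```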

    The model’s opinion is formed entirely from what happened on the field, over the last handful of weeks. If the market is pricing something else — reputation, interstate crowd size, the narrative around a particular coach — then the model is ignoring all of that. Which is either a bug or a feature depending on what the market has gotten wrong lately.


    The Honest Part

    This approach comes with real costs, and it would be silly to pretend otherwise.

    Late team changes — particularly the Thursday night selections that get announced after the TAB has already adjusted — mean the model is sometimes working with outdated lineup information. The market has already moved. This model is still thinking about last week’s 22. Run make lineups and make predict after selection Thursday and that narrows the gap, but it doesn’t close it entirely.

    There’s also a deeper issue: some things that predict football outcomes aren’t cleanly measurable in statistics. Team morale. Player confidence. A coach who’s about to get sacked. The market absorbs some of that. This model, emphatically, does not.

    So when the model and the market disagree, there are two possibilities: either the model has found something the market missed, or the market knows something the model doesn’t. Both happen. The trick is figuring out which.


    The Bottom Line

    Most tipping models are glorified odds converters. This one is trying to do something different — build an independent view of what the football says, without peeking at the consensus.

    That makes it less accurate in absolute terms than a model that just launders market opinion. It also makes it more interesting. When it’s right despite disagreeing with the market, that’s a genuine signal. When it’s wrong, at least you know it failed on its own terms, not because it was following someone else’s homework.

    Flying blind has costs. But it also keeps the lights on.

  • Under the Hood: Model Improvements for 2026

    A Better Picture of Form

    Going into 2026, I’ve made the most significant changes to the prediction model since the site launched. The short version: the model now has a better sense of how teams have been playing, not just whether they’ve been winning.

    Previously, the main form signals came from things like recent win rates, scoring margins, and player quality (measured through fantasy scoring). These are solid indicators, but they miss something. A team can win a few scrappy games and look fine on paper while actually playing pretty poor football — and vice versa. A team can lose a close game while dominating possession and territory.

    The new version of the model picks up on some of those subtler signals.


    What’s New

    How far teams move the ball. One of the new inputs tracks how much territory a team’s players gain with the ball in hand — not just whether they’re taking possessions, but whether those possessions are actually advancing the team forward. A team that consistently pushes the play into attacking positions tends to create more scoring opportunities, and the model now accounts for that.

    How often teams are involved in scores. This measures how many players in a team’s lineup are regularly contributing to scoring chains — the sequences of kicks and handballs that lead directly to a goal or behind. It’s a sign of a team playing connected, structured football rather than relying on individuals to do everything.

    Quality of ball use. Two sides of the same coin: how often players use the ball effectively, and how often they give it straight back to the opposition through poor decisions. Raw disposal counts have always been in the model, but this new layer separates the clean, purposeful ball use from the messy stuff.


    Why It Matters

    All four of these measures are calculated as rolling averages across each team’s last six games, then compared against the opposing team. A large gap in any of these areas tends to be a meaningful predictor of the result — more so than the raw score from last week.
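
    Mechanically, that computation looks something like the pandas sketch below. The six-game window comes from the paragraph above; the column names and data layout are placeholders of mine.

    ```python
    import pandas as pd

    def rolling_form(games, feature, window=6):
        """Each team's average of `feature` over its previous `window` games.

        `games` is a DataFrame with one row per team per match and "date" and
        "team" columns; shift(1) keeps the current match out of its own average.
        """
        return (games.sort_values("date")
                     .groupby("team")[feature]
                     .transform(lambda s: s.shift(1).rolling(window, min_periods=1).mean()))

    # The model input for a match is then the gap between the two sides, e.g.
    # the home team's rolling metres gained minus the away team's.
    ```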

    The model still uses all the same signals it always has (recent wins and margins, player experience, travel disadvantage, scoring shot rates). The new features sit alongside them, giving a fuller picture.


    A Note on the Numbers

    Test accuracy on historical data came in at 69.4% with the new model — a solid improvement over previous versions. That figure comes from games the model had never seen during training, so it’s a genuine out-of-sample measure rather than a self-congratulatory one.

    For the top 30% of predictions where the model is most confident, historical accuracy is considerably higher. Those are the picks worth paying most attention to.
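
    If you keep the prediction history, that figure is simple to reproduce: rank tips by how far they sit from a coin flip and score the most confident slice. A sketch (names illustrative):

    ```python
    import numpy as np

    def top_confidence_accuracy(pred_probs, correct, top_frac=0.30):
        """Accuracy over the `top_frac` of tips furthest from 50%."""
        pred_probs = np.asarray(pred_probs)
        correct = np.asarray(correct, dtype=float)
        confidence = np.abs(pred_probs - 0.5)
        cutoff = np.quantile(confidence, 1 - top_frac)
        return correct[confidence >= cutoff].mean()
    ```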


    Round 1 predictions are coming. The new model has Opening Round data to work with, so it’s already picking up on how teams looked in the first week back.