Hackathon reporting dashboard
This page is the warehouse view for the hackathon. It either shows the modeled BigQuery story or, when the export is empty, the exact evidence for why the warehouse is still blank.
About this page
Generated 10 May 2026, 07:01 UTC
Live mode is reading directly from the dedicated hackathon_reporting dataset in BigQuery, scoped to 27 March 2026.
Metric and field definitions
Plain-English definitions for every warehouse metric and field used on this page.
Warehouse rows (derived)
Total landed rows across the dedicated hackathon_reporting warehouse tables.
How to read it
If this is zero, the BigQuery dashboard should be treated as a warehouse-status page, not an analytics story.
Warehouse tables with data (derived)
How many modeled warehouse tables currently contain data.
How to read it
This is the quickest way to tell whether any part of the reporting model has started landing.
Raw GA4 export tables (derived)
How many raw GA4 export tables currently exist in the analytics_498363924 dataset.
How to read it
If this is zero, the break is upstream of modeling and the raw export has not landed at all.
Daily BigQuery export (derived)
Whether the live GA4 BigQuery link has daily export enabled.
How to read it
Enabled here but zero raw tables means the issue is not just a dashboard-query bug.
Streaming BigQuery export (derived)
Whether the live GA4 BigQuery link has streaming export enabled.
How to read it
Enabled here but zero raw tables means intraday export is configured yet still not landing.
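The export-health readings above combine into a simple diagnosis. A minimal sketch of that logic, assuming plain count and flag inputs (the function and parameter names are illustrative, not the warehouse schema):

```python
def diagnose_export(raw_tables: int, warehouse_rows: int,
                    daily_enabled: bool, streaming_enabled: bool) -> str:
    """Classify where the pipeline break sits, per the readings above."""
    if raw_tables == 0 and (daily_enabled or streaming_enabled):
        # Export is configured but nothing landed: the break is upstream
        # of modeling, not just a dashboard-query bug.
        return "upstream: GA4 export enabled but no raw tables landed"
    if raw_tables == 0:
        return "upstream: GA4 BigQuery export not enabled"
    if warehouse_rows == 0:
        # Raw export landed but modeled tables are empty: break is in modeling.
        return "modeling: raw tables exist but warehouse is empty"
    return "healthy: raw export and modeled tables both contain data"
```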
More derived metrics (9)
Modeled events (derived)
All modeled analytics events returned for the dashboard reporting window.
How to read it
Use this as BigQuery-modeled event volume, not as a vote total.
Modeled users (derived)
Distinct modeled users returned for the dashboard reporting window.
How to read it
Use this as the modeled audience size for the event day.
Recorded votes (derived)
Votes saved by the live voting app, independent of analytics consent.
How to read it
Use this as the official vote ledger alongside the warehouse model.
Successful judge sign-ins (derived)
Successful judge authentication events in the modeled data.
How to read it
Use this to understand modeled access completion volume on the event day.
Sign-in failures (derived)
Modeled authentication failures in the same event-day window.
How to read it
Use this as a friction signal when sign-in appears to be underperforming.
Vote modal opens (derived)
Modeled vote-modal views for the event day.
How to read it
This measures demand for the scoring surface before submits are considered.
Total score (derived)
Summed score mass accumulated by an entry or the full event in the modeled dataset.
How to read it
Use this to compare ranking strength across entries on the event day.
Average vote score (derived)
Average score value recorded for an entry in the modeled dataset.
How to read it
Use this to compare score quality independently from vote volume.
Vote conversion rate (derived)
Submitted votes divided by eligible dialog views for the entry.
How to read it
This is the modeled conversion rate for the vote surface.
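The ratio above needs a guard for entries with no eligible views. A minimal sketch, assuming simple counts as inputs (the function name is illustrative):

```python
def vote_conversion_rate(submitted_votes: int, eligible_dialog_views: int) -> float:
    """Submitted votes divided by eligible dialog views, guarded for zero views."""
    if eligible_dialog_views == 0:
        # No eligible views means no measurable conversion, not an error.
        return 0.0
    return submitted_votes / eligible_dialog_views
```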
App section (dimension)
Which part of the app emitted the event.
Typical values or units
scoreboard, vote_dialog, judge_auth, manager_controls, consent_banner
How to read it
Use it to tell whether activity came from browsing, judging, sign-in, or organiser controls.
Judging state (dimension)
The judging phase the app was in when the event fired.
Typical values or units
preparing, open, finalized
How to read it
Use it to separate setup activity, live judging, and post-results viewing.
Viewer role (dimension)
The kind of visitor the app believed it was serving when the event fired.
Typical values or units
public, judge, manager
How to read it
Use it to separate public traffic from judges and the organiser account.
Entry slug (dimension)
Stable identifier for a hackathon project.
Typical values or units
north-star, signalforge, civic-mesh
How to read it
Use it to join related rows for the same project even when labels vary slightly.
Entry name (dimension)
Human-readable project title from the workbook.
Typical values or units
North Star, SignalForge, CivicMesh
How to read it
Use it for labels that non-technical readers will recognise immediately.
More schema fields (11)
Upload method (dimension)
How the organiser chose the workbook file.
Typical values or units
drag_drop, file_picker
How to read it
Use it to see whether people relied on drag-and-drop or the file picker.
Workbook file type (dimension)
The file type submitted by the organiser.
Typical values or units
xlsx
How to read it
Use it to confirm uploads are coming from the expected workbook format.
Viewer eligible to vote (dimension)
Whether the signed-in viewer was allowed to vote on the project tied to the event.
Typical values or units
true, false
How to read it
Use it to separate real scoring opportunities from blocked states.
Viewer already voted (dimension)
Whether the viewer had already submitted their locked score for the project in focus.
Typical values or units
true, false
How to read it
Use it to distinguish fresh vote opportunities from already-finished judging.
Entry open for voting (dimension)
Whether a specific project was open to new votes when the event fired.
Typical values or units
true, false
How to read it
Use it to explain paused judging, blocked attempts, and organiser intervention.
Consent source (dimension)
Which UI route or control produced the consent change.
Typical values or units
default, banner_accept, banner_decline, preferences
How to read it
Use it to understand where people actually made or changed their consent choice.
Upload issue count (metric)
Validation issues found in a workbook upload attempt.
Typical values or units
count
How to read it
Use it to spot workbook quality problems quickly.
Imported project count (metric)
The number of projects accepted from a workbook upload.
Typical values or units
count
How to read it
Use it to confirm import success and compare clean uploads with messy ones.
Vote count snapshot (metric)
How many votes were represented by the event or snapshot row.
Typical values or units
count
How to read it
Treat this as a snapshot field for charts, not as the final vote ledger.
Total score (metric)
The summed score recorded for a project.
Typical values or units
score points
How to read it
Use it for leaderboard, trend, and project-comparison charts.
Vote score (metric)
The single judge-selected score on a 0 to 10 scale.
Typical values or units
score points
How to read it
Use it for score distributions, averages, and outlier analysis.
What this page includes
Fresh scope notes, data boundaries, and any proven caveats for this reporting surface.
- Live mode is reading directly from the dedicated hackathon_reporting dataset in BigQuery, scoped to 27 March 2026.
- Warehouse reconciliation: 1080 rows are currently landed across 8 modeled tables, while the raw export dataset analytics_498363924 has 74 landed tables.
- This route never reads the main rajeevg.com page analytics tables, which avoids the mixed-data problem from the old Looker shell.
- Source of truth: the hackathon snapshot reports 9 recorded votes across 9 entries and 1 judge at https://vote.rajeevg.com/api/reporting/public-summary.
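The source-of-truth bullet above implies a reconciliation step between the modeled vote count and the live app's ledger. A small sketch of that check, assuming the two counts are already fetched (the function and parameter names are made up for the example, not the API payload):

```python
def vote_ledger_check(modeled_vote_submits: int, recorded_votes: int) -> str:
    """Compare modeled vote submits against the live app's vote ledger."""
    if modeled_vote_submits == recorded_votes:
        return "in sync"
    # Analytics consent can suppress modeled events, so the ledger may
    # legitimately exceed the modeled count; the ledger stays authoritative.
    return (f"mismatch: model shows {modeled_vote_submits}, "
            f"ledger has {recorded_votes}")
```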
445: All analytics events returned by the warehouse model for the live event day.
9: Distinct users returned by the warehouse model for the same reporting window.
9: Source-of-truth votes from the live voting app snapshot.
0: Judge sign-ins observed in the warehouse model for the live event day.
55: Total score mass accumulated across all modeled votes in the reporting window.
Pulse
Daily volume
Modeled event-day activity from BigQuery, focused on the core judging story.
Daily momentum
Modeled users and recorded vote submissions over the event-day window.
Funnel
Judge access and vote flow
This section answers the main operational question: did judges get in cleanly, open voting, and complete submissions?
Voting funnel
From auth to submitted vote, using the dedicated voting funnel table rather than generic GA conversion events.
Auth mix
Passwordless and Google sign-ins split by method.
Chart values (exported without labels): 0, 0, 15, 9
Entries
Entry performance
Project-by-project performance, combining leaderboard strength with how reliably views turned into votes.
Leaderboard by total score
Total score accumulated across the modeled event-day window.
Vote conversion by entry
Bubble size tracks recorded votes, the x-axis shows average vote score, and the y-axis shows how reliably an eligible modal view became a vote.
Current top entry readout
The leading project right now, with the exact metrics most likely to come up in a retrospective.
Entry: Trafficker on Steriods
Values as exported: 10, 10, 0%
Taxonomy
Tracked event mix
Event vocabulary grouped by viewer role and judging state, so you can see where the modeled activity is concentrated.