Hackathon reporting surface
Hackathon reporting dashboard
The shell stays fixed while you switch between BigQuery modeling and direct GA4 property reporting, so it is easier to compare the same hackathon story without reorienting yourself every time.
Reporting source
Generated 25 Mar 2026, 20:03 UTC
BigQuery modeled tables are still empty, so this live route is temporarily rendered from the shared GA4 property over the last 30 days, including today.
Event count
1,716
Total hackathon analytics events returned in the current reporting window.
Users
43
Distinct users observed on the hackathon reporting surface in the same window.
Persisted votes
297
Authoritative vote rows from the live voting app snapshot that powers the public scoreboard.
Tracked submits
20
GA4 vote_submitted events captured as analytics telemetry for the same window.
GA4 coverage
6.7%
Tracked submits divided by the authoritative persisted vote total.
Manager actions
22
Uploads, round controls, and entry state operations recorded for the manager.
Source reconciliation
Fresh reporting boundaries and discrepancies for this surface.
- BigQuery modeled tables are still empty, so this live route is temporarily rendered from the shared GA4 property over the last 30 days, including today.
- Warehouse reconciliation: 0 rows have landed across the 8 modeled tables, and the raw export dataset ga4_498363924 currently has 0 landed tables.
- This keeps the analytics route truthful and populated while the BigQuery-side pipeline catches up.
- Source of truth: the live voting app currently reports 297 persisted votes across 9 entries and 37 judges at https://vote.rajeevg.com/api/reporting/public-summary.
- Tracked analytics coverage: GA4 currently shows 20 vote_submitted events, a gap of 277 versus the persisted vote total (6.7% coverage).
- The gap is expected because vote_submitted is client-side telemetry behind analytics consent, while the persisted vote total comes from the live competition snapshot that powers the public scoreboard.
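The coverage arithmetic in the bullets above can be reproduced directly. This is a sketch using the figures quoted in this report, not a live call to either source:

```python
# Figures quoted in this report (not fetched live).
persisted_votes = 297   # live voting app snapshot (source of truth)
tracked_submits = 20    # GA4 vote_submitted events in the window

gap = persisted_votes - tracked_submits
coverage = tracked_submits / persisted_votes

print(gap)                # 277
print(f"{coverage:.1%}")  # 6.7%
```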
Pulse
Round pulse and volume
This combines the scoreboard-sized story with the traffic-shaped story, so you can tell whether usage, votes, and manager interventions rose together.
Daily momentum
Unique users, submitted votes, and manager actions over the reporting window.
Latest round state
The freshest denominator snapshot from the dedicated round snapshot table.
Status
finalized
Entries
4
Open entries
0
Judges in denominator
0
Remaining votes
0
This is the on-the-day manager number to trust before closing individual entries or finalizing the round.
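The report does not spell out how the remaining-votes number is derived. A plausible sketch, assuming remaining votes = open entries × participating judges minus votes already cast on open entries (the function and its parameters are hypothetical; the dedicated snapshot table is the real source):

```python
def remaining_votes(open_entries: int, participating_judges: int,
                    votes_cast_on_open: int) -> int:
    """Outstanding vote obligations across open entries for participating
    judges. Hypothetical formula; the snapshot table is authoritative."""
    return open_entries * participating_judges - votes_cast_on_open

# In the finalized state shown above, every term is zero:
print(remaining_votes(0, 0, 0))  # 0
```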
Funnel
Voting funnel and judge access
This section answers the first real operational question people ask: did judges get in cleanly, open the modal, and actually finish their votes?
Voting funnel
From auth to submitted vote, using the dedicated voting funnel table rather than generic GA conversion events.
Auth mix
Passwordless and Google auth completions split by method, ready for when live rows start landing.
Auth completions
19
Auth failures
0
Dialog views
48
Tracked submits
20
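The step-through rates implied by the funnel counts above can be computed as a quick sketch. Note that the steps are not strictly nested (dialog views exceed auth completions here, presumably because opening the dialog is not gated on auth; that is an assumption), so treat the first ratio as context rather than a conversion rate:

```python
# Funnel figures restated from the cards above.
funnel = [
    ("auth_completions", 19),
    ("dialog_views", 48),
    ("tracked_submits", 20),
]

# Ratio of each step to the one before it.
for (prev_name, prev), (name, count) in zip(funnel, funnel[1:]):
    rate = count / prev if prev else 0.0
    print(f"{prev_name} -> {name}: {rate:.0%}")
```

The dialog-to-submit ratio (about 42%) is the figure closest to a true completion rate.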
Entries
Entry analysis
Project-by-project performance needs both ranking and friction context, so this section pairs the scoreboard story with conversion quality.
Leaderboard by aggregate score
Summed score mass across the reporting window, taken from entry performance rather than the public scoreboard UI.
Conversion quality by entry
Bubble size tracks submitted votes, the x-axis shows average score, and the y-axis shows how reliably an eligible modal view became a vote.
Current top entry readout
The leading project right now, with the exact metrics most likely to come up in a retrospective.
Entry
Taxo guard
Aggregate score
14
Average score
3.5
View-to-vote rate
0%
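As a sanity check on the readout above: if the aggregate score is the plain sum of submitted scores (an assumption consistent with "summed scoreboard score" in the taxonomy), the implied vote count for the top entry falls out of the two figures:

```python
# Readout figures for the current top entry.
aggregate_score = 14
average_score = 3.5

# Assuming aggregate = sum of submitted scores, the implied vote count is:
implied_votes = aggregate_score / average_score
print(implied_votes)  # 4.0
```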
Manager
Manager operations
This is the operational trust layer: upload behavior, entry open-close activity, and the few actions that can change the state of the event.
Round-control activity
Uploads, entry state changes, round starts, and finalizations across the reporting window.
Operations digest
The fastest way to answer “did the control surface behave the way we expected?”
Projects imported
8
Entries opened
0
Entries closed
0
Finalizations
1
Resets
2
Workbook issues
0
Experience
Experience, devices, and board behavior
The reporting shell should answer not just whether votes happened, but how the interface behaved across device classes, themes, and table-versus-chart board usage.
Engagement heatmap
Average engaged seconds by viewport category and preferred board view.
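A minimal sketch of the aggregation behind this heatmap, using hypothetical session rows and field names (the real pipeline reads from the GA4 property, not in-memory dicts):

```python
from collections import defaultdict

# Hypothetical per-session rows; field names are assumptions.
sessions = [
    {"viewport": "desktop", "board_view": "table", "engaged_s": 120},
    {"viewport": "desktop", "board_view": "chart", "engaged_s": 45},
    {"viewport": "mobile",  "board_view": "table", "engaged_s": 60},
    {"viewport": "desktop", "board_view": "table", "engaged_s": 90},
]

# Accumulate (sum, count) per (viewport, board_view) cell,
# then divide for the average engaged seconds.
totals = defaultdict(lambda: [0, 0])
for s in sessions:
    cell = totals[(s["viewport"], s["board_view"])]
    cell[0] += s["engaged_s"]
    cell[1] += 1

heatmap = {k: total / n for k, (total, n) in totals.items()}
print(heatmap[("desktop", "table")])  # 105.0
```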
Board-view behavior
Whether people stayed in table mode or deliberately explored the chart renderer.
table
108 users
Table switches: 33
Chart switches: 0
Taxonomy
Event taxonomy and promoted schema
This is the operator-facing reference layer: what the event vocabulary looks like, how it groups by role and round state, and what each promoted dimension or metric actually means.
Event taxonomy
Grouped by viewer role and competition status so you can see whether the event vocabulary is balanced or manager-heavy.
Event source
dimension: Which surface in the app emitted the event.
Typical values or units
scoreboard, vote_dialog, judge_auth, manager_controls, consent_banner
How to read it
Use it to separate passive viewing from judging and manager operations.
Competition status
dimension: The judging lifecycle state at the time of the event.
Typical values or units
preparing, open, finalized
How to read it
Use it to split pre-round activity, live judging behavior, and finalized viewing.
Viewer role
dimension: What kind of user the app considers the visitor to be at that moment.
Typical values or units
public, judge, manager
How to read it
Use it to distinguish observer traffic from judges and the single manager.
Entry slug
dimension: Stable identifier for a hackathon project.
Typical values or units
north-star, signalforge, civic-mesh
How to read it
Use it for joins, chart grouping, and project-level drill-downs.
Entry name
dimension: Human-readable project title from the workbook.
Typical values or units
North Star, SignalForge, CivicMesh
How to read it
Use it for report labels and stakeholder-facing visualizations.
Upload method
dimension: How the workbook import was initiated.
Typical values or units
drag_drop, file_picker
How to read it
Use it to see which import affordance the manager actually relied on.
Workbook extension
dimension: File extension submitted by the manager.
Typical values or units
xlsx
How to read it
Use it to verify that imports are coming from the intended template format.
Viewer can vote
dimension: Whether the signed-in viewer was eligible to score the entry tied to the event.
Typical values or units
true, false
How to read it
Use it to separate legitimate scoring opportunities from blocked states.
Viewer has vote
dimension: Whether the viewer had already submitted a score for the entry in focus.
Typical values or units
true, false
How to read it
Use it to distinguish fresh voting opportunities from already-completed scoring.
Entry voting open
dimension: Whether a specific entry is currently open for votes.
Typical values or units
true, false
How to read it
Use it to explain pauses, blocked attempts, and manager intervention patterns.
Consent source
dimension: Which UI control changed the analytics consent state.
Typical values or units
default, banner_accept, banner_decline, preferences
How to read it
Use it to understand where visitors actually made their consent choice.
Entry count
metric: Number of projects loaded into the scoreboard snapshot.
Typical values or units
count
How to read it
Use it to validate import completeness and board scale.
Open entry count
metric: Number of projects currently open for judging.
Typical values or units
count
How to read it
Use it to see how much of the field is live at any moment.
Participating judge count
metric: Number of judges who have started scoring and are in the round denominator.
Typical values or units
count
How to read it
Use it for judging participation rather than just total sign-ins.
Total remaining votes
metric: Outstanding vote obligations across open entries for participating judges.
Typical values or units
count
How to read it
Use it as the manager’s core readiness signal before closing an entry or finalizing.
Issue count
metric: Validation issues found in a workbook upload attempt.
Typical values or units
count
How to read it
Use it to spot workbook hygiene problems quickly.
Imported project count
metric: Number of projects accepted from an upload.
Typical values or units
count
How to read it
Use it to confirm import success and compare against issue-heavy attempts.
Vote count
metric: How many votes are represented by the event or snapshot.
Typical values or units
count
How to read it
Use it for throughput and judging accumulation charts.
Aggregate score
metric: Summed scoreboard score for a project.
Typical values or units
score points
How to read it
Use it for leaderboard, trend, and project comparison visuals.
Score
metric: The single judge-selected score on a 0–10 scale.
Typical values or units
score points
How to read it
Use it for distribution, variance, and outlier analysis.
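The promoted schema above can be summarized as a single event shape. This dataclass is illustrative only: field names mirror the taxonomy labels, but the types and optionality are assumptions, not the dashboard's real schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HackathonEvent:
    """Illustrative shape of one analytics event under the promoted schema."""
    event_source: str           # scoreboard, vote_dialog, judge_auth, ...
    competition_status: str     # preparing, open, finalized
    viewer_role: str            # public, judge, manager
    entry_slug: Optional[str] = None       # stable project identifier
    viewer_can_vote: Optional[bool] = None # eligibility at event time
    score: Optional[int] = None            # 0-10, only on vote events

ev = HackathonEvent("vote_dialog", "open", "judge",
                    entry_slug="north-star", viewer_can_vote=True, score=8)
print(ev.viewer_role)  # judge
```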