Hypertext Rails

Performance Metrics Dashboard System Documentation

Table of Contents

  1. Overview
  2. System Architecture
  3. Data Flow
  4. Database Integration
  5. Metrics Types
  6. API Documentation
  7. Frontend Implementation
  8. Tooltip Enhancement
  9. Time Range Management
  10. Incident Management
  11. Performance & Scalability
  12. Security & Authentication
  13. Deployment & Configuration
  14. Troubleshooting

Overview

The Performance Metrics Dashboard is a comprehensive analytics system designed to track and visualize key performance indicators (KPIs) for projects. The system integrates with external analytics databases to provide real-time insights into morale, action closure rates, and safety incidents.

Key Features

  • Real-time data visualization with interactive charts
  • Multi-time range analysis (30, 60, 90 days)
  • Enhanced tooltips with detailed mood breakdowns
  • Incident tracking and management with CSV import/export
  • Week offset analysis for trend comparison
  • Dynamic metric selection with real-time chart updates
  • Database integration with external analytics systems
  • Token-based authentication for security
  • Responsive design with modern UI/UX
  • Executive reporting with pyramid-style summaries

Supported Metrics

  1. Morale Score - Daily average score (70-100 range)
  2. Negative Morale - Inverse morale tracking (0-30 range)
  3. Action Closure Rate - Corrective actions completion percentage
  4. Incidents - Safety incident tracking with detailed categorization

System Architecture

┌─────────────┐    ┌──────────────┐    ┌─────────────┐
│   Analytics │    │  Performance │    │   Frontend  │
│   Database  │    │   Metrics    │    │   Dashboard │
├─────────────┤    ├──────────────┤    ├─────────────┤
│ morale_data │◄───┤ Controller   │───▶│ Chart.js    │
│ checkin_    │    │              │    │ Visualiza-  │
│ submissions │    │ Data         │    │ tion        │
│ incidents   │    │ Processing   │    │             │
└─────────────┘    └──────────────┘    └─────────────┘
       │                    │                    │
       │                    │                    │
       ▼                    ▼                    ▼
   Raw Data            Aggregated          Interactive
   Storage             Metrics             Charts

Component Overview

Backend Components:

  • Controller: PerformanceMetricsDashboardController - main request handler
  • Models: Project, DbCredential, Incident, ProjectIncident
  • Database: External analytics database integration
  • Authentication: Token-based security system

Frontend Components:

  • JavaScript: PerformanceMetricsDashboard class with Chart.js integration
  • Views: ERB templates with responsive design
  • Charts: Interactive line charts with enhanced tooltips
  • UI Controls: Metric selection, time range switching, incident management

Data Flow

1. Request Processing Flow

Main Dashboard Request: User Request → Controller → Database Query → Data Processing → JSON Response → Frontend Rendering

Step-by-Step Process:

  1. Request Validation

    • Extract project_id, time_range, metrics parameters
    • Validate project has analytics database credentials
    • Set default values for missing parameters
  2. Database Connection

    • Load DbCredential for selected project
    • Establish connection to external analytics database
    • Validate database server configuration
  3. Data Loading (All Time Ranges)

    • Load data for 30, 60, and 90-day ranges simultaneously
    • Query morale_data table for morale metrics
    • Query checkin_submissions table for action closure rates
    • Load incident data from local database
  4. Data Processing

    • Apply week offsets for trend analysis
    • Filter data by date ranges
    • Calculate percentages and aggregations
    • Format data for frontend consumption
  5. Response Generation

    • Generate JSON data structure with all time ranges
    • Include detailed mood breakdown data for tooltips
    • Return structured response to frontend
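
The five steps above can be condensed into a controller sketch. This is a minimal illustration rather than the actual implementation; it reuses only helper names documented elsewhere in this file (load_database_credentials, create_analytics_connection, load_all_time_ranges_data, cleanup_analytics_connection).

```ruby
# Minimal sketch of the request flow described in steps 1-5.
class PerformanceMetricsDashboardController < ApplicationController
  def dashboard
    @project    = Project.find(params[:project_id])   # step 1: validate project
    @time_range = params.fetch(:time_range, '30')     # step 1: default values
    @metrics    = params[:metrics].to_s.split(',')

    load_database_credentials                          # step 2: DbCredential lookup
    create_analytics_connection
    load_all_time_ranges_data                          # steps 3-4: query and process

    respond_to do |format|                             # step 5: structured response
      format.html
      format.json { render json: @all_time_ranges_data }
    end
  ensure
    cleanup_analytics_connection
  end
end
```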

2. Real-time Chart Updates

Metric Toggle Process: User Interaction → JavaScript Event → Chart Update → Legend Update → Visual Refresh

Time Range Switch Process: User Selection → Data Switch → Chart Re-render → Button State Update

Database Integration

Analytics Database Schema

Morale Data Table:

```sql
CREATE TABLE morale_data (
  id INTEGER PRIMARY KEY,
  project_id INTEGER,
  total_submissions INTEGER,
  average_mood FLOAT,
  energised_count INTEGER,
  relaxed_count INTEGER,
  indifferent_count INTEGER,
  uncertain_count INTEGER,
  anxious_count INTEGER,
  morale INTEGER,
  created_at TIMESTAMP
);
```

Checkin Submissions Table:

```sql
CREATE TABLE checkin_submissions (
  id INTEGER PRIMARY KEY,
  project_name TEXT,
  kiosk_name TEXT,
  grouping_name TEXT,
  score INTEGER,
  score_feeling TEXT,
  created_at TIMESTAMP
);
```
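
As a rough illustration of how these tables are read, a daily morale query via Sequel (the library shown under Connection Strategy below) might look like the following; the project scoping and date window are assumptions, not the exact production query:

```ruby
require 'sequel'

# Illustrative only: pull morale rows for one project over the last N days.
def morale_rows(db, project_id, days)
  window = (Date.today - days)..Date.today
  db[:morale_data]
    .where(project_id: project_id, created_at: window)
    .order(:created_at)
    .all
end
```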

Local Database Schema (Main Application)

The system also uses local database tables for incident tracking and project associations:

Incidents Table:

```sql
CREATE TABLE incidents (
  id INTEGER PRIMARY KEY,
  incident_date DATE NOT NULL,
  incident_type VARCHAR NOT NULL,
  description TEXT,
  count INTEGER DEFAULT 1,
  location VARCHAR,
  notes TEXT,
  created_at TIMESTAMP NOT NULL,
  updated_at TIMESTAMP NOT NULL
);

CREATE INDEX index_incidents_on_incident_date ON incidents(incident_date);
CREATE INDEX index_incidents_on_incident_date_and_incident_type ON incidents(incident_date, incident_type);
```

Project Incidents Table (Join Table):

```sql
CREATE TABLE project_incidents (
  id INTEGER PRIMARY KEY,
  project_id INTEGER NOT NULL,
  incident_id INTEGER NOT NULL,
  created_at TIMESTAMP NOT NULL,
  updated_at TIMESTAMP NOT NULL,
  FOREIGN KEY (incident_id) REFERENCES incidents(id)
);

CREATE UNIQUE INDEX index_project_incidents_on_project_id_and_incident_id
  ON project_incidents(project_id, incident_id);
```

Incident Data Flow:

  1. CSV Upload: User uploads CSV with incident data
  2. Bulk Insert: Incidents created via Incident.insert_all!
  3. Association: ProjectIncidents created to link incidents to projects
  4. Data Source Selection: Controller checks project.has_incidents? to decide between real or sample data (see the sketch below)
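
A minimal sketch of the data source selection in step 4, assuming a hypothetical build_sample_incidents helper for the fallback:

```ruby
# Use real incident data when the project has any; otherwise fall back to
# sample data. build_sample_incidents is an illustrative placeholder.
def incident_source(project)
  if project.has_incidents?
    project.incidents.order(:incident_date)
  else
    build_sample_incidents
  end
end
```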

Database Connection Management

Connection Strategy:

```ruby
def create_analytics_connection
  db_server = @db_credential.db_server
  @analytics_connection = Sequel.connect(
    adapter: 'postgres',
    host: db_server.host,
    user: db_server.username,
    password: db_server.password,
    database: @db_credential.db_name
  )
end
```

Connection Lifecycle:

  1. Establish: Create connection at request start
  2. Reuse: Use a single connection for all queries in the request
  3. Cleanup: Disconnect after all data is loaded
  4. Error Handling: Graceful fallback to sample data (see the sketch below)
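
One way to express that lifecycle is a request-scoped wrapper. This is a sketch only, and assumes a load_sample_data helper for the fallback step:

```ruby
# Sketch of the connection lifecycle: establish once, reuse, clean up,
# and fall back to sample data on failure.
def with_analytics_data
  create_analytics_connection        # 1. establish at request start
  yield @analytics_connection        # 2. reuse for every query in the request
rescue Sequel::DatabaseError => e
  Rails.logger.warn("Analytics DB unavailable: #{e.message}")
  load_sample_data                   # 4. graceful fallback (illustrative helper)
ensure
  @analytics_connection&.disconnect  # 3. cleanup after data is loaded
end
```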

Metrics Types

1. Morale Score (70-100 Range)

  • Purpose: Track positive morale indicators
  • Data Source: morale_data table
  • Calculation: Filter records with morale >= 70 and <= 100
  • Display: Green line chart with percentage scale

Enhanced Tooltip Data:

  • Morale percentage
  • Total submissions count
  • Mood breakdown (anxious, uncertain, indifferent, relaxed, energised)
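
A brief sketch of the 70-100 filter, assuming db is the Sequel connection established under Connection Strategy:

```ruby
# Illustrative: keep rows in the positive morale band and group them by day.
positive_morale_records =
  db[:morale_data].where(morale: 70..100).order(:created_at).all

daily_records = positive_morale_records.group_by { |r| r[:created_at].to_date }
```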

2. Negative Morale (0-30 Range)

  • Purpose: Track negative morale indicators
  • Data Source: morale_data table
  • Calculation: Filter records with morale >= 0 and <= 30
  • Display: Red line chart with percentage scale

Enhanced Tooltip Data:

  • Negative morale percentage
  • Total submissions count
  • Detailed mood breakdown

3. Action Closure Rate

  • Purpose: Track completion of corrective actions
  • Data Source: checkin_submissions table
  • Calculation: Average score converted to percentage (1-5 scale → 0-100%)
  • Display: Blue line chart with percentage scale

Business Logic:

```ruby
avg_score = daily_submissions.sum { |s| s[:score] || 0 } / total_submissions.to_f
closure_rate = (avg_score / 5.0 * 100).round(2)
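```

For example, daily scores of 4, 5, and 3 average to 4.0, giving a closure rate of (4.0 / 5.0) × 100 = 80.0%.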

4. Incidents

  • Purpose: Track safety incidents and near-misses
  • Data Source: Local incidents table or CSV import
  • Types: Dynamic - accepts any incident type from CSV uploads
  • Display: Scatter plot with incident markers

How Incident Types Work:

  • Dynamic Type Support: The system accepts any incident type via CSV import
  • Type Extraction: Unique incident types are automatically extracted from uploaded data
  • Common Types: Near Miss, Minor, Lost Time Incident, Fatality, Safety Violation, Equipment Failure, Environmental
  • Flexible Categorization: Projects can define their own incident categories

Default Incident Categories (Examples):

  • Near Miss: Potential incidents that didn't result in injury
  • Minor: Small injuries requiring first aid
  • Lost Time Incident: Injuries requiring medical treatment and time off
  • Fatality: Death-related incidents
  • Safety Violation: Policy or regulation violations
  • Equipment Failure: Equipment or machinery malfunctions
  • Environmental: Environmental incidents (spills, emissions, etc.)

API Documentation

Dashboard Endpoints

Project Selection (No project_id provided):

GET /performance_metrics/
GET /performance_metrics/dashboard

Shows a project selection page listing all projects with complete analytics configuration. Projects must have:

  • DbCredential record
  • Associated AnalyticsDbServer for the current Rails environment
  • Valid database connection

Main Dashboard: GET /performance_metrics/dashboard?project_id=123&time_range=30&metrics=morale_score,negative_morale

Parameters:

  • project_id (required): Project identifier
  • time_range (optional): 30, 60, or 90 days (default: 30)
  • metrics (optional): Comma-separated metric list (default: all metrics enabled)
  • week_offsets (optional): JSON object with week offsets for historical comparison
  • master_key (optional): Authentication token, currently disabled for testing
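
For example, GET /performance_metrics/dashboard?project_id=123&time_range=60&week_offsets={"morale_score":4} requests the 60-day view with the morale series compared against four weeks earlier (the week_offsets value is sent URL-encoded; the offset shown here is illustrative).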

Settings Page: GET /performance_metrics/dashboard/settings?project_id=123&tab=preferences

Incident Management:

POST /performance_metrics/dashboard/upload_incidents
GET /performance_metrics/dashboard/download_sample_csv

Response Format

Success Response:

```json
{
  "status": "success",
  "data": {
    "timeRanges": {
      "30": {
        "labels": ["Jan 01", "Jan 02", ...],
        "moraleScore": [85, 87, 89, ...],
        "negativeMorale": [12, 10, 8, ...],
        "actionClosureRate": [75, 78, 82, ...],
        "incidents": [...],
        "moraleScoreDetails": [
          {
            "date": "Jan 01",
            "value": 85,
            "total_submissions": 99,
            "anxious_count": 16,
            "uncertain_count": 15,
            "indifferent_count": 15,
            "relaxed_count": 23,
            "energised_count": 30
          }
        ]
      }
    }
  }
}
```

Frontend Implementation

JavaScript Architecture

Main Class: PerformanceMetricsDashboard

class PerformanceMetricsDashboard {
  constructor(data) {
    this.chart = null;
    this.allData = data || window.performanceMetricsData;
    this.weekOffsets = this.allData.weekOffsets || {};
    this.currentTimeRange = this.allData.currentTimeRange || '30';
    this.initializeData();
    this.initializeChart();
  }
}

Key Methods:

  • initializeChart() - Chart.js setup and configuration
  • prepareChartData() - Data transformation for chart display
  • updateChart() - Real-time chart updates
  • switchTimeRange() - Time range switching
  • toggleMetric() - Metric visibility control

HTMX Integration

The dashboard supports HTMX for dynamic content updates without full page reloads:

Controller Support:

```ruby
# Handle HTMX requests
if request.headers['HX-Request']
  render partial: 'performance_metrics/performance_metrics_dashboard/content'
end
```

Benefits:

  • Partial Page Updates: Only dashboard content refreshes, not the entire page
  • Faster Navigation: Reduced bandwidth and rendering time
  • Progressive Enhancement: Falls back to a full page load if HTMX is unavailable
  • Seamless UX: Smooth transitions between views

Use Cases:

  • Project switching without page reload
  • Time range updates
  • Settings panel interactions
  • Dynamic form submissions

Chart Configuration

Chart.js Setup:

```javascript
this.chart = new Chart(ctx, {
  type: 'line',
  data: chartData,
  options: this.getChartOptions()
});
```

Chart Features:

  • Responsive design with aspect ratio maintenance
  • Dual Y-axes for different metric scales
  • Time-based X-axis with date formatting
  • Interactive tooltips with detailed information
  • Smooth animations with easing functions
  • Legend management with dynamic updates

Data Structure

Frontend Data Format:

```javascript
window.performanceMetricsData = {
  currentTimeRange: '30',
  weekOffsets: { 'morale_score': 0, 'negative_morale': 0 },
  incidentTypes: [...],
  timeRanges: {
    '30': {
      labels: ['Jan 01', 'Jan 02', ...],
      moraleScore: [85, 87, ...],
      negativeMorale: [12, 10, ...],
      actionClosureRate: [75, 78, ...],
      incidents: [...],
      moraleScoreDetails: [...],
      negativeMoraleDetails: [...]
    }
  }
};
```

Tooltip Enhancement

Enhanced Tooltip System

The system provides detailed tooltips for the morale metrics, displaying a comprehensive mood breakdown alongside the headline value.

Tooltip Content for Morale Metrics:

```
Negative Morale: 8.0%
Total submissions: 99

Mood breakdown:
Anxious: 16
Uncertain: 15
Indifferent: 15
Relaxed: 23
Energised: 30
```

Implementation:

```javascript
tooltip: {
  callbacks: {
    label: function(context) {
      const datasetLabel = context.dataset.label;
      const value = context.parsed.y;

      if (datasetLabel.includes('Morale')) {
        const dataIndex = context.dataIndex;
        const dashboard = window.performanceMetricsDashboard;
        const currentData = dashboard.getCurrentData();

        let detailedData = null;
        if (datasetLabel.includes('Negative')) {
          detailedData = currentData.negativeMoraleDetails[dataIndex];
        } else {
          detailedData = currentData.moraleScoreDetails[dataIndex];
        }

        if (detailedData && detailedData.total_submissions) {
          return [
            `${datasetLabel}: ${value}%`,
            `Total submissions: ${detailedData.total_submissions}`,
            '',
            'Mood breakdown:',
            `Anxious: ${detailedData.anxious_count}`,
            `Uncertain: ${detailedData.uncertain_count}`,
            `Indifferent: ${detailedData.indifferent_count}`,
            `Relaxed: ${detailedData.relaxed_count}`,
            `Energised: ${detailedData.energised_count}`
          ];
        }
      }

      return `${datasetLabel}: ${value}`;
    }
  }
}
```

Data Enhancement

Controller Data Structure:

```ruby
chart_data = positive_morale_records.map do |record|
  {
    date: record[:created_at].strftime('%b %d'),
    value: record[:morale],
    total_submissions: record[:total_submissions] || 0,
    anxious_count: record[:anxious_count] || 0,
    uncertain_count: record[:uncertain_count] || 0,
    indifferent_count: record[:indifferent_count] || 0,
    relaxed_count: record[:relaxed_count] || 0,
    energised_count: record[:energised_count] || 0,
    average_mood: record[:average_mood] || 0
  }
end
```

Time Range Management

Multi-Time Range Architecture

The system pre-loads data for all time ranges (30, 60, 90 days) to enable instant switching without additional server requests.

Data Loading Strategy:

```ruby
def load_all_time_ranges_data
  ['30', '60', '90'].each do |time_range|
    date_range = generate_date_range(time_range.to_i)
    connected_metrics = load_connected_metrics_for_time_range(date_range, time_range)
    non_connected_metrics = load_non_connected_metrics_for_time_range(date_range, time_range)

    @all_time_ranges_data[time_range] = {
      date_range: date_range,
      connected_metrics: connected_metrics,
      non_connected_metrics: non_connected_metrics
    }
  end
end
```

Frontend Switching:

```javascript
switchTimeRange(timeRange) {
  this.currentTimeRange = timeRange;
  this.updateTimeRangeButtons(timeRange);
  this.updateChart();
}
```

Week Offset Analysis

  • Purpose: Compare current performance with historical periods
  • Implementation: Data shifting by specified week intervals
  • Use Cases: Compare the current week with the same week last month; analyze seasonal trends; track improvement over time

Data Shifting Logic:

```javascript
applyWeekOffsetToData(data, weeks) {
  if (weeks === 0) return data;
  const offsetDays = weeks * 7;
  const shiftedData = [...data];
  for (let i = 0; i < offsetDays && i < shiftedData.length; i++) {
    shiftedData.unshift(null);
  }
  return shiftedData.slice(0, data.length);
}
```

Incident Management

Incident Data Sources

1. Real Incident Data (Production)

  • Stored in the local incidents table
  • Linked to projects via the project_incidents table
  • Supports multiple incident types and descriptions

2. Sample Incident Data (Development)

  • Default sample data for projects without real incidents
  • Predefined incident types and scenarios
  • Used for testing and demonstration

CSV Import/Export System

CSV Format Specification:

```csv
incident_date,incident_type,description,count,location,notes
2025-01-15,Near Miss,Slippery surface,1,Worksite A,Cleaned immediately
2025-01-16,Minor,Minor cut,1,Office,First aid applied
2025-01-17,Safety Violation,Hard hat not worn,1,Construction Site,Verbal warning issued
2025-01-18,Equipment Failure,Crane malfunction,1,Machine Shop,Equipment repaired
```

Field Specifications:

  • incident_date (required): Date in YYYY-MM-DD format
  • incident_type (required): Any string - the system accepts custom types
  • description (optional): Text description of the incident
  • count (optional): Number of incidents (defaults to 1)
  • location (optional): Where the incident occurred
  • notes (optional): Additional notes or follow-up actions

Import Process:

  1. Parse CSV: Read and validate the CSV file structure
  2. Row Validation:
    • Check required fields (incident_date, incident_type)
    • Parse and validate date format
    • Ensure count is numeric and > 0
    • Collect validation errors by row number
  3. Cleanup Old Data:
    • Delete the project's existing ProjectIncident associations
    • Delete orphaned Incident records (not linked to other projects)
  4. Bulk Insert:
    • Create all new Incident records via Incident.insert_all!
    • Create ProjectIncident associations via ProjectIncident.insert_all!
  5. Response: Return the count of successfully imported incidents

Import Error Handling:

  • Validation errors collected per row (e.g., "Row 5: Invalid date format")
  • Partial imports not allowed - all rows must be valid
  • Detailed error messages for troubleshooting
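
A condensed sketch of the parse, validate, and bulk-insert flow described above; the helper structure and default handling are assumptions, not the exact production code:

```ruby
require 'csv'

# Illustrative import sketch: collect per-row errors, refuse partial imports,
# then bulk-insert valid rows (ProjectIncident associations would follow the
# same insert_all! pattern).
def import_incidents(file)
  rows, errors = [], []

  CSV.parse(file.read, headers: true).each.with_index(2) do |row, line|
    if row['incident_date'].blank? || row['incident_type'].blank?
      errors << "Row #{line}: missing required field"
      next
    end

    begin
      date = Date.parse(row['incident_date'])
    rescue ArgumentError
      errors << "Row #{line}: invalid date format"
      next
    end

    rows << {
      incident_date: date,
      incident_type: row['incident_type'],
      description:   row['description'],
      count:         [row['count'].to_i, 1].max,   # default to 1 when missing
      location:      row['location'],
      notes:         row['notes'],
      created_at:    Time.current,
      updated_at:    Time.current
    }
  end

  return { errors: errors } if errors.any?  # partial imports are not allowed
  Incident.insert_all!(rows)
  { imported: rows.size }
end
```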

Export Process:

  1. Sample Generation: Creates a template CSV with 8 example incidents
  2. Example Data: Includes various incident types (Near Miss, Minor, Lost Time, Equipment Failure, Safety Violation, Environmental)
  3. Download Endpoint: GET /performance_metrics/dashboard/download_sample_csv
  4. File Format: UTF-8 encoded CSV with proper headers

Dynamic Incident Types

Type Extraction:

```ruby
def extract_unique_incident_types(incidents_data)
  incident_types = {}

  incidents_data.each do |date, incident|
    type = incident[:type]
    note = incident[:note]

    if note.present?
      key = "#{type} - #{note}"
      display_name = note
    else
      key = type
      display_name = type
    end

    incident_types[key] = {
      type: type,
      note: note,
      display_name: display_name,
      key: key.parameterize.underscore
    }
  end

  incident_types.values
end
```

Performance & Scalability

Database Optimization

Connection Management:

  • Single connection per request
  • Connection pooling for multiple requests
  • Automatic cleanup after data loading
  • Error handling with graceful fallbacks

Query Optimization:

```ruby
# Bulk data loading for all time ranges
def load_all_time_ranges_data
  load_database_credentials
  create_analytics_connection

  ['30', '60', '90'].each do |time_range|
    # Process all time ranges with a single connection
  end

  cleanup_analytics_connection
end
```

Single Connection Strategy:

  • One Sequel connection created per request
  • Connection reused for all time ranges (30, 60, 90 days)
  • Automatic cleanup after data loading completes
  • Prevents connection pool exhaustion
  • Reduces connection overhead by ~66%
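
With three time ranges served by a single connection, two of the three connection setups that a naive per-range approach would need are avoided, which is where the roughly 66% overhead reduction comes from.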

Data Processing:

  • Pre-computed aggregations
  • Efficient date range filtering
  • Minimal memory allocation
  • Bulk database operations

Frontend Performance

Chart Optimization:

  • Chart.js with optimized rendering
  • Efficient data updates without full re-render
  • Smooth animations with hardware acceleration
  • Responsive design with proper aspect ratios

Memory Management:

  • Single chart instance with updates
  • Efficient data structure reuse
  • Proper cleanup on page navigation
  • Minimal DOM manipulation

Scalability Metrics

Current Capabilities:

  • Projects: Supports unlimited projects with analytics databases
  • Time Ranges: Efficient handling of 30-90 day ranges
  • Metrics: Extensible metric system
  • Concurrent Users: Stateless design supports multiple users

Performance Characteristics:

  • Page Load: < 2 seconds for full dashboard
  • Chart Updates: < 100ms for metric toggles
  • Time Range Switch: < 50ms (pre-loaded data)
  • Database Queries: Optimized bulk operations

Security & Authentication

Authentication System

Current Status: DISABLED FOR TESTING

The token-based authentication is currently disabled to facilitate development and testing:

```ruby
# before_action :authenticate_with_master_key  # Temporarily disabled for testing
```

When Enabled (Production Configuration):

  • Required Parameter: master_key must match ENV['IFRAME_MASTER_KEY']
  • Environment Variable: Set IFRAME_MASTER_KEY in the production environment
  • Validation: Secure token comparison before allowing access
  • Helper Method: performance_metrics_dashboard_path_with_key auto-includes the master key
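
When the filter is re-enabled, a constant-time comparison along these lines would satisfy the validation point above; this is a hedged sketch, not the existing implementation:

```ruby
# Sketch of the master-key check; the exact failure response is an assumption.
def authenticate_with_master_key
  expected = ENV['IFRAME_MASTER_KEY'].to_s
  provided = params[:master_key].to_s

  unless expected.present? && ActiveSupport::SecurityUtils.secure_compare(provided, expected)
    head :unauthorized
  end
end
```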

Security Recommendations:

  • Enable authentication before production deployment
  • Use a strong, randomly generated master key
  • Rotate keys periodically
  • Monitor access logs for unauthorized attempts

Data Security

Database Access:

  • Encrypted database connections
  • Credential-based authentication
  • Project-specific data isolation
  • Secure parameter handling

Input Validation:

```ruby
def performance_metrics_params
  params.permit(:time_range, :master_key, :metrics, metric_series: {})
end
```

CSV Import Security:

  • File type validation
  • Size limit enforcement
  • Content sanitization
  • Error handling for malicious files
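
A minimal sketch of those upload checks; the 1 MB limit is an example value, not a documented setting:

```ruby
# Illustrative upload validation; adjust the limit to the real policy.
MAX_CSV_BYTES = 1.megabyte

def validate_csv_upload!(file)
  raise ArgumentError, 'No file provided'        if file.blank?
  raise ArgumentError, 'File must be a CSV'      unless file.content_type == 'text/csv'
  raise ArgumentError, 'File exceeds size limit' if file.size > MAX_CSV_BYTES
end
```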

Deployment & Configuration

Environment Setup

Required Environment Variables:

```bash
IFRAME_MASTER_KEY=your-secure-master-key
DATABASE_URL=your-database-connection-string
```

Database Configuration:

  • Analytics database server setup
  • Project credential configuration
  • Connection pool settings
  • Backup and recovery procedures

Project Configuration

Analytics Database Setup:

  1. Create Database Credentials:

```ruby
DbCredential.create!(
  project_id: project_id,
  db_name: 'analytics_db',
  # ... other credentials
)
```

  2. Configure Database Server:

```ruby
AnalyticsDbServer.create!(
  host: 'analytics-server.com',
  username: 'analytics_user',
  password: 'secure_password',
  environment: Rails.env
)
```

  3. Link Project to Server:

```ruby
ProjectAnalyticsDbServer.create!(
  project_id: project_id,
  analytics_db_server_id: server_id
)
```

Deployment Checklist

Pre-Deployment:

  - [ ] Environment variables configured (IFRAME_MASTER_KEY, DATABASE_URL)
  - [ ] Database credentials set up (DbCredential + AnalyticsDbServer)
  - [ ] Analytics database accessible from application servers
  - [ ] Master key configured and authentication enabled in controller
  - [ ] SSL certificates installed for secure connections
  - [ ] Project-to-server associations created (ProjectAnalyticsDbServer)
  - [ ] Incident tables migrated (incidents, project_incidents)

Post-Deployment Verification:

  - [ ] Project selection page displays available projects
  - [ ] Dashboard loads correctly with real data
  - [ ] Charts render properly with all time ranges (30/60/90 days)
  - [ ] Data displays accurately (morale, action closure, incidents)
  - [ ] Tooltips show detailed mood breakdown information
  - [ ] Incident CSV import/export works correctly
  - [ ] Authentication functions properly (if enabled)
  - [ ] HTMX partial updates work smoothly
  - [ ] Time range switching is instantaneous
  - [ ] Week offset comparison works correctly

Troubleshooting

Common Issues

1. Dashboard Not Loading

Problem: Dashboard shows loading spinner indefinitely

Solutions:

  • Check database connectivity
  • Verify the project has analytics credentials
  • Check the browser console for JavaScript errors
  • Validate master key configuration

2. Charts Not Displaying

Problem: Charts appear blank or show a "No data" message

Solutions:

  • Verify the analytics database has data
  • Check date range parameters
  • Validate metric selection
  • Review database query logs

3. Tooltips Not Showing Details

Problem: Tooltips show basic information without the mood breakdown

Solutions:

  • Check if morale data includes the detailed fields
  • Verify the data structure in the browser console
  • Ensure JavaScript errors aren't blocking execution
  • Validate the database schema has the required columns

4. Performance Issues

Problem: Slow page loads or chart updates

Solutions:

  • Check the database connection pool
  • Review query execution times
  • Monitor memory usage
  • Optimize database indexes

5. Authentication Errors

Problem: "Access denied" or authentication failures Solutions: - Verify master key configuration - Check environment variables - Validate token format - Review authentication middleware

Debugging Commands

Rails Console Debugging:

```ruby
# Check project analytics setup
project = Project.find(123)
db_credential = DbCredential.find_by(project_id: project.id)
puts "Has credentials: #{db_credential.present?}"
puts "Has server: #{db_credential&.db_server.present?}"

# Test database connection
db_credential.with_analytics_connection do |conn|
  puts "Connection successful"
  puts "Tables: #{conn.tables}"
  puts "Morale data count: #{conn[:morale_data].count}"
end

# Check incident data
project = Project.find(123)
puts "Has incidents: #{project.has_incidents?}"
puts "Incident count: #{project.incidents.count}"
```

Browser Console Debugging:

```javascript
// Check data structure
console.log('Performance Metrics Data:', window.performanceMetricsData);

// Check chart instance
console.log('Chart:', window.performanceMetricsDashboard?.chart);

// Test tooltip data
const currentData = window.performanceMetricsDashboard?.getCurrentData();
console.log('Current data:', currentData);
console.log('Morale details:', currentData?.moraleScoreDetails);
```

Performance Monitoring

Key Metrics to Monitor:

  • Page load times
  • Chart rendering performance
  • Database query execution times
  • Memory usage patterns
  • Error rates and types

Monitoring Tools:

  • Browser developer tools
  • Rails logs
  • Database query logs
  • Application performance monitoring (APM)
  • Error tracking services


Last Updated: October 8, 2025
Version: 1.1 - Documentation Update

Changes in v1.1:

  • Added local incident database schema documentation
  • Updated incident type handling to reflect dynamic support
  • Clarified authentication status (currently disabled)
  • Added project selection page documentation
  • Enhanced CSV import/export specification
  • Added HTMX integration documentation
  • Improved deployment checklist with security verification
  • Updated connection optimization details

Maintainer: Tonic Labs Ltd Development Team
Documentation: Available at https://peptalk.toniclabs.ltd/docs?file=PERFORMANCE_METRICS_SYSTEM.md