A comprehensive web-based platform that connects blind and low-vision users with sighted volunteers and AI assistance for real-time visual support. Built with Next.js 15, this accessible application enables video calls between users and volunteers, plus AI-powered image descriptions with advanced sound notifications.
- Two User Roles: VI Users (visually impaired) and Volunteers with role-based dashboards
- Real-time Video Calls: WebRTC-powered two-way video and audio communication
- AI Assistant: Google Gemini-powered image analysis with text-to-speech capabilities
- Smart Sound System: Audio notifications for call events (incoming, outgoing, connected, ended)
- Volunteer Matching: Real-time broadcast system for connecting available volunteers
- Session Management: Persistent call sessions with reconnection support
- User Statistics: Call tracking, duration monitoring, and volunteer activity metrics
- Multi-language Support: Interface and audio descriptions in 10+ languages
- Full Accessibility: WCAG 2.1 AA compliance with screen reader support
- Secure Authentication: NextAuth.js with role-based access control
The main landing page where users choose their path - either seeking visual assistance or volunteering to help others. This accessible interface clearly presents both options with descriptive text.
Role-based signup pages tailored for each user type, ensuring the registration process is optimized for the specific needs of VI users and volunteers.
Accessible signin pages designed for each user role with clear navigation and form validation.
The dashboard for visually impaired users provides quick access to essential features including starting video calls with volunteers and using the AI assistant for image analysis.
Volunteers can toggle their availability, view incoming call requests, and track their volunteer statistics through this dedicated interface.
A comprehensive profile management interface where users can update their preferences, change passwords, and manage account settings with full accessibility support.
The real-time video call interface connecting VI users with volunteers, featuring accessible controls and clear visual indicators for call status.
- Node.js 18+: Latest LTS version recommended
- MongoDB: Local instance or MongoDB Atlas cloud database
- Google Gemini API Key: For AI image analysis functionality
- Modern Browser: Chrome, Firefox, Safari, or Edge with WebRTC support
- Clone the repository

  ```bash
  git clone <repository-url>
  cd aechan-huend-gaash
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Environment Configuration: create `.env.local` with your configuration:

  ```env
  # Database
  MONGODB_URI=mongodb://localhost:27017/aechan-huend-gaash

  # Authentication
  NEXTAUTH_URL=http://localhost:3000
  NEXTAUTH_SECRET=your-secure-secret-key-minimum-32-characters

  # AI Services
  GOOGLE_API_KEY=your-google-gemini-api-key
  ```

- Start the development server

  ```bash
  npm run dev
  ```

- Access the application: open http://localhost:3000 in your browser
- `npm run dev` - Start development server with hot reload
- `npm run build` - Build production application
- `npm start` - Start production server
- `npm run lint` - Run ESLint for code quality checks
- Account Setup: Click "I need visual assistance" β Create account and select language
- Dashboard Features: Call volunteers, use AI assistant, manage profile, view history
- During Calls: Enable camera to share view, use audio to describe needs
- Account Setup: Click "I would like to volunteer" β Create account and set availability
- Dashboard Features: Toggle availability, accept calls, view statistics
- During Calls: Provide clear descriptions, ask clarifying questions, be supportive
- Upload images or use camera capture
- Receive detailed text descriptions with audio playback
- Re-analyze images with different perspectives
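As a rough illustration of that flow, the sketch below sends an image to the `POST /api/ai/analyze` endpoint and reads the result aloud with the browser's SpeechSynthesis API. The form field name (`image`) and the response shape (`{ description }`) are assumptions for illustration, not confirmed details of the codebase.

```js
// Minimal client-side sketch: upload an image for analysis and speak the result.
// Assumptions: the endpoint accepts multipart form data under the field "image"
// and responds with JSON shaped like { description: "..." }.
async function describeImage(file) {
  const formData = new FormData();
  formData.append("image", file);

  const res = await fetch("/api/ai/analyze", { method: "POST", body: formData });
  if (!res.ok) throw new Error(`Analysis failed: ${res.status}`);

  const { description } = await res.json();

  // Read the description aloud so the result is available without a screen reader.
  const utterance = new SpeechSynthesisUtterance(description);
  window.speechSynthesis.speak(utterance);

  return description;
}
```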
- Frontend: Next.js 15, React 19, Tailwind CSS, Radix UI
- Backend: Next.js API Routes, Socket.IO, NextAuth.js v5
- Database: MongoDB with Mongoose ODM
- Real-time: WebRTC for video calls, Socket.IO for signaling
- AI: Google Gemini API for image analysis
- Audio: Web Audio API for sound notifications
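As a rough sketch of how the notification files under `public/sounds/` can be played (the actual `src/lib/sounds.js` module may be implemented differently, for example with the Web Audio API), the snippet below maps call events to sound files and exposes a single helper:

```js
// Hypothetical audio helper in the spirit of src/lib/sounds.js (illustrative only).
const SOUNDS = {
  incoming: "/sounds/incoming-call.mp3",
  outgoing: "/sounds/outgoing-call.mp3",
  connected: "/sounds/call-connected.mp3",
  ended: "/sounds/call-ended.mp3",
};

export function playSound(name) {
  const src = SOUNDS[name];
  if (!src) return;
  const audio = new Audio(src);
  // Browsers may block autoplay until the user has interacted with the page.
  audio.play().catch((err) => console.warn("Sound playback blocked:", err));
}
```

For example, `playSound("incoming")` would alert a volunteer to a new call request, and `playSound("ended")` would confirm that a call has finished.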
```
aechan-huend-gaash/
├── public/
│   ├── sounds/                  # Audio notification files
│   │   ├── incoming-call.mp3    # Volunteer incoming call alert
│   │   ├── outgoing-call.mp3    # VI user connection sound
│   │   ├── call-connected.mp3   # Success notification
│   │   └── call-ended.mp3       # Call termination sound
│   └── [static files]           # Icons, images, favicon
├── src/
│   ├── app/                     # Next.js App Router pages
│   │   ├── api/                 # API routes
│   │   │   ├── ai/              # AI image analysis endpoints
│   │   │   ├── auth/            # Authentication routes
│   │   │   ├── user/            # User management APIs
│   │   │   └── volunteer/       # Volunteer-specific APIs
│   │   ├── auth/                # Authentication pages
│   │   ├── call/                # Video call interface
│   │   ├── dashboard/           # User dashboards
│   │   │   ├── vi-user/         # VI User dashboard
│   │   │   └── volunteer/       # Volunteer dashboard
│   │   ├── ai-assistant/        # AI image analysis interface
│   │   ├── profile/             # User profile management
│   │   └── [layout & global files]
│   ├── components/              # Reusable UI components
│   │   ├── providers/           # Context providers
│   │   └── ui/                  # Base UI components
│   ├── contexts/                # React contexts (if any)
│   ├── hooks/                   # Custom React hooks
│   │   └── useSocket.js         # Socket.IO connection hook
│   ├── lib/                     # Utility libraries
│   │   ├── auth.js              # NextAuth configuration
│   │   ├── db.js                # Database connection
│   │   ├── env.js               # Environment validation
│   │   ├── utils.js             # General utilities
│   │   └── sounds.js            # Audio management system
│   └── models/                  # Database schemas
│       ├── User.js              # User data model
│       ├── Call.js              # Call session model
│       └── Session.js           # Session management
├── server.js                    # Custom server with Socket.IO
├── next.config.mjs              # Next.js configuration
├── tailwind.config.js           # Tailwind CSS configuration
├── eslint.config.mjs            # ESLint configuration
└── [config files]               # Package.json, etc.
```
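For orientation, the sketch below shows the common pattern for a custom Next.js server with Socket.IO attached, which is roughly the role `server.js` plays here. The connection handler and options are placeholders, not the project's actual code.

```js
// Minimal custom Next.js server with Socket.IO (illustrative sketch, not server.js itself).
const { createServer } = require("http");
const next = require("next");
const { Server } = require("socket.io");

const dev = process.env.NODE_ENV !== "production";
const app = next({ dev });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  // Let Next.js handle all HTTP requests, then attach Socket.IO to the same server.
  const httpServer = createServer((req, res) => handle(req, res));
  const io = new Server(httpServer);

  io.on("connection", (socket) => {
    // The real server registers join/start_call/accept_call/end_call handlers here.
    socket.on("join", (profile) => {
      socket.data.profile = profile;
    });
  });

  httpServer.listen(3000, () => {
    console.log("Ready on http://localhost:3000");
  });
});
```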
- `POST /api/auth/register` - User registration
- `POST /api/auth/signin` - User authentication
- `GET /api/auth/session` - Session validation
- `GET /api/user/profile` - Get user profile
- `PUT /api/user/profile` - Update user profile
- `POST /api/user/change-password` - Change password
- `GET /api/user/stats` - Get user statistics
- `PUT /api/volunteer/availability` - Toggle availability status
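A hedged example of calling this endpoint from the volunteer dashboard; the `{ available }` request body shape is an assumption for illustration, not confirmed by the source.

```js
// Toggle the signed-in volunteer's availability (request body shape assumed).
async function setAvailability(available) {
  const res = await fetch("/api/volunteer/availability", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ available }),
  });
  if (!res.ok) throw new Error(`Failed to update availability: ${res.status}`);
  return res.json();
}
```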
- `POST /api/ai/analyze` - Analyze uploaded images
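On the server side, this route presumably forwards the image to Gemini using `GOOGLE_API_KEY`. The sketch below shows one way to do that with the `@google/generative-ai` SDK in an App Router route handler; the model name, field names, and response shape are assumptions rather than the project's actual implementation.

```js
// app/api/ai/analyze/route.js (illustrative sketch, not the actual handler)
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY);

export async function POST(request) {
  const formData = await request.formData();
  const file = formData.get("image"); // field name assumed
  if (!file) {
    return Response.json({ error: "No image provided" }, { status: 400 });
  }

  const bytes = Buffer.from(await file.arrayBuffer());
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" }); // model name assumed

  // Ask Gemini for a description suited to blind and low-vision users.
  const result = await model.generateContent([
    "Describe this image in detail for a blind or low-vision user.",
    { inlineData: { data: bytes.toString("base64"), mimeType: file.type } },
  ]);

  return Response.json({ description: result.response.text() });
}
```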
- `join` - User joins with role and profile data
- `start_call` - VI user requests assistance
- `accept_call` - Volunteer accepts call request
- `end_call` - Either party ends the call
- `joinRoom` - Join specific call room
- `offer`, `answer`, `ice-candidate` - WebRTC signaling
- `incoming_call` - Notify volunteers of call requests
- `call_connected` - Notify both parties of successful connection
- `call_ended` - Notify call termination
- `call_taken` - Notify call was accepted by another volunteer
- `user_reconnected` - Notify of reconnection events
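Putting the two lists together, the sketch below shows roughly how a caller could drive WebRTC signaling over these events with `socket.io-client`. It assumes the server relays `offer`, `answer`, and `ice-candidate` to the other peer under the same names, and the payload field names (`roomId`, `sdp`, `candidate`) are illustrative, not confirmed.

```js
// Illustrative caller-side signaling over the events listed above.
import { io } from "socket.io-client";

const socket = io();
const peer = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

async function startCall(roomId, localStream) {
  socket.emit("joinRoom", { roomId });
  localStream.getTracks().forEach((track) => peer.addTrack(track, localStream));

  // Relay ICE candidates to the other party as they are discovered.
  peer.onicecandidate = (event) => {
    if (event.candidate) {
      socket.emit("ice-candidate", { roomId, candidate: event.candidate });
    }
  };

  // Create and send the SDP offer, then apply the volunteer's answer when it arrives.
  const offer = await peer.createOffer();
  await peer.setLocalDescription(offer);
  socket.emit("offer", { roomId, sdp: offer });

  socket.on("answer", async ({ sdp }) => {
    await peer.setRemoteDescription(new RTCSessionDescription(sdp));
  });
  socket.on("ice-candidate", async ({ candidate }) => {
    await peer.addIceCandidate(new RTCIceCandidate(candidate));
  });
}
```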
For detailed development setup, see the Getting Started section above.
- JavaScript: ES6+ with proper error handling
- CSS: Tailwind utilities with semantic class names
- Accessibility: WCAG 2.1 AA compliance required
- Performance: Optimize images, lazy load components
- Security: Validate all inputs, sanitize data
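As one hedged example of the "validate all inputs" guideline, an API route can reject malformed payloads before touching the database. The field names and rules below are illustrative, not taken from the project's handlers.

```js
// Illustrative request validation in an App Router API route (field names assumed).
export async function POST(request) {
  let body;
  try {
    body = await request.json();
  } catch {
    return Response.json({ error: "Invalid JSON body" }, { status: 400 });
  }

  const { email, password } = body ?? {};
  if (typeof email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    return Response.json({ error: "A valid email is required" }, { status: 400 });
  }
  if (typeof password !== "string" || password.length < 8) {
    return Response.json({ error: "Password must be at least 8 characters" }, { status: 400 });
  }

  // ...only now hash the password and write to the database.
  return Response.json({ ok: true });
}
```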
- Issues: Report bugs via GitHub Issues
- Documentation: Check inline code comments
- Support: Contact maintainers for critical issues
Built with ❤️ for accessibility and inclusion by Naik Mubashir