Build for LAWL · 2026 Engineering Contest
Build for LAWL

Ship real code. Win an internship. Change legal access.

Four original engineering problems. Five days to build. The best submission in each domain gets an internship offer at LAWL — after a technical interview round.

Open to all ◆ 1 winner per domain → internship ◆ Interview round before offer ◆ Certificate for domain winner
Contest runs 27 April – 1 May 2026 · Results: 3 May 2026
Who We Are

LAWL is not
a law firm.

We are a legal technology company building infrastructure that makes the Indian legal system accessible to people who have never had access to it. We hire engineers, designers, and builders — not through referrals or HR funnels, but through work.

01
India has 1.4 billion people and a broken legal access problem. Fewer than 1.8 million advocates serve the entire country. Courts are distant, fees are opaque, and trust is zero. We are building the infrastructure layer — verified identity, real-time communication, and transparent billing — that makes getting legal help as simple as ordering a cab. That is the technical problem. It is genuinely hard. And it is genuinely worth solving.
02
We hire differently. No campus drives. No aptitude tests. No three-round HR loops. Build for LAWL is how we find engineers who think in systems, write code that holds up under real conditions, and make decisions they can defend in a room. Your submission is your application. If it's the best in your domain, you get a technical interview — and if you clear it, you join the team.
03
The problems in this contest are real. A webhook dispatcher, a live chat interface with offline-first storage, a data-driven profile page built without a UI kit — these are systems we will actually build. The best submission in each domain becomes a proof of capability, not a portfolio piece. What you build here may ship.
1.4B
People in our market
80%
Without legal access
0
HR rounds
5
Days to build
4
Internships available
The Challenge — Build for LAWL

Four domains.
Four hard problems.

Four original problems designed to be unsolvable by a coding agent alone. Each requires independent thinking, real tradeoffs, and decisions only a human engineer can justify. Your repository must be public. We read every line.

UX
UI / UX Designer
Design a Habit — Not a Feature
Design an original mobile onboarding flow for a high-anxiety, zero-brand-recognition product. Your brief, your concept, your screens — built from scratch.
BE
Backend Engineer
Fault-Tolerant Webhook Dispatcher
A production-grade event fan-out system with ordering guarantees, HMAC signing, retry logic, deduplication, and crash recovery — under concurrent load.
MN
MERN Engineer
Professional Profile — Live & Data-Driven
A public profile page pulling real external APIs, with skeleton loading, partial failure states, and a rate-limited booking form — no CSS framework, no UI kit.
FL
Flutter Engineer
Chat Interface with Real Constraints
Multi-turn chat in Flutter: offline-first SQLite storage, optimistic sends with rollback, custom bubble rendering, read receipts, FCM in all three app states — no chat SDK.

You are designing the first 5 minutes of a mobile app for a service that people desperately need but deeply distrust — think: a first visit to a doctor in a new city, filing a complaint with a government office, or consulting a financial advisor for the first time. You choose the exact context. It must be set in India. It must involve genuine anxiety, trust, and an unfamiliar process.

Your job: design the onboarding and first-action flow — from cold app open to the moment the user completes their first meaningful action. The flow must make someone want to come back tomorrow without a push notification. This is an original concept only. We will notice immediately if the information architecture, visual language, or interaction patterns are borrowed from any existing product.

What you must deliver
  • D1
    A written product brief — 200 words max. What is the product, who is the user, what is the anxiety, and what does success look like after 5 minutes? We evaluate this before we open Figma. A weak brief disqualifies strong screens.
  • D2
    Onboarding flow — Minimum 4 screens from cold open to account creation. No generic sign-up patterns as a crutch — design the actual experience for a first-generation smartphone user in India.
  • D3
    First meaningful action screen — The one screen where the user does the core thing the app exists for. This is your most important screen. It must carry 40% of your total effort.
  • D4
    One empty state and one error state — Fully designed, not wireframe placeholders. The error state must offer a real recovery path. The empty state must not feel like a dead end.
  • D5
    Decision log — One paragraph per screen. Why this layout. Why this copy. What you tried first and rejected. The log is weighted as heavily as the screens in evaluation.
What makes this hard
Problem framing
You define the brief
We are testing whether you can frame a design problem correctly — not just execute one someone else wrote. A weak brief produces weak screens regardless of execution quality.
Trust
No brand equity to borrow
The product is unknown. You cannot use "4.8 stars" or "10,000+ users" — design trust from first principles, using only what the interface itself can say.
Retention
Habit without a notification
The flow must plant the seed of a return visit without triggering a push notification. Where in the UI do you create that pull? What does the user leave with?
Originality
Zero reference to existing products
We will search Dribbble, Behance, and the App Store for anything that looks like your submission. A 60% visual match to an existing product is a disqualification without appeal.
Copy
Every word is a design decision
The UI copy in your screens must be final, not placeholder text. "Welcome back, [Name]!" is a placeholder. Write the actual string the user sees on their very first app open.
Device constraint
360px wide, Android-first
Design at 360×800. No iPhone notch aesthetics, no safe areas. If your layout breaks or looks different at 360px, it fails the constraint test outright.
Evaluation
30%
Brief quality & problem framing
25%
Originality & visual thinking
20%
First-action screen depth
15%
Decision log quality
10%
Error & empty state craft
Submissions using a UI kit, borrowing layout from any known product, or containing placeholder copy will not advance. The decision log is not optional — a submission without one is treated as incomplete regardless of how polished the screens appear.

You are building a webhook dispatcher service. Clients register endpoint URLs and event types they want to receive. When your service ingests an event, it fans it out to all matching registered endpoints — reliably, ordered where required, and with correct retry behaviour when endpoints fail or time out.

The difficulty is in the guarantees: at-least-once delivery, strictly ordered delivery per-subscriber for sequenced events, back-pressure handling when a downstream is slow, deduplication when a retry succeeds after the original also got through, and a delivery log that remains consistent even when your dispatcher crashes mid-fan-out. There is no off-the-shelf solution you can wrap — you build the engine.

What you must build
  • R1
    Subscriber registration API — POST /subscribe takes a target URL, a list of event types, and an optional HMAC signing secret. Registration must be idempotent — posting the same URL and event type twice has exactly zero side effects.
  • R2
    Event ingest API — POST /event accepts { type, payload, sequence_id? } and returns 202 immediately. Fan-out is asynchronous. The ingest endpoint must never block on delivery latency.
  • R3
    Delivery engine with exponential backoff — On failure (non-2xx or timeout), retry: 10s → 30s → 2min → 10min → 1hr. After 5 consecutive failures, mark as dead. Log each attempt with timestamp, HTTP status, response body (truncated to 500 chars), and latency in ms.
  • R4
    Ordered delivery for sequenced events — Events with a matching sequence_id must be delivered to each subscriber strictly in order. A delivery failure on event N must block event N+1 for that subscriber until N succeeds or goes dead. Out-of-order delivery is a hard failure.
  • R5
    HMAC payload signing — If a subscriber registered with a secret, sign every outbound payload: header X-Webhook-Signature: sha256=<hmac>. Include a standalone verification helper function in your README with a usage example.
  • R6
    Delivery log & replay API — GET /deliveries?subscriber=&status=&page= returns paginated history. POST /replay/:delivery_id re-triggers a dead delivery as a new attempt — not a mutation of the original record.
What makes this hard
Ordering
Sequence guarantee under failure
Subscriber A receives event seq=1 successfully. seq=2 fails. seq=3 arrives. Do you hold seq=3, queue it, or drop it? Your answer must be consistent, written in the README, and coded exactly to that spec.
Deduplication
Double delivery on retry
Original attempt times out at 29s. Retry fires at 30s. The original actually succeeded but its response arrived late. The endpoint gets the event twice. How do you detect and suppress the duplicate?
Back-pressure
Slow subscriber
A subscriber responds in 8s consistently. 200 events arrive in 10s. Do you queue unboundedly, shed load, or circuit-break? Define your policy and implement it — do not leave this as a README note.
Crash recovery
Mid fan-out restart
Dispatcher crashes after delivering to 3 of 8 subscribers for one event. On restart it must deliver to the remaining 5 exactly once — not re-deliver to all 8 and not skip the 5. Demonstrate this works.
Concurrency
Worker race condition
Two workers simultaneously pick up the same delivery job from your queue. Both attempt the same delivery. Prove this is impossible in your system — not just unlikely. Show the mechanism.
Consistency
Log vs. actual delivery
Delivery HTTP call succeeds. Log write to DB fails. Your dashboard shows "pending" but the subscriber already received it. Define your source of truth and the exact recovery path from this state.
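For orientation, one reading of the R3 retry ladder as code — a sketch, not prescribed structure. The brief leaves "after 5 consecutive failures" slightly open; this sketch treats the five delays as the full retry budget and marks the delivery dead once they are exhausted:

```javascript
// Illustrative encoding of the R3 schedule: 10s → 30s → 2min → 10min → 1hr.
const BACKOFF_MS = [10_000, 30_000, 120_000, 600_000, 3_600_000];

// consecutiveFailures = how many attempts in a row have failed so far.
// Returns the delay before the next attempt, or null once the schedule is
// exhausted and the delivery should be marked dead.
function nextRetryDelay(consecutiveFailures) {
  if (consecutiveFailures > BACKOFF_MS.length) return null;
  return BACKOFF_MS[consecutiveFailures - 1];
}
```

Whichever interpretation you choose, write it in your README and code exactly to it — the same rule the ordering question above applies to you.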
Evaluation
30%
Correctness of guarantees
25%
Failure & recovery handling
20%
Code architecture & clarity
15%
README — decisions & tradeoffs
10%
API design & test coverage
We will run a test harness: 200 events across 10 subscribers, 3 of which fail intermittently at random. We check delivery counts, order integrity, and your log for consistency. Any event delivered twice or out of order is an automatic hard failure. Any event left undelivered while its retry schedule still had attempts remaining is also a hard failure.

You are building a public-facing professional profile page for a person in a trust-critical profession — a doctor, chartered accountant, financial advisor, or advocate. You choose the profession. The page must feel authoritative, load progressively, and pull real live data from at least two external APIs.

You build a lightweight Express layer that orchestrates external API calls, normalises the data, and serves a single unified JSON response to your React frontend. The frontend is built from scratch — no component library, no CSS framework. Every visual element is written by hand. This is how we know you understand CSS, not just how to configure Tailwind.

What you must build
  • R1
    Profile page — React, hand-coded CSS only — Displays: name, designation, credentials, specialisations, location, contact method, and a work history timeline. Fully responsive at 360px, 768px, and 1280px. No framework. No component library. No utility class system. Every selector you use, you wrote.
  • R2
    Live data from at least two real external APIs — At least one must require an API key and return data that varies over time (not static). Good candidates: government open data APIs, news APIs filtered by specialty, a geolocation or mapping API, a calendar availability API. Justify each API choice in your README — why does it add genuine value to the profile?
  • R3
    Node.js aggregation layer — Express calls all external APIs in parallel, handles each failure independently, applies per-source caching with a justified TTL, and returns one unified JSON to the frontend. The frontend makes exactly one HTTP request per page load. Sequential API calls on the backend are a bug.
  • R4
    Progressive loading and partial failure states — The profile renders as data arrives — not behind a full-page spinner. If one API source fails, that section degrades gracefully with a meaningful, layout-preserving fallback. The failure of any single source must be invisible to the layout integrity of the rest of the page.
  • R5
    Consult request form — Collects: name, phone, issue summary (max 300 chars), preferred time slot. Client-side and server-side validation. On submit: stores to MongoDB and returns a unique booking reference. Rate-limited to 3 submissions per IP per hour — returns a structured JSON error with retry-after header if the limit is exceeded.
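The parallel fan-out with independent per-source failure (R3–R4) can be sketched with Promise.allSettled — one reading of "Promise.all or equivalent" that also keeps a single source's failure from sinking the unified response. The source names in the usage are hypothetical:

```javascript
// Illustrative sketch: fire every external source at once, let each fail
// independently, and return one unified object for the frontend's single request.
async function aggregateProfile(sources) {
  const names = Object.keys(sources);
  const settled = await Promise.allSettled(names.map((name) => sources[name]()));
  const unified = {};
  names.forEach((name, i) => {
    unified[name] = settled[i].status === "fulfilled"
      ? { ok: true, data: settled[i].value }
      : { ok: false, error: String(settled[i].reason) }; // section degrades, page survives
  });
  return unified;
}
```

Plain Promise.all would reject the whole aggregate on the first failed source — exactly the behaviour R4 forbids — which is why the settled variant (or per-promise catch) matters here.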
What makes this hard
CSS only
No framework as a safety net
Without Tailwind or Bootstrap, every layout decision is deliberate. We will inspect your CSS. Generated AI CSS output has identifiable patterns — repetitive utility-style classes, over-specified selectors, inconsistent spacing systems — and we look for all of them.
API failure
Partial data is the normal case
External APIs fail in production. Each section of your UI must have a designed, layout-safe state for "this data is unavailable right now." A blank div or a console error is not a designed state.
Caching
TTL is a product decision
News results, location data, and availability all have different staleness tolerances. Wrong TTLs either hammer the rate-limited API or show data that is hours old. Justify your TTL choice for each source.
Parallelism
Fan-out on the backend
All external API calls must fire simultaneously. A sequential implementation that takes 3s total when all APIs are healthy is a correctness failure — not a performance issue. Use Promise.all or equivalent.
Rate limiting
In-memory, no Redis
Implement IP-based rate limiting without an external store. Define your in-memory strategy clearly and be honest in your README about what happens to limits when the server restarts — we will ask about this.
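One possible in-memory strategy, sketched: a sliding window of timestamps per IP. The limit and window follow the R5 numbers; the function and variable names are assumptions, and the restart caveat is exactly the one the brief asks you to document:

```javascript
// Illustrative sliding-window limiter: 3 submissions per IP per hour, no Redis.
// State is a plain Map, so all counters reset when the process restarts.
const WINDOW_MS = 60 * 60 * 1000;
const LIMIT = 3;
const hitsByIp = new Map(); // ip -> timestamps of accepted requests in-window

function allowSubmission(ip, now = Date.now()) {
  const recent = (hitsByIp.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= LIMIT) {
    hitsByIp.set(ip, recent);
    // The structured JSON error and Retry-After header live in the HTTP layer.
    return { allowed: false, retryAfterMs: WINDOW_MS - (now - recent[0]) };
  }
  recent.push(now);
  hitsByIp.set(ip, recent);
  return { allowed: true };
}
```

A fixed-window counter is simpler but allows up to 2× the limit across a window boundary — whichever you pick, say so in the README.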
Originality
Not LinkedIn, Practo, or Justdial
We know what every professional profile site in India looks like. A layout that mirrors any of them will be flagged. Your visual hierarchy, type scale, and colour decisions must be clearly your own.
Evaluation
25%
Frontend quality — CSS & layout
25%
API integration & failure states
20%
Backend architecture & caching
20%
Progressive loading & UX
10%
Form, validation & rate limiting
We will disable one of your external APIs mid-test and inspect the UI. We will load the page on a throttled 3G connection. We will submit the contact form 4 times in under a minute and verify the rate limit response. We will read your CSS — and we know what generated CSS looks like.

You are building a multi-turn chat interface in Flutter — the kind any messaging app uses. No Firebase. No pre-built chat SDK. No Supabase. No flutter_chat_ui, bubble, or any package that abstracts the chat layer. You architect the data layer, the state machine, and the UI renderer yourself.

The interface must work correctly when the network is absent, when messages are sent in rapid succession, when the app is backgrounded mid-conversation, and when the device has never synced that conversation before. Every send must feel instant — the underlying operations can be slow, the UI must not be.

What you must build
  • R1
    Custom chat screen — no chat package — Message bubbles are custom-painted or custom-composed widgets. Bubbles show: sent/delivered/read status, timestamps, and visual grouping for consecutive messages from the same sender within 60 seconds. Long messages must not overflow. Keyboard appearance must not break scroll position.
  • R2
    Optimistic send with rollback — Tap Send: message appears instantly in the UI with a "sending" indicator. Server confirms within 5s: status updates to delivered. Server fails: message becomes visually distinct with a retry option. A message must never sit in an ambiguous state with no visible feedback to the user.
  • R3
    Offline-first SQLite storage via sqflite — All received messages stored locally. On app restart, the last 50 messages load from SQLite before any network call. The app must be fully usable offline — displaying cached messages and queueing outbound ones for delivery when connectivity restores. Hive and shared_preferences are not permitted for message storage.
  • R4
    Unread count chip and smart auto-scroll — If the user is scrolled up and a new message arrives, show a "↓ N new messages" chip. Tapping it scrolls to bottom and clears the count. If the user is already at the bottom, auto-scroll silently. Both behaviours must be implemented without a package.
  • R5
    FCM notification handling in all three app states — Foreground: show an in-app banner. Background: notification in system tray, tapping opens the correct conversation at the correct scroll position — not the app home screen. Terminated: cold start directly into the right conversation. All three states must work correctly.
  • R6
    Mock server with configurable failure rate — A small Node.js or Dart server that simulates message delivery with configurable latency (ms) and failure rate (0–1). Your README must show exactly how to set failure_rate=0.8 to test rollback behaviour, and what the expected UI output is at that setting.
What makes this hard
No packages
Custom bubble rendering
flutter_chat_ui, bubble, chat_bubbles are all banned. You draw the bubbles. You manage the ListView. You handle keyboard insets. This is specifically how we test whether you understand Flutter layout versus how to configure a library.
Message state machine
5 states, no ambiguity
Each message is in exactly one of: queued, sending, sent, delivered, failed. Each state has a distinct visual. A message moving from failed back to sending on retry must not flicker or produce a duplicate entry in the list.
Keyboard & scroll
The hardest 20 lines in Flutter
When the keyboard appears, the last message must remain visible without a jump, a flicker, or a scroll to a wrong position. Get this wrong and the app feels broken within 10 seconds of use. Most submissions get this wrong.
SQLite schema
Design before you code
Your schema must support: conversations, messages, delivery status, and outbound queue. Include CREATE TABLE statements in your README. We will ask you in the interview why you made specific column choices.
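For orientation only, one possible starting shape for the four concerns named above. Every table and column name here is an assumption — the brief requires you to design and defend your own:

```sql
-- Hypothetical starting point, not the required schema.
CREATE TABLE conversations (
  id         TEXT PRIMARY KEY,
  title      TEXT NOT NULL,
  updated_at INTEGER NOT NULL              -- unix ms, for ordering the list
);

CREATE TABLE messages (
  id              TEXT PRIMARY KEY,        -- client-generated, stable across retries
  conversation_id TEXT NOT NULL REFERENCES conversations(id),
  sender          TEXT NOT NULL,
  body            TEXT NOT NULL,
  status          TEXT NOT NULL CHECK (status IN
                    ('queued','sending','sent','delivered','failed')),
  read_at         INTEGER,                 -- NULL until a read receipt arrives
  created_at      INTEGER NOT NULL
);

-- Outbound queue: claimed in enqueue order so rapid sends stay ordered.
CREATE TABLE outbox (
  message_id  TEXT PRIMARY KEY REFERENCES messages(id),
  enqueued_at INTEGER NOT NULL
);
```

A client-generated message id is one way to make retries idempotent — the same row is updated rather than a duplicate inserted. Be ready to defend whatever key choice you make.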
Rapid sends
Concurrency in the write queue
User taps Send 5 times in 2 seconds with the mock server at 500ms latency. All 5 must appear in order, send in order, and produce no race condition in the SQLite write queue. Demonstrate this in your screen recording.
FCM three states
Most submissions handle only one
Foreground is easy. Background is testable. Terminated (cold start from a notification tap) requires deep-link routing that most developers skip. All three must work. We will test all three.
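The 5-state machine above can be pinned down as a transition table. Sketched here in JavaScript for compactness — in the app itself this would be a Dart enum with the same shape:

```javascript
// Legal transitions for the five message states. Anything not listed throws,
// which is what keeps a retry from producing a duplicate or an ambiguous state.
const TRANSITIONS = {
  queued:    ["sending"],
  sending:   ["sent", "failed"],
  sent:      ["delivered"],
  delivered: [],                // terminal for the sender's copy
  failed:    ["sending"],       // retry re-enters sending on the SAME row
};

function transition(current, next) {
  if (!(TRANSITIONS[current] ?? []).includes(next)) {
    throw new Error(`illegal transition: ${current} -> ${next}`);
  }
  return next;
}
```

Read receipts (R1) would extend this with a delivered → read edge; how you reconcile that with the five-state list is a decision worth a line in your README.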
Evaluation
30%
Custom UI — no packages
25%
Offline-first & state correctness
20%
Optimistic send & rollback
15%
FCM all three app states
10%
Code structure & README
We run your app with the mock server at 80% failure rate and tap Send rapidly. We kill the app mid-send and reopen it. We put the device in airplane mode, send 3 messages, restore connectivity, and verify delivery order and count. Any chat, bubble, or offline-storage package that bypasses the stated constraints is immediate disqualification.
What You Win

One winner.
One real job.

The best submission in each domain wins a paid internship at LAWL — after clearing a technical interview. No consolation prizes. No participation trophies. One winner, one offer, one interview. This is how we hire.

◆ Winner — 1 per Domain
Internship at LAWL
The #1 submission in each domain wins a paid internship at LAWL. You work on the actual product — production codebase, real users, systems that matter.

Winners are contacted on 3 May 2026 and go through a technical interview before the offer is confirmed. The interview is a conversation about every decision in your README — so write it like someone will read it aloud in a room.

4 internships total — one winner per domain:
  • UX — UI / UX Designer
  • BE — Backend Engineer
  • MN — MERN Engineer
  • FL — Flutter Engineer
◈ Winner Certificate
Certificate of Achievement
Every domain winner receives a Certificate of Achievement — digitally signed by the LAWL founding team, verifiable, and shareable on LinkedIn.

Whether or not you join us, the certificate is a record that you won a technical contest judged against production engineering standards. No participation certs. Only winners.
Internship offers by domain
UX
UI / UX Designer
Internship at LAWL
BE
Backend Engineer
Internship at LAWL
MN
MERN Engineer
Internship at LAWL
FL
Flutter Engineer
Internship at LAWL
Certificate of Achievement — sample
LAWL Certificate of Achievement — Build for LAWL 2026 Domain Winner
Issued digitally on 3 May 2026. Verifiable and LinkedIn-shareable.
Submit Your Work

Ready to build
something real?

No cover letter. No aptitude test. Submit your GitHub repo and a walkthrough video. If your submission is the best in your domain, we reach out directly. Deadline: 1 May 2026, midnight IST.

Submission Form
No cover letter needed. Fields marked * are required. Your GitHub repo must be public — private repos won't be reviewed. We open every submission manually.
Walkthrough video: 3–5 minutes — show a happy path and at least one failure scenario.
Submission received.
Your work is on its way. Results announced on 3 May 2026.
We'll be in touch within 3 business days.
Join WhatsApp