FAANG OA Question Guide [2026]: Real Online Assessment Questions

Platform formats, difficulty breakdowns, and verified OA questions from Google, Meta, Amazon, Apple, Netflix, Stripe, OpenAI, Anthropic, and Microsoft — all sourced from real candidate reports.

Sourced from 1Point3Acres, LeetCode discuss, Reddit, and Blind. Last updated 2026.

Quick Answer

FAANG OAs differ significantly by company: Google uses a proprietary platform (2-3 problems, 90 min), Amazon uses HackerRank (2 problems, 105 min), Meta uses CodeSignal GCA (4 tasks, 70 min), Apple uses HackerRank or take-home, Netflix has no OA, Stripe uses a 3-4 hour take-home project. LeakCode has 2317+ verified OA question threads across these companies.

1. What Is an OA and Why It Matters

An Online Assessment (OA) is the automated, asynchronous coding test that most large tech companies use as the first technical screening gate. Unlike a phone screen or onsite round, an OA has no human on the other end — you complete it on your own time within a deadline window (typically 7 days from when you receive the link).

OAs serve two purposes for companies: they eliminate candidates who cannot write working code before investing recruiter and engineer time in phone screens, and they provide a standardized, defensible signal that can be referenced in later hiring decisions. For candidates, the OA is often the fastest, lowest-pressure gate in the entire interview process — the OA is the round you can clear without an audience.

However, OA performance matters more than candidates typically expect. At companies like Amazon, a strong OA score (both problems solved cleanly, no wasted attempts) is weighted in the final hiring committee packet. At Meta, your CodeSignal GCA score is a permanent data point that any company using CodeSignal can request. Treating the OA as a throwaway first step is a strategic mistake.

LeakCode has 2317+ verified OA question threads across all major tech companies. See the full OA question set at the OA topic hub.

2. OA Platforms: HackerRank vs CodeSignal vs Karat vs Custom

Knowing which platform a company uses before your OA removes a layer of uncertainty. Different platforms have different IDE behaviors, test case formats, and scoring models. Here is a breakdown of the four dominant OA platforms and which companies use each:

HackerRank

Browser-based IDE with syntax highlighting and a custom test case runner. You can run your code against sample cases before submitting. Scoring is based on test cases passed — partial credit is possible in some OA configurations. Some OA versions include debug challenges (fix the given broken code) and code review tasks alongside standard coding problems.

Used by: Amazon, Stripe, Bloomberg, Apple (some teams)

CodeSignal (GCA)

CodeSignal's General Coding Assessment (GCA) is a standardized test: 4 tasks ramping from easy to very hard, 70 minutes total. Your score (0-850 scale) is attached to your CodeSignal profile and can be shared with multiple companies. Important: if you score well on a GCA, you can send that score directly without retaking the OA at each company. If you score poorly, a new invite from a different company gives you a fresh attempt.

Used by: Meta, Robinhood, DoorDash, Lyft (some pipelines)

Karat

Karat is not a self-paced OA — it is a live phone screen conducted by professional Karat interviewers (not employees of the hiring company). You will schedule a time, join a video call, and solve 1-2 problems in a browser-based IDE. Karat interviews feel like a standard coding phone screen, but your interviewer is a Karat employee following a standardized rubric. Results are forwarded to the hiring company as a structured report.

Used by: Meta (university programs), Confluent, Carta

Custom / Proprietary Platforms

Google and Stripe run their OAs outside the major third-party platforms. Google's custom platform offers a plain text editor (no autocomplete, basic syntax highlighting), and the OA interface closely mirrors what candidates use in later onsite rounds (Google Docs or a similar plain-text environment). Stripe's take-home is delivered via email/ZIP with instructions — no platform IDE at all. OpenAI and Anthropic have used bespoke take-home projects.

Used by: Google, Stripe, OpenAI, Anthropic

3. Per-Company OA Breakdowns

Google OA

Platform: Google's proprietary coding platform. Format: 2-3 problems in 90 minutes. Who gets it: New grad and intern applicants primarily; experienced hire pipelines skip directly to phone screen.

Difficulty range: easy to medium-hard. Most candidates report one easy or easy-medium warmup and one medium or medium-hard main problem. Partial solutions that are well-explained typically advance candidates. Google's OA platform records problem-solving behavior (read time, edit patterns), not just output.

See all Google OA questions: /company/google/oa. Full Google prep: Google Interview Prep Guide 2026.

Amazon OA

Platform: HackerRank. Format: 2 coding problems (105 minutes) plus a Work Simulation module (optional in some pipelines) and a Work Styles Assessment (personality questionnaire, ~15 min). Who gets it: SDE New Grad, SDE I, SDE II pipelines; some experienced hires also get it.

Amazon OA coding problems are typically medium difficulty — think sliding window, greedy interval scheduling, or graph traversal problems. Amazon OAs have been reported to include a "debugging" section where you fix broken code in 30 minutes and a "code review" section where you identify bugs in provided code. These sections appear in SDE I pipelines more frequently than SDE II.

See all Amazon OA questions: /company/amazon/oa. Amazon full guide: Amazon Interview Guide.

Meta OA (CodeSignal GCA)

Platform: CodeSignal (standardized GCA). Format: 4 tasks in 70 minutes. Task 1 is easy (10-12 min), Task 2 is easy-medium (15 min), Task 3 is medium-hard (20+ min), Task 4 is hard (most candidates do not complete it). Who gets it: University recruiting and new grad pipelines; experienced hires typically skip to phone screen.

Meta's GCA cutoff is not publicly published but is widely reported to be around 600+ (out of 850). Completing Tasks 1-3 cleanly with a few minutes on Task 4 typically hits this threshold. Key strategy: do not optimize Task 1-2 beyond a working solution — there is not enough time.

See all Meta OA questions: /company/meta/oa. Meta full guide: Meta Interview Guide.

Apple OA

Platform: HackerRank or take-home (varies by team). Format: No standardized format. Some Apple teams send a 2-3 problem HackerRank OA; others send a take-home coding project with a multi-day deadline. Who gets it: New grad and some experienced hire pipelines, especially for software platform and developer tools roles.

Apple's OA is the least standardized among the major tech companies. Difficulty and time limits vary significantly by team and role. Candidates targeting iOS/macOS roles may receive Swift-specific OA questions. See all Apple questions: /company/apple/oa.

Netflix: No OA

Netflix does not use an OA. Their process begins with a recruiter call, then moves directly to a technical phone screen. Netflix's interview philosophy is to move fast and avoid process overhead, consistent with its high-trust culture. The absence of an OA means that Netflix phone screens serve as the first technical filter. See Netflix questions.

Stripe Take-Home OA

Platform: Custom take-home (email/ZIP delivery). Format: 3-4 hour implementation task with a multi-day deadline. What they test: real-world software engineering — API design, data modeling, clean code, error handling. Not LeetCode-style algorithmic problems.

Stripe OA tasks are typically open-ended implementation problems: build a simplified payment processing module, parse and validate a configuration file, implement a simplified in-memory store. Stripe reviewers explicitly care about code quality, test coverage, and documentation quality — not just whether your solution passes automated tests.

See verified Stripe OA reports: /company/stripe/oa. Stripe full guide: Stripe Interview Guide.

OpenAI and Anthropic OA

Both companies use take-home style assessments for some pipelines. OpenAI's OA has been reported as a multi-hour technical project, often ML-adjacent or system design focused. Anthropic similarly uses take-home coding tasks that test software engineering fundamentals with some AI/ML context. Both companies move quickly when impressed by a take-home. See OpenAI questions and Anthropic questions.

Microsoft OA

Platform: HackerRank (most pipelines). Format: 3 coding problems in 60-90 minutes. Microsoft's OA difficulty is medium, with some hard problems for SDE II+ pipelines. Microsoft also includes a Problem Solving module alongside coding for some roles. See Microsoft OA questions.

4. Common OA Question Types (With Real Data)

Based on LeakCode's verified OA question threads, the following topic patterns appear most frequently across FAANG OAs:

Arrays and Hash Maps

Appears in virtually every FAANG OA. Two-sum variants, sliding window maximum, frequency counting, anagram detection. These problems test hash map fluency — the candidate's ability to trade space for time instantly.

Browse Arrays questions
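
As an illustration of the pattern (a minimal sketch, not a verbatim OA problem), the classic two-sum shows the space-for-time trade: one hash map pass replaces the O(n²) brute-force pair check.

```python
def two_sum(nums, target):
    """Return indices of two numbers summing to target, else []."""
    seen = {}  # value -> index of where we saw it
    for i, x in enumerate(nums):
        if target - x in seen:          # complement already seen?
            return [seen[target - x], i]
        seen[x] = i                     # O(n) space buys the O(n) pass
    return []
```

Variants asked in OAs change the framing (pairs with a given difference, frequency counts, anagram grouping) but reuse this same single-pass hash map skeleton.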
String Manipulation

String parsing, pattern matching, substring problems. Minimum window substring, valid parentheses, decode string. Amazon OAs lean heavily on string simulation; Google OAs frequently include string + hash map combinations.

Browse String questions
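
For a sense of the difficulty floor here, valid parentheses (one of the problems named above) reduces to a stack sketch like this:

```python
def is_valid(s):
    """True if every bracket in s is closed in the correct order."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in pairs:
            # A closer must match the most recent unmatched opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            stack.append(ch)
    return not stack  # leftover openers mean an unclosed bracket
```

Harder string slots (minimum window substring, decode string) layer a sliding window or an explicit stack of partial results on top of the same parse-one-character-at-a-time loop.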
Graphs and BFS/DFS

Grid traversal, connected components, shortest path problems. Common in Google and Meta OAs for harder problem slots. Number of islands, word ladder, course schedule. Candidates who know BFS vs DFS trade-offs instinctively handle these well under time pressure.

Browse Graph questions
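
Number of islands, mentioned above, is the canonical grid-BFS template; this sketch (assuming the usual "1"/"0" string grid) is the shape most harder OA graph slots expect:

```python
from collections import deque

def num_islands(grid):
    """Count connected components of '1' cells (4-directional)."""
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "1" and (r, c) not in seen:
                count += 1                     # new island found
                q = deque([(r, c)])
                seen.add((r, c))
                while q:                       # BFS flood fill
                    cr, cc = q.popleft()
                    for nr, nc in ((cr+1, cc), (cr-1, cc),
                                   (cr, cc+1), (cr, cc-1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] == "1"
                                and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            q.append((nr, nc))
    return count
```

Swapping the queue for recursion gives the DFS version; under OA time pressure, BFS with an explicit queue avoids recursion-depth surprises on large grids.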
Dynamic Programming

Appears in Meta GCA Task 4 and Google OA hard slots. Coin change, house robber, knapsack variants. Most OAs do not require novel DP insight — recognizing the pattern and applying a known template is sufficient. Memorize 5-8 DP patterns: Fibonacci, 0/1 knapsack, LCS, LIS, interval DP.

Browse DP questions
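
Coin change, named above, is the template worth having memorized for the unbounded-knapsack pattern; a minimal bottom-up sketch:

```python
def coin_change(coins, amount):
    """Fewest coins summing to amount, or -1 if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * amount          # dp[a] = fewest coins for amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1  # extend a smaller solution by coin c
    return dp[amount] if dp[amount] != INF else -1
```

Recognizing "this is unbounded knapsack" and writing this table from memory is exactly the skill the hard OA slot tests; deriving it from scratch under a timer is what burns candidates.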
Sorting and Greedy

Amazon OAs frequently involve scheduling or interval problems solved with greedy sorting. Meeting rooms, task scheduler, minimum cost to connect points. These problems require recognizing when a greedy choice is provably optimal — a non-obvious skill that requires explicit practice.
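
As one concrete instance of the interval pattern (a sketch of the standard meeting-rooms approach, not a specific reported OA problem), the minimum number of rooms falls out of a sweep over sorted start and end times:

```python
def min_meeting_rooms(intervals):
    """Minimum rooms so that no two (start, end) meetings overlap."""
    starts = sorted(s for s, _ in intervals)
    ends = sorted(e for _, e in intervals)
    rooms = best = 0
    e = 0
    for s in starts:
        while e < len(ends) and ends[e] <= s:
            rooms -= 1          # a meeting ended; its room frees up
            e += 1
        rooms += 1              # this meeting needs a room right now
        best = max(best, rooms)
    return best
```

The greedy step — always reusing a freed room before opening a new one — is provably optimal because the sweep counts the maximum number of simultaneously active intervals, which is a lower bound no schedule can beat.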

See topic-specific question lists: Arrays, Graph, Dynamic Programming, Strings. See all topic hubs at /topics/.

5. Time Management Strategy

Time allocation is the skill that separates candidates who pass OAs from those who fail them. Most candidates who fail OAs do not fail because they cannot solve the problems — they fail because they spent 80% of their time on Problem 1 and submitted a blank Problem 2.

Amazon OA (105 min, 2 problems)

  • Problem 1: 30 minutes maximum. Get a working solution, not an optimal one.
  • Problem 2: 55 minutes. This is the harder problem. Brute force first, then optimize if time permits.
  • Last 20 minutes: Add edge cases, clean up code, rerun provided test cases.
  • Work simulation (if included): Allocate 15-20 minutes after coding. Do not rush it.

Google OA (90 min, 2-3 problems)

  • Problem 1 (easy): 15-20 minutes. Clean implementation, all edge cases.
  • Problem 2 (medium): 45-50 minutes. Full explanation in comments, optimize if possible.
  • Problem 3 if present (medium-hard): 20-25 minutes. Attempt a partial solution rather than leaving it blank.
  • Google's platform may not give partial credit by test case — submit clean solutions, not debug-in-progress code.

Meta CodeSignal GCA (70 min, 4 tasks)

  • Task 1 (easy): 10 minutes maximum. First-pass correct solution, no micro-optimization.
  • Task 2 (easy-medium): 12 minutes. Brute force acceptable if it passes all test cases.
  • Task 3 (medium-hard): 25 minutes. This is the score-determining task. Do not give up early.
  • Task 4 (hard): Whatever time remains. Attempt partial solution — even 1-2 test cases passed matters for score.
  • CodeSignal deducts for additional test runs and runtime errors — test mentally before running.

6. Real OA Questions from LeakCode

The questions below were reported by candidates as actual OA problems at major tech companies. These come from LeakCode's verified candidate reports — real problems, real contexts.

  • Job finished (Google, 2026): career, job search, behavioral, OA, LeetCode
  • Best Resources/Materials For FAANG Companies (Microsoft, 2026): OA, preparation, coding, resources
  • Extremely Frustrated with Meta Interview Process (Meta, 2025): security, behavioral, LeetCode, interview process, internship
  • A Straightforward Guide To Getting Your First FAANG Offer (Google, 2025): system design, behavioral, dynamic programming, binary search, hash table, graph
  • [Officially Live] Meta’s New AI-enabled Coding Round: What I’ve Learned So Far (Google, 2025): Meta, AI coding, interview format, debugging, unit testing
  • Detailed Prep Breakdown: Startup Job > Big Tech Offers (Meta, 2025): graph, strings, binary tree, system design, dynamic programming, behavioral
  • New Grad Meta Interview Experience (Meta, 2024): interview experience

See all OA questions by company: Google, Amazon, Meta, Microsoft, Stripe.

7. How to Use LeakCode for OA Prep

LeakCode's OA-specific filtering lets you drill directly on the questions your target company has actually asked — not a generic LeetCode list. Here is the most effective prep workflow:

  1. Go to your target company's OA page

     Navigate to /company/{company}/oa (e.g., /company/google/oa). This shows only OA-tagged questions from that company, sorted by frequency.

  2. Read full thread context for each question

     OA questions on LeakCode include the original candidate's report: what platform they used, how much time they had, what follow-ups were asked (if any), and whether they passed. This context tells you the actual difficulty level and what a "passing" attempt looked like.

  3. Practice under timed, no-autocomplete conditions

     If you identified 20 high-frequency OA questions for your target company, practice them in a plain text editor with a timer. Simulating the exact OA conditions (no IDE, time pressure) trains you to write cleaner code faster than any un-timed practice session.

  4. Cross-reference with topic hubs for pattern drilling

     If the OA questions cluster around graphs, go to /topic/graph and drill all verified graph questions across companies. Pattern drilling outperforms one-off problem solving for OA prep.

For source transparency: /sources. For verification methodology: /methodology. For a deep-dive on where interview questions originate: Where Real Leaked Questions Come From.

8. Frequently Asked Questions

What platform does Google use for its OA?

Google uses a proprietary coding platform — not HackerRank or CodeSignal. It has a custom IDE with no autocomplete. The OA is used for new grad and intern candidates: 2-3 problems in 90 minutes.

Does Meta use CodeSignal for its OA?

Yes. Meta uses CodeSignal's General Coding Assessment (GCA): 4 tasks in 70 minutes, scored 0-850. Your GCA score is attached to your CodeSignal profile and can be shared with multiple companies. Meta's widely reported minimum threshold is around 600+.

How hard is Amazon's OA?

Amazon OA coding problems are medium difficulty: sliding window, greedy interval scheduling, graph traversal. The harder challenge is time management — 105 minutes for 2 problems plus optional work simulation and work styles modules. Candidates who practice similar problems under time pressure consistently pass.

Is Stripe's OA LeetCode-style?

No. Stripe's take-home OA is a multi-hour implementation project (3-4 hours), not a timed competitive coding test. It tests real software engineering: API design, clean code, error handling, and test coverage. Stripe explicitly values code quality over raw algorithmic performance.

Can I use LeakCode to see actual OA questions?

Yes. LeakCode aggregates OA questions reported by real candidates from verified public sources. Navigate to /company/{company}/oa for company-specific OA questions sorted by frequency. All questions link to original source threads.

See Full OA Question Threads

Get access to complete OA question text, candidate context, and company-specific frequency data for all 2317+ verified OA reports.

Get Access