IXL Software Engineer Phone Screen Questions
8+ questions from real IXL Software Engineer phone screen rounds, reported by candidates who interviewed there.
What does the IXL Phone Screen round test?
The IXL phone screen typically lasts 45-60 minutes and evaluates core Software Engineer fundamentals. Candidates should expect 1-2 algorithmic problems, a basic system design discussion at senior levels, and questions about relevant experience. The goal is to confirm technical competence before bringing candidates onsite.
IXL SWE Phone - HTML Parser
## Problem
Parse an HTML string to extract structure, validate tag nesting, or transform content.

## Likely LeetCode equivalent
No close equivalent.

## Tags
coding, strings, parsing, stack, phone-screen
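The tag-nesting variant is the classic stack exercise. A minimal sketch, assuming a simplified HTML subset (no comments or attribute edge cases; self-closing tags like `<br/>` are skipped):

```python
import re

def tags_balanced(html: str) -> bool:
    """Return True if every opening tag has a matching close in proper order.

    Simplified sketch: push each opening tag name on a stack, pop on a
    closing tag, and require the names to match. Ignores comments and
    treats anything ending in '/>' as self-closing.
    """
    stack = []
    # Groups: optional leading '/', tag name, optional trailing '/' before '>'
    for match in re.finditer(r"<(/?)([a-zA-Z][a-zA-Z0-9]*)[^>]*?(/?)>", html):
        closing, name, self_closing = match.groups()
        if self_closing:                      # e.g. <br/> needs no partner
            continue
        if closing:                           # </name> must match stack top
            if not stack or stack.pop() != name:
                return False
        else:
            stack.append(name)
    return not stack                          # every opened tag was closed
```

Interviewers often extend this toward building a tree of the parsed structure; the same stack naturally tracks the current parent node.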
IXL SWE Phone - Log IDs
## Problem
Process log entries to extract, deduplicate, or aggregate unique IDs from a log stream.

## Likely LeetCode equivalent
No close equivalent.

## Tags
coding, arrays, hash-table, phone-screen
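The core pattern is a hash set for seen IDs plus a list to preserve first-seen order. A sketch, assuming (hypothetically) a `timestamp id message` line format; the field split would be adapted to the real log layout:

```python
def unique_ids(log_lines):
    """Return unique IDs from a log stream, in first-seen order.

    Assumes each line looks like 'timestamp id message' (an assumption
    for illustration); malformed lines are skipped.
    """
    seen = set()
    ordered = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue                  # skip malformed entries
        entry_id = parts[1]
        if entry_id not in seen:      # O(1) membership check
            seen.add(entry_id)
            ordered.append(entry_id)
    return ordered
```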
IXL SWE Phone - Max Level Coverage
## Problem
Find the level in a binary tree with maximum coverage or sum, using BFS level-order traversal.

## Likely LeetCode equivalent
Related to LC 662 Maximum Width of Binary Tree.

## Tags
coding, binary-tree, BFS, phone-screen
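For the maximum-sum variant, a standard level-order BFS works: process one level per pass by snapshotting the queue length. A sketch:

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_sum_level(root):
    """Return the 1-indexed level whose node values sum highest (BFS)."""
    if not root:
        return 0
    best_level, best_sum = 1, float("-inf")
    queue, level = deque([root]), 1
    while queue:
        level_sum = 0
        for _ in range(len(queue)):       # drain exactly one level
            node = queue.popleft()
            level_sum += node.val
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        if level_sum > best_sum:
            best_sum, best_level = level_sum, level
        level += 1
    return best_level
```

The "coverage" phrasing (counting nodes per level, as in LC 662's width) swaps the sum for a count; the traversal is identical.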
## Problem
Randomly place items in a grid or array according to given constraints, ensuring uniform distribution.

## Likely LeetCode equivalent
No close equivalent.

## Tags
coding, math, randomization, phone-screen
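One way to guarantee uniformity is to sample distinct flat indices without replacement and map them back to grid coordinates. A sketch under that assumption (the reported constraints may differ):

```python
import random

def place_items(rows, cols, items):
    """Place each item in a distinct random cell, uniformly at random.

    random.sample draws flat indices without replacement, so every set
    of cells is equally likely; each index maps back via divmod-style
    arithmetic.
    """
    if len(items) > rows * cols:
        raise ValueError("more items than cells")
    grid = [[None] * cols for _ in range(rows)]
    cells = random.sample(range(rows * cols), len(items))
    for item, flat in zip(items, cells):
        grid[flat // cols][flat % cols] = item
    return grid
```

A follow-up to be ready for: why drawing random cells in a loop and retrying on collisions is biased toward earlier items and slow when the grid is nearly full.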
## Round 1 - System Design

## Problem
Design a scoring server that receives raw feature vectors in real time and returns model scores within 20ms p99. The model is a gradient boosted tree (100 MB serialized). The server handles 50K requests/sec at peak.

## Requirements
- Latency: p99 < 20ms end-to-end (network + inference).
- Throughput: 50K RPS peak, 10K RPS average.
- The model is updated daily; zero-downtime rollout required.
- Feature input: JSON payload, ~50 float fields.

## Design Points
```
Load Balancer -> Scoring Fleet (stateless workers)
Workers: deserialize JSON -> validate -> run model -> return score
Model loaded in-process (no subprocess call)
Blue/Green deploy: new model warmed up, traffic shifted atomically
```

## Discussion Questions
- How do you manage model warm-up time when spinning up new instances?
- How do you validate incoming features for schema drift before scoring?
- What metrics do you instrument: latency histogram, score distribution, error rate?

## Follow-ups
1. How do you A/B test two model versions in production with consistent user assignment?
2. What happens when a feature is missing in the payload: impute, reject, or score with default?
3. How do you handle a latency spike caused by a single slow feature transformation?
4. How would the design differ for a deep learning model that requires a GPU?
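The "warmed up, swapped atomically" rollout step can be sketched in-process. This is a minimal illustration, not the prompt's required design: the model is assumed to be a callable, and the class/method names are invented for the sketch.

```python
import threading

class ModelServer:
    """Minimal sketch of zero-downtime model rollout inside one worker.

    The new model is loaded and warmed up off the request path, then
    swapped in with a single reference assignment, so in-flight requests
    always see a complete, warm model.
    """
    def __init__(self, model):
        self._model = model
        self._lock = threading.Lock()

    def score(self, features):
        with self._lock:             # guard the reference read
            model = self._model
        return model(features)       # inference runs outside the lock

    def deploy(self, new_model, warmup_batches):
        for batch in warmup_batches: # warm caches before taking traffic
            new_model(batch)
        with self._lock:
            self._model = new_model  # atomic pointer swap
```

Fleet-wide, the same idea becomes blue/green: a second set of workers is warmed behind the load balancer before traffic shifts.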
## Problem
Maintain a real-time leaderboard of student submissions, supporting efficient top-K queries and score updates.

## Likely LeetCode equivalent
Related to LC 1244 Design A Leaderboard.

## Tags
coding, heap, design, phone-screen
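A sketch in the spirit of LC 1244: a hash map gives O(1) score updates, and a size-K heap answers top-K in O(n log k) over the current scores.

```python
import heapq
from collections import defaultdict

class Leaderboard:
    """Hash map for O(1) updates; heapq.nlargest for top-K queries."""
    def __init__(self):
        self.scores = defaultdict(int)

    def add_score(self, player_id, points):
        self.scores[player_id] += points

    def top(self, k):
        # nlargest keeps a heap of size k internally: O(n log k)
        return sum(heapq.nlargest(k, self.scores.values()))
```

A likely follow-up is scale: when n is large and top-K is queried constantly, a sorted container (or a bounded heap maintained incrementally) trades update cost for query cost.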
IXL SWE Phone - Text Processor
## Problem
Process text input through a series of transformation rules such as tokenization, substitution, or formatting.

## Likely LeetCode equivalent
No close equivalent.

## Tags
coding, strings, parsing, phone-screen
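"A series of transformation rules" suggests a composable pipeline of `str -> str` functions. A sketch; the two example rules below are assumptions for illustration, not the rules reported from the interview:

```python
def make_pipeline(*rules):
    """Compose text-transformation rules, applied left to right.

    Each rule is a plain str -> str function, so new rules can be
    added without touching the pipeline machinery.
    """
    def process(text):
        for rule in rules:
            text = rule(text)
        return text
    return process

# Example rules (hypothetical, for illustration only)
normalize_ws = lambda s: " ".join(s.split())   # collapse runs of whitespace
lowercase = lambda s: s.lower()
```

Usage: `make_pipeline(normalize_ws, lowercase)` returns a single function that runs both rules in order.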