Confluent Software Engineer Onsite Coding Questions
5+ questions from real Confluent Software Engineer Onsite Coding rounds, reported by candidates who interviewed there.
What does the Confluent Onsite Coding round test?
The Confluent onsite coding round is the core technical evaluation. Software Engineer candidates typically see 2-3 algorithm and data structure problems. Problems range from medium to hard difficulty, and interviewers evaluate both correctness and code quality.
#37 Sudoku Solver
LeetCode #37: Sudoku Solver. Difficulty: Hard. Topics: Array, Hash Table, Backtracking, Matrix. Asked at Confluent in the last 6 months.
#36 Valid Sudoku
LeetCode #36: Valid Sudoku. Difficulty: Medium. Topics: Array, Hash Table, Matrix. Asked at Confluent in the last 6 months.
#1797 Design Authentication Manager
LeetCode #1797: Design Authentication Manager. Difficulty: Medium. Topics: Hash Table, Linked List, Design, Doubly-Linked List. Asked at Confluent in the last 6 months.
Confluent Onsite SWE
I have an onsite with Confluent, and one of the coding rounds was described as follows on the prep call: Type: practical file operations. Focus: file reading and manipulation. Expectations: solve practical problems using file APIs. Discuss...
## Problem

You are building a health monitor for a distributed system. Nodes send heartbeat pings to a central registry. Implement a class that tracks heartbeats and answers liveness queries:

- `ping(node_id, timestamp)` — record that `node_id` was alive at `timestamp`.
- `is_alive(node_id, timestamp, timeout)` — return `True` if the node sent a ping within `[timestamp - timeout, timestamp]`.
- `get_dead_nodes(timestamp, timeout)` — return all nodes that have NOT pinged within the timeout window.

```python
class LivenessMonitor:
    def ping(self, node_id: str, timestamp: int) -> None: ...
    def is_alive(self, node_id: str, timestamp: int, timeout: int) -> bool: ...
    def get_dead_nodes(self, timestamp: int, timeout: int) -> list[str]: ...
```

```
ping("A", 100)
ping("B", 105)
ping("A", 150)
is_alive("A", 160, 20)  -> True   # last ping at 150, within 20s
is_alive("B", 160, 20)  -> False  # last ping at 105, 55s ago
get_dead_nodes(160, 20) -> ["B"]
```

## Follow-ups

1. Nodes can re-register after being declared dead. How does your data structure handle that cleanly?
2. How would you scale this to 1 million nodes pinging every second?
3. `get_dead_nodes` is O(n) in your current implementation. Can you make it faster using a sorted structure?
4. What happens if node clocks are skewed by up to 5 seconds relative to the server?
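A straightforward baseline keeps only the most recent ping timestamp per node in a hash map, making `ping` and `is_alive` O(1) and `get_dead_nodes` an O(n) scan. A minimal sketch along those lines (the class and method signatures come from the problem statement; the `max()` guard against out-of-order pings is an added assumption):

```python
class LivenessMonitor:
    """Track the latest heartbeat per node and answer liveness queries."""

    def __init__(self) -> None:
        # node_id -> timestamp of the most recent ping seen
        self.last_ping: dict[str, int] = {}

    def ping(self, node_id: str, timestamp: int) -> None:
        # Keep the maximum so a late-arriving older ping cannot
        # overwrite a newer one (assumption: that is the desired behavior).
        prev = self.last_ping.get(node_id)
        self.last_ping[node_id] = timestamp if prev is None else max(prev, timestamp)

    def is_alive(self, node_id: str, timestamp: int, timeout: int) -> bool:
        # Alive iff the last ping falls inside [timestamp - timeout, timestamp].
        last = self.last_ping.get(node_id)
        return last is not None and timestamp - timeout <= last <= timestamp

    def get_dead_nodes(self, timestamp: int, timeout: int) -> list[str]:
        # O(n) scan over all known nodes (the target of follow-up 3).
        return [node for node, last in self.last_ping.items()
                if not (timestamp - timeout <= last <= timestamp)]
```

For follow-up 3, one direction to discuss is keeping nodes in a structure ordered by last-ping time (for example a balanced-tree map or a sorted list with lazy updates), so that "all nodes whose last ping is older than `timestamp - timeout`" becomes a prefix query instead of a full scan.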