Grammarly Software Engineer Phone Screen Questions

11+ questions from real Grammarly Software Engineer Phone Screen rounds, reported by candidates who interviewed there.

11 Questions · 7 Topic Areas · 10+ Sources

What does the Grammarly Phone Screen round test?

The Grammarly phone screen typically lasts 45-60 minutes and evaluates core Software Engineer fundamentals. Candidates should expect 1-2 algorithmic problems, basic system design discussion at senior levels, and questions about relevant experience. The goal is to confirm technical competence before bringing candidates onsite.

Grammarly Software Engineer Phone Screen Questions

Two easy-difficulty questions, both fairly straightforward.

The phone interview lasted one hour. The interviewer was a very friendly MIT alumna (ABC) who had stayed at her original startup for almost a year after it was acquired by Grammarly. The interview co…

## Problem

Given a string `s` and an integer `k`, return all unique substrings of exactly length `k`, sorted lexicographically.

```python
def all_k_substrings(s: str, k: int) -> List[str]: ...
```

**Example:**

```
Input: s="abcabc", k=3
Output: ["abc", "bca", "cab"]
# Sliding window gives: "abc","bca","cab","abc" -> unique, sorted

Input: s="aaaa", k=2
Output: ["aa"]

Input: s="ab", k=5
Output: []  # k > len(s)
```

## Approach

Slide a window of size `k` across `s` and collect all substrings into a set, then sort. Time O(n*k) for hashing plus O(k * m log m) for sorting, where m = number of unique substrings.

## Follow-ups

1. How would you use a rolling hash to reduce per-window work to O(1)?
2. If `k` is very large and the string has many repeats, what is the maximum number of unique substrings?
3. Extend: return each unique substring along with its frequency (count of occurrences).
4. How does a suffix array solve this problem and what is its time complexity?
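A minimal sketch of the sliding-window-plus-set approach described above:

```python
from typing import List

def all_k_substrings(s: str, k: int) -> List[str]:
    # Guard the degenerate cases: k > len(s) yields no windows.
    if k <= 0 or k > len(s):
        return []
    # Slide a length-k window, dedupe with a set, then sort.
    return sorted({s[i:i + k] for i in range(len(s) - k + 1)})

print(all_k_substrings("abcabc", 3))  # ['abc', 'bca', 'cab']
print(all_k_substrings("ab", 5))      # []
```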

## Problem

In an array of n integers from 1 to n with one duplicate and one missing, find both the duplicate and the missing number.

## Likely LeetCode equivalent

LC 645 - Set Mismatch (>80% confident)

## Tags

arrays, math, hash_table
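A sketch of one common approach to LC 645, combining a set with the arithmetic sum of 1..n:

```python
from typing import List

def find_error_nums(nums: List[int]) -> List[int]:
    n = len(nums)
    seen = set(nums)                        # unique values actually present
    duplicate = sum(nums) - sum(seen)       # the extra copy's value
    missing = n * (n + 1) // 2 - sum(seen)  # gap vs. the full 1..n sum
    return [duplicate, missing]

print(find_error_nums([1, 2, 2, 4]))  # [2, 3]
```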

## Problem

Compute the nth Fibonacci number using recursion, memoization, or iterative DP.

## Likely LeetCode equivalent

LC 509 - Fibonacci Number (>80% confident)

## Tags

recursion, dynamic_programming, math
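A minimal iterative-DP sketch, O(n) time and O(1) space:

```python
def fib(n: int) -> int:
    a, b = 0, 1  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b  # advance the pair one step
    return a

print([fib(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Recursive memoization (e.g. `functools.lru_cache`) trades the O(1) space for stack depth but reads closer to the mathematical definition.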

## Problem

Given a list of time points in HH:MM format, find the minimum difference in minutes between any two time points.

## Likely LeetCode equivalent

LC 539 - Minimum Time Difference (>80% confident)

## Tags

math, sorting, strings
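A sketch of the usual sort-and-scan solution, remembering the wrap-around gap past midnight:

```python
from typing import List

def find_min_difference(time_points: List[str]) -> int:
    # Convert "HH:MM" to minutes since midnight and sort.
    minutes = sorted(int(t[:2]) * 60 + int(t[3:]) for t in time_points)
    # Seed with the wrap-around gap between the latest and earliest times.
    best = 24 * 60 - minutes[-1] + minutes[0]
    for a, b in zip(minutes, minutes[1:]):
        best = min(best, b - a)
    return best

print(find_min_difference(["23:59", "00:00"]))  # 1
```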

## Problem

Given a list of integers and a reduction rule, repeatedly apply the rule until only one element remains. Return that element.

**Rule:** In each pass, scan left to right. Remove any element that is smaller than its right neighbor. If no element is removed in a pass, stop. Return the last remaining element.

```python
def reduce_list(nums: List[int]) -> int: ...
```

**Example:**

```
Input: [3, 1, 4, 1, 5, 9, 2, 6]
Pass 1: remove 3 (3<4)? No, 3>1.
        remove 1 (1<4) -> [3,4,1,5,9,2,6]
        remove 1 (1<5) -> [3,4,5,9,2,6]
        remove 2 (2<6) -> [3,4,5,9,6]
Pass 2: no element < right neighbor that qualifies -> stops at [3,4,5,9,6]

# Note: clarify the exact rule with the interviewer.
# Simpler variant: remove the smallest element each pass:
nums=[5,3,1] -> remove 1 -> [5,3] -> remove 3 -> [5] -> output 5
```

## Follow-ups

1. Prove the invariant: what property does the remaining list always satisfy after each pass?
2. What is the worst-case number of passes for a list of length n?
3. How does the answer change if you remove elements larger than their right neighbor instead?
4. How would you parallelize the reduction using a segment tree?
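Because the reported rule is ambiguous (the trace above removes elements one at a time and rescans), here is a hedged sketch of one literal reading: each pass simultaneously drops every element smaller than its right neighbor, stopping at a fixed point. It may not match the interviewer's intended rule and can terminate with more than one element.

```python
from typing import List

def reduce_list(nums: List[int]) -> int:
    while len(nums) > 1:
        # Keep an element if it is >= its right neighbor; the last element
        # has no right neighbor and always survives the pass.
        kept = [nums[i] for i in range(len(nums))
                if i == len(nums) - 1 or nums[i] >= nums[i + 1]]
        if len(kept) == len(nums):
            break  # fixed point: nothing removed this pass
        nums = kept
    # Under this reading more than one element can remain; returning the
    # last survivor is a convention to confirm with the interviewer.
    return nums[-1]

print(reduce_list([3, 1, 4, 1, 5, 9, 2, 6]))  # 6 under this reading
```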

## Problem

Remove adjacent duplicate characters from a string repeatedly until no adjacent duplicates remain, using a stack.

## Likely LeetCode equivalent

LC 1047 - Remove All Adjacent Duplicates In String (>80% confident)

## Tags

stack, strings
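A minimal stack sketch:

```python
def remove_duplicates(s: str) -> str:
    stack = []
    for ch in s:
        if stack and stack[-1] == ch:
            stack.pop()       # the adjacent pair cancels
        else:
            stack.append(ch)  # cancellations cascade via the stack top
    return "".join(stack)

print(remove_duplicates("abbaca"))  # "ca"
```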

## Problem

Implement a text splitter that segments a long document into chunks suitable for ML model ingestion. Given a document string and a `max_chunk_tokens` limit, split the text while:

- Preserving sentence boundaries (do not cut mid-sentence).
- Keeping paragraphs together when they fit.
- Adding configurable overlap (the last `overlap_tokens` tokens of the previous chunk appear at the start of the next).

```python
def split_text(
    text: str,
    max_chunk_tokens: int,
    overlap_tokens: int = 50
) -> List[str]: ...
```

**Example:**

```
text = "Hello world. How are you?\n\nThis is paragraph two. It has two sentences."
split_text(text, max_chunk_tokens=10, overlap_tokens=2)
# -> ["Hello world. How are you?",
#     "are you? This is paragraph two.",
#     "paragraph two. It has two sentences."]
# (token counts approximate)
```

## Follow-ups

1. How do you estimate token count without running a full tokenizer? When does the approximation break?
2. How would you handle code blocks or tables inside the document - should they be split differently?
3. What is the effect of `overlap_tokens` on downstream retrieval quality in a RAG pipeline?
4. How would you parallelize splitting for a 1M-document corpus?
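A heavily simplified sketch: "tokens" are approximated as whitespace-separated words, sentence boundaries come from a naive regex, and the paragraph-grouping requirement is omitted for brevity. Under those assumptions, `max_chunk_tokens=6` reproduces the example output above.

```python
import re
from typing import List

def split_text(text: str, max_chunk_tokens: int,
               overlap_tokens: int = 50) -> List[str]:
    # Naive sentence boundaries: split after ., !, or ? plus whitespace.
    sentences = [p for p in re.split(r'(?<=[.!?])\s+', text) if p.strip()]
    chunks: List[str] = []
    current: List[str] = []  # words in the chunk being built
    for sentence in sentences:
        words = sentence.split()
        if current and len(current) + len(words) > max_chunk_tokens:
            chunks.append(" ".join(current))
            current = current[-overlap_tokens:]  # carry the overlap forward
        current.extend(words)  # an over-long sentence still stays whole
    if current:
        chunks.append(" ".join(current))
    return chunks

text = "Hello world. How are you?\n\nThis is paragraph two. It has two sentences."
print(split_text(text, max_chunk_tokens=6, overlap_tokens=2))
```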

## Problem

Compare two strings after processing backspace characters, either using a stack or a two-pointer approach from the end.

## Likely LeetCode equivalent

LC 844 - Backspace String Compare (>80% confident)

## Tags

strings, stack, two_pointers
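A minimal stack sketch; the two-pointer variant walks both strings from the end for O(1) extra space:

```python
def backspace_compare(s: str, t: str) -> bool:
    def build(string: str) -> str:
        stack = []
        for ch in string:
            if ch == '#':
                if stack:
                    stack.pop()  # backspace deletes the previous char
            else:
                stack.append(ch)
        return "".join(stack)
    return build(s) == build(t)

print(backspace_compare("ab#c", "ad#c"))  # True: both become "ac"
```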

## Problem

Given an `m x n` grid of characters and a list of dictionary words, find all words that can be formed by traversing adjacent cells (up/down/left/right/diagonal). Each cell may be used at most once per word.

```python
def word_match(grid: List[List[str]], words: List[str]) -> List[str]: ...
```

**Example:**

```
grid = [
  ['o','a','a','n'],
  ['e','t','a','e'],
  ['i','h','k','r'],
  ['i','f','l','v']
]
words = ["eat", "oath", "rain", "fake"]
word_match(grid, words) -> ["eat", "oath"]
```

## Approach

Build a trie from the word list. DFS from each cell, traversing the trie simultaneously. Prune paths not in the trie. Mark cells visited during a path; unmark on backtrack. Time: O(m*n*8^L) with diagonal moves (4^L without), where L = max word length, but trie pruning cuts it drastically in practice.

## Follow-ups

1. Why is a trie better than checking each word separately with DFS?
2. How do you avoid reporting the same word twice if it appears multiple times in the grid?
3. How would you adapt the algorithm if diagonal movement is not allowed?
4. What is the memory cost of the trie and how do you optimize it for a 300K-word dictionary?
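A sketch of the trie-plus-backtracking approach described above; note the 8-directional neighbor set, since this variant allows diagonals (unlike LC 212):

```python
from typing import List

def word_match(grid: List[List[str]], words: List[str]) -> List[str]:
    # Build a trie: each node maps char -> child, '$' marks a complete word.
    trie: dict = {}
    for word in words:
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = word

    rows, cols = len(grid), len(grid[0])
    found: set = set()  # set dedupes words found along multiple paths
    DIRS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]  # up/down/left/right + diagonals

    def dfs(r: int, c: int, node: dict) -> None:
        ch = grid[r][c]
        child = node.get(ch)
        if child is None:
            return  # prune: no dictionary word continues along this path
        if '$' in child:
            found.add(child['$'])
        grid[r][c] = None  # mark visited for the current path
        for dr, dc in DIRS:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                dfs(nr, nc, child)
        grid[r][c] = ch  # unmark on backtrack

    for r in range(rows):
        for c in range(cols):
            dfs(r, c, trie)
    return sorted(found)
```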
