Question: 35. What is a condition variable? How does it help in thread synchronization? Answer: A condition variable is a synchronization primitive used for communication and coordination between threads. It enables one thread to block and wait while a second thread notifies it when a specific event or condition occurs.
In multi-threaded applications, condition variables are used alongside mutexes so that threads sleep while waiting and wake up only when necessary. This approach offers two main advantages:
It avoids repeatedly polling the condition (busy-waiting), which saves CPU time.
It makes the flow of control between threads cleaner and easier to reason about.
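A minimal sketch of this pattern using POSIX threads (the names `waiter` and `signal_ready` are illustrative): one thread sleeps inside pthread_cond_wait() until another thread establishes the condition and signals it.

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;

/* Consumer: blocks (without burning CPU) until `ready` becomes nonzero. */
static void *waiter(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)                        /* loop guards against spurious wakeups */
        pthread_cond_wait(&cond, &lock);  /* atomically releases lock and sleeps */
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Producer: establishes the condition and wakes the waiter. */
static void signal_ready(void) {
    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}
```

Note that the waiter re-checks the condition in a while loop: pthread_cond_wait() may return spuriously, so the predicate, not the wakeup, is authoritative.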
Question: 36. What are the differences between the sizeof operator and the strlen function?
Answer: sizeof is a compile-time operator that yields the size, in bytes, of its operand, which may be a type or an object; for an array it reports the storage of the entire array, including the terminating '\0' of a string and any unused trailing bytes. strlen() is a library function (declared in <string.h>) that runs at run time and counts the characters of a string up to, but not including, the terminating '\0'. sizeof applies to any type, while strlen() works only on NUL-terminated character strings.
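A short demonstration of the difference (the `demo` name is illustrative): for the same array, sizeof reports the full storage while strlen counts only the characters before the '\0'.

```c
#include <stdio.h>
#include <string.h>

static void demo(void) {
    char buf[10] = "abc";
    printf("sizeof buf  = %zu\n", sizeof buf);   /* 10: whole array, compile time */
    printf("strlen(buf) = %zu\n", strlen(buf));  /*  3: characters before '\0'   */
}
```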
Question: 37. When does the compiler not implicitly generate the address of the first element of an array?
Answer: The compiler does not implicitly generate the address of the first element of an array whenever an array name appears in the following contexts:
As an operand of the sizeof operator.
As an operand of the & (address-of) operator.
When it is a string literal used to initialize a character array.
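The three contexts can be shown in a few lines (a sketch; the names are illustrative). In each case the array is treated as a whole rather than decaying to a pointer to its first element.

```c
#include <stdio.h>

static void demo(void) {
    int a[5];
    /* 1. sizeof: operates on the whole array, not a decayed pointer. */
    printf("%zu\n", sizeof a);      /* 5 * sizeof(int), not sizeof(int *) */

    /* 2. &: yields a pointer to the array itself, of type int (*)[5]. */
    int (*pa)[5] = &a;
    (void)pa;

    /* 3. A string literal initializing a char array copies its characters
          into the array; no pointer conversion takes place. */
    char s[] = "hi";                /* sizeof s == 3, including the '\0' */
    printf("%zu\n", sizeof s);
}
```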
Question: 38. Is using exit() the same as using return?
Answer: No, using exit() is not the same as using return.
The exit() function is used to exit your program and return control to the operating system.
The return statement is used to return from a function and return control to the calling function.
If the return statement is used within main(), it returns control to the run-time startup code that called main(), which then terminates the program. In this specific case, return n; in main() behaves like exit(n), so the two are considered equivalent.
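The contrast in scope can be sketched as follows (the names `helper` and `die` are illustrative): return leaves only the current function, while exit() ends the whole program no matter how deep the call stack is.

```c
#include <stdlib.h>

/* return: leaves this function only; control goes back to the caller. */
static int helper(void) {
    return 42;
}

/* exit(): terminates the entire program from anywhere, after running
   atexit() handlers and flushing stdio streams. */
static void die(int status) {
    exit(status);
}
```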
Question: 39. What is an lvalue?
Answer: An lvalue (left value) is an expression that designates an object in memory, i.e., an expression to which a value can be assigned. Only an lvalue may appear on the left side of an assignment statement.
In contrast, an rvalue is a value that can appear only on the right side of an assignment. Every assignment statement needs an lvalue on the left and an rvalue on the right. Note that an lvalue may also appear on the right side (its stored value is then read), but to be assigned to, it must be a modifiable lvalue referring to storage in memory; a constant is not an lvalue.
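A brief illustration (the `demo` name is illustrative): variables and dereferenced pointers are lvalues; constants and the results of arithmetic are not.

```c
static int demo(void) {
    int x = 0;
    int *p = &x;
    x = 5;            /* x is a modifiable lvalue               */
    *p = 6;           /* a dereferenced pointer is an lvalue too */
    /* 5 = x;         error: a constant is not an lvalue         */
    /* (x + 1) = 7;   error: the result of + is an rvalue        */
    return x;
}
```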
Question: 40. What is the difference between goto, longjmp(), and setjmp()? Answer:
A goto performs a local jump: it transfers control to a label within the same function. setjmp() and longjmp() together perform a non-local jump: setjmp() saves the current execution context in a jmp_buf, and a later call to longjmp() restores that context, transferring control back across any number of function calls to the point of the setjmp() call.
The source highly recommends avoiding the use of goto, longjmp(), and setjmp(), as they obscure the flow of control and are often an indication of poor programming practice.
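The non-local jump can be sketched in a few lines (the names `fail` and `run` are illustrative): longjmp() unwinds past the fail() call and makes setjmp() return a second time with the value 1.

```c
#include <setjmp.h>

static jmp_buf env;

static void fail(void) {
    longjmp(env, 1);           /* non-local jump: unwinds back to setjmp */
}

/* Returns the value delivered by longjmp (setjmp returns 0 on the
   direct call, and the longjmp value when control comes back). */
static int run(void) {
    int r = setjmp(env);
    if (r == 0)
        fail();                /* never returns normally */
    return r;                  /* reached via longjmp, with r == 1 */
}
```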
Question: 41. What is XOR linked list?
Answer: An XOR linked list is a memory-efficient doubly linked list. An ordinary doubly linked list requires space for two address fields in every node (for the previous and next nodes). The memory-efficient version, the XOR linked list, needs only one address field per node by utilizing the bitwise XOR operation.
In an XOR linked list, instead of storing the actual memory addresses, each node stores the XOR of the addresses of the previous and next nodes.
XOR List Representation Example (npx = XOR of next and previous addresses):
Node A: npx = 0 XOR add(B)
Node B: npx = add(A) XOR add(C)
Node C: npx = add(B) XOR add(D)
Node D: npx = add(C) XOR 0
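The representation above can be sketched in C (a minimal sketch; `push` inserts at the front, and the node/field names are illustrative). Traversal recovers each next address as prev XOR npx.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
    int data;
    struct node *npx;   /* XOR of previous and next node addresses */
};

static struct node *XOR(struct node *a, struct node *b) {
    return (struct node *)((uintptr_t)a ^ (uintptr_t)b);
}

/* Insert at the front of the list. */
static void push(struct node **head, int data) {
    struct node *n = malloc(sizeof *n);
    n->data = data;
    n->npx = XOR(NULL, *head);            /* prev is NULL, next is old head */
    if (*head)                            /* old head's prev was NULL, so its
                                             npx was just `next`; XOR in `n` */
        (*head)->npx = XOR(n, (*head)->npx);
    *head = n;
}

/* Traverse forward: next = prev XOR npx. */
static void print_list(struct node *head) {
    struct node *prev = NULL, *cur = head;
    while (cur) {
        printf("%d ", cur->data);
        struct node *next = XOR(prev, cur->npx);
        prev = cur;
        cur = next;
    }
    putchar('\n');
}
```

Note that this trick relies on converting pointers to integers, which is implementation-defined behavior; it is an interview curiosity more than a production technique.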
Question: 42. What is ‘trie’ in data structure?
Answer: A Trie is an efficient information retrieval data structure. Using a trie, search complexities can be brought down to an optimal limit, which is proportional to the key length.
If keys are stored in a well-balanced Binary Search Tree (BST), the search time is proportional to $M \times \log N$ (where $M$ is the maximum string length and $N$ is the number of keys).
Using a trie, the key can be searched in $O(M)$ time.
The trade-off is that there is a penalty on the storage requirements.
Each node in a trie consists of multiple branches, where each branch represents a possible character of the keys. The last node of every key must be marked as a leaf node, often using a node field value to distinguish it.
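A minimal trie over the lowercase letters 'a'..'z' can be sketched as follows (function names are illustrative); search walks one branch per character of the key, giving the $O(M)$ bound, and `is_leaf` marks the last node of a key.

```c
#include <stdlib.h>

#define ALPHABET 26

struct trie {
    struct trie *child[ALPHABET];
    int is_leaf;                 /* marks the last node of a key */
};

static struct trie *trie_new(void) {
    return calloc(1, sizeof(struct trie));
}

static void trie_insert(struct trie *root, const char *key) {
    for (; *key; key++) {
        int i = *key - 'a';
        if (!root->child[i])
            root->child[i] = trie_new();
        root = root->child[i];
    }
    root->is_leaf = 1;
}

/* O(M) search, where M = strlen(key). */
static int trie_search(const struct trie *root, const char *key) {
    for (; *key; key++) {
        int i = *key - 'a';
        if (!root->child[i])
            return 0;
        root = root->child[i];
    }
    return root->is_leaf;
}
```

The 26-pointer child array per node is the storage penalty mentioned above; compressed variants trade some of that space back for extra work per node.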
Question: 43. What do you understand by splay tree?
Answer: A splay tree is a self-balancing Binary Search Tree (BST). The core concept of a splay tree is to bring the recently accessed item to the root of the tree. If the item is accessed again shortly after, it becomes accessible in $O(1)$ time. This concept utilizes the locality of reference principle (i.e., in typical applications, 80% of accesses are to 20% of the items).
While any single operation can take $\Theta(n)$ time in the worst case, all splay tree operations run in $O(\log n)$ amortized time ($n$ being the number of entries).
Question: 44. What is a Treap?
Answer: A Treap is a randomized Binary Search Tree. Its height is not guaranteed to be $O(\log n)$ in the worst case; instead, the Treap uses Randomization together with the Binary Heap property to keep the tree balanced with high probability. The expected time complexity for search, insert, and delete operations is $O(\log n)$.
Each node in a Treap maintains two values:
Key: Follows standard BST ordering (left child is smaller, right child is greater).
Priority: A randomly assigned value that follows the Max-Heap property.
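Treap insertion can be sketched as an ordinary BST insert followed by rotations that restore the max-heap property on priorities (a sketch; names are illustrative, and error handling is omitted).

```c
#include <stdlib.h>

struct treap {
    int key, priority;
    struct treap *left, *right;
};

static struct treap *rotate_right(struct treap *y) {
    struct treap *x = y->left;
    y->left = x->right;
    x->right = y;
    return x;
}

static struct treap *rotate_left(struct treap *x) {
    struct treap *y = x->right;
    x->right = y->left;
    y->left = x;
    return y;
}

static struct treap *treap_insert(struct treap *root, int key) {
    if (!root) {
        struct treap *n = calloc(1, sizeof *n);
        n->key = key;
        n->priority = rand();           /* random priority drives balancing */
        return n;
    }
    if (key < root->key) {
        root->left = treap_insert(root->left, key);
        if (root->left->priority > root->priority)   /* fix max-heap property */
            root = rotate_right(root);
    } else {
        root->right = treap_insert(root->right, key);
        if (root->right->priority > root->priority)
            root = rotate_left(root);
    }
    return root;
}
```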
Question: 45. How to implement the LRU caching scheme? What data structures should be used?
Answer: The Least Recently Used (LRU) caching scheme is implemented by removing the least recently used frame when the cache is full and a new page (not already in the cache) is referenced.
Two data structures are used to implement an LRU Cache:
Queue: This queue is implemented using a doubly linked list.
The maximum size of the queue equals the total number of frames available (cache size).
The most recently used pages are kept near the front end.
The least recently used pages are kept near the rear end.
A Hash (Hash Map): This uses the page number as the key and the address of the corresponding queue node as the value.
Implementation Logic:
If a page is referenced and is already in memory (found in the hash), its corresponding node is detached from the list and moved to the front of the queue.
If the page is not in memory (not found in the hash), a new node is brought into memory. A new node is added to the front of the queue, and the corresponding node address is updated in the hash.
If the queue is full (all frames are full), the node from the rear of the queue (the LRU element) is removed to make space for the new node, which is added to the front.
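The logic above can be sketched in C. This is a simplified sketch: for brevity the "hash" is a direct-index array over page numbers 0..MAX_PAGES-1 (a real implementation would use a proper hash map), and the names `reference`, `detach`, and `push_front` are illustrative.

```c
#include <stdlib.h>

#define CAPACITY  3
#define MAX_PAGES 100

struct qnode {
    int page;
    struct qnode *prev, *next;
};

static struct qnode *front, *rear;          /* doubly linked list (the queue) */
static struct qnode *hash[MAX_PAGES];       /* page number -> node address    */
static int count;

static void detach(struct qnode *n) {
    if (n->prev) n->prev->next = n->next; else front = n->next;
    if (n->next) n->next->prev = n->prev; else rear = n->prev;
}

static void push_front(struct qnode *n) {
    n->prev = NULL;
    n->next = front;
    if (front) front->prev = n; else rear = n;
    front = n;
}

/* Reference a page; returns 1 on a cache hit, 0 on a miss. */
static int reference(int page) {
    struct qnode *n = hash[page];
    if (n) {                        /* hit: move node to the front */
        detach(n);
        push_front(n);
        return 1;
    }
    if (count == CAPACITY) {        /* full: evict the LRU page at the rear */
        hash[rear->page] = NULL;
        struct qnode *old = rear;
        detach(old);
        free(old);
        count--;
    }
    n = malloc(sizeof *n);          /* miss: bring the page in at the front */
    n->page = page;
    push_front(n);
    hash[page] = n;
    count++;
    return 0;
}
```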
Question: 46. Suppose, there are two linked lists: L1 and L2 (of the same length) that intersect at a particular node N1, which is a common endpoint to all other nodes. What are the possibilities to find N1?
Answer: A linear solution is possible without complex calculations. The method involves balancing the traversal length:
Initialize two pointers, P1 pointing to the first node of L1, and P2 pointing to the first node of L2.
Traverse both lists simultaneously.
If P1 reaches L1’s last node, point P1 to the first node of L2 and continue traversing.
If P2 reaches L2’s last node, point P2 to the first node of L1 and continue traversing. By redirecting the pointer of the shorter list to the head of the longer list, the difference in length is balanced, ensuring both pointers cover the same total distance. They will eventually meet at the Intersection node (N1).
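The steps above can be sketched as a two-pointer function (names are illustrative). This version also handles lists of different lengths: each pointer traverses len(L1) + len(L2) nodes in total, so both arrive at the intersection together.

```c
#include <stddef.h>

struct lnode {
    int data;
    struct lnode *next;
};

/* Returns the intersection node, or NULL if the lists never meet. */
static struct lnode *find_intersection(struct lnode *l1, struct lnode *l2) {
    struct lnode *p1 = l1, *p2 = l2;
    while (p1 != p2) {
        p1 = p1 ? p1->next : l2;   /* at the end, wrap to the other head */
        p2 = p2 ? p2->next : l1;
    }
    return p1;
}
```

If there is no intersection, both pointers become NULL on the same step and the function returns NULL.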
Question: 47. Given two keys K1 & K2, write an algorithm to print all the elements between them with K1 <= K2 in a BST.
Answer: A linear solution is possible without using any extra space by performing an inorder traversal.
Algorithm Steps:
Perform an inorder traversal.
Once K1 is found, print it and continue the traversal.
Print all traversed elements until K2 is reached.
Pseudocode Explanation: The recursive function printRangeInBST(node, K1, K2) checks conditions before traversing:
Base Case: If the node is NULL, the function returns.
Left Subtree Condition: If the current node's value is greater than $K1$ (node.value > K1), the left subtree might contain values within the range, so it is explored.
Printing Current Node: If the current node's value is within the range $[K1, K2]$ (K1 <= node.value <= K2), the value is printed.
Right Subtree Condition: If the current node's value is less than $K2$ (node.value < K2), the right subtree might have more values within the range, so it is explored.
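The pseudocode above can be written out in C (a sketch; the node layout is illustrative, and the function additionally returns the number of keys printed so the result is easy to check).

```c
#include <stdio.h>

struct bstnode {
    int value;
    struct bstnode *left, *right;
};

/* Prints all keys in [k1, k2] in sorted order; returns how many it printed. */
static int printRangeInBST(const struct bstnode *node, int k1, int k2) {
    if (!node)
        return 0;                         /* base case */
    int n = 0;
    if (node->value > k1)                 /* left subtree may hold in-range keys */
        n += printRangeInBST(node->left, k1, k2);
    if (k1 <= node->value && node->value <= k2) {
        printf("%d ", node->value);       /* current node is in range */
        n++;
    }
    if (node->value < k2)                 /* right subtree may hold in-range keys */
        n += printRangeInBST(node->right, k1, k2);
    return n;
}
```

Because the range checks prune whole subtrees, the traversal visits only the in-range keys plus the nodes on their boundary paths.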
Question: 48. What is the difference between a mutex and a spinlock?
Answer: Both mutexes and spinlocks are used for resource access contention control during multi-threading. The difference lies in what a thread does when the lock is already held: a thread that fails to acquire a mutex is put to sleep and is woken when the lock is released, at the cost of a context switch; a thread that fails to acquire a spinlock busy-waits ("spins") in a loop, burning CPU cycles but avoiding the context-switch overhead. Spinlocks therefore suit very short critical sections on multiprocessor systems, while mutexes are preferable when the wait may be long.
Question: 49. How many stacks are required to implement a Queue?
Answer: Two stacks are required to implement a Queue.
For Enqueue: Elements are pushed onto the first stack (S1).
For Dequeue:
If the second stack (S2) is empty, all elements are popped from S1 and pushed onto S2.
The element to be dequeued is the last element popped from S1 (which is now the top of S2).
If S2 is not empty, the top element is popped directly from S2.
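The scheme above can be sketched with two fixed-size array stacks (a sketch; sizes and names are illustrative, and overflow/underflow checks are omitted).

```c
#define MAX 100

static int s1[MAX], s2[MAX];
static int top1 = -1, top2 = -1;

static void enqueue(int x) {
    s1[++top1] = x;               /* push onto S1 */
}

static int dequeue(void) {
    if (top2 < 0)                 /* S2 empty: move everything from S1,  */
        while (top1 >= 0)         /* reversing the order in the process  */
            s2[++top2] = s1[top1--];
    return s2[top2--];            /* top of S2 is the oldest element */
}
```

Each element is pushed and popped at most twice, so a sequence of n operations costs O(n) overall, i.e., amortized O(1) per operation.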
…till next post, bye-bye & take care.