During a design review, the team is struggling to decide whether security is more important than performance. The moderator introduces a Utility Tree. What is the purpose of using a Utility Tree in an ATAM-style review?
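For reference, a utility tree refines the root node "Utility" into quality attributes, then into concrete scenarios rated for business importance and architectural difficulty. A hypothetical fragment (the scenarios and (importance, difficulty) ratings are invented for illustration):

```
Utility
├── Performance
│   └── Latency: "Process a trade order in < 50 ms"            (H, M)
└── Security
    └── Confidentiality: "Stolen credentials detected in < 1h" (H, H)
```

Rating concrete scenarios rather than whole attributes is what lets the team rank "security vs. performance" trade-offs instead of debating them in the abstract.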
Exokernel/SPIN/L3 Based on the design principles outlined in the SPIN, Exokernel, and L3 papers, imagine that you are tasked with implementing a packet multiplexer. You want it to be fast, since it sits on the critical path (it examines every packet). a) [3 points] Explain how you would implement this packet filter in (i) SPIN, (ii) Exokernel, and (iii) a microkernel.
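Whatever kernel structure ends up hosting it, the filter itself is a small predicate over packet bytes. A hypothetical example in C, assuming a bare IPv4 packet with no options and no link-layer header (the offsets below follow the standard IPv4 and UDP header layouts):

```c
#include <stdint.h>
#include <stddef.h>

/* Does this raw IPv4 packet carry UDP traffic to a given destination
 * port? Assumes no link-layer header and no IPv4 options (IHL == 5). */
int match_udp_port(const uint8_t *pkt, size_t len, uint16_t port) {
    if (len < 28)                         /* 20B IPv4 + 8B UDP header */
        return 0;
    if ((pkt[0] >> 4) != 4 || (pkt[0] & 0x0F) != 5)
        return 0;                         /* not IPv4, or has options */
    if (pkt[9] != 17)                     /* protocol field: 17 = UDP */
        return 0;
    uint16_t dport = (uint16_t)((pkt[22] << 8) | pkt[23]);
    return dport == port;
}
```

The interesting part of the question is not this predicate but where it runs and how it is installed safely on the critical path in each of the three systems.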
Potpourri Answer the following question on scheduling policy. a) [2 points] Identify a pro and a con for the fixed-processor scheduling policy.
SPIN A friend recalls that there were two major strikes against SPIN in the way it handles accessing the endpoints of object interfaces (Create(), Combine(), Resolve()). He says this makes SPIN both slow and unsafe, the latter due to unprotected memory access between extensions running on top of SPIN. a) [4 points] With succinct bullets, explain how SPIN creates protection domains, and use this to explain to your friend whether he is correct with regard to 1) the performance of the SPIN OS and 2) the safety of the isolation between protection domains.
M.E. Lock The context for this question is the same as the previous question.

Given:
- 32-core cache-coherent bus-based multiprocessor
- Invalidation-based cache coherence protocol
- Architecture supports atomic "Test-and-set (T&S)", atomic "Fetch-and-add (F&inc)", and atomic "Fetch-and-store (F&St)" operations. All these operations bypass the cache.
- An application has 32 threads, one on each core. ALL threads are contending for the SAME lock (L).
- Each lock acquisition results in 100 iterations of the spin loop for each thread.

The questions are with respect to the following spin-lock algorithms (as described in the MCS paper, and restated below for convenience):
- Spin on Test-and-Set: The algorithm performs a globally atomic T&S on the lock variable "L".
- Spin on Read: On failure to acquire the lock using T&S, the algorithm spins on the cached copy of "L" until notified through the cache coherence protocol that the current user has released the lock.
- Ticket Lock: The algorithm performs "fetch_and_add" on a variable "next_ticket" to get a ticket "my_ticket", then spins until "my_ticket" equals "now_serving". Upon lock release, "now_serving" is incremented to let the spinning threads know that the lock is now available.
- MCS Lock: The algorithm allocates a new queue node, links it to the head node of the lock queue using "fetch-and-store", sets the "next" pointer of the previous lock requestor to point to the new queue node, and spins on a "got_it" variable inside the new queue node if the lock is not immediately available (i.e., the lock queue is non-empty). Upon lock release, using the "next" pointer, the next user of the lock is notified that they have the lock.

d) [2 points] This pertains to the "MCS lock" algorithm. Answer True/False with justification. No credit without justification. At lock release, if the "next" pointer is "nil", it is safe for the MCS lock algorithm to assume that there are no other threads waiting for this lock.
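The MCS description above can be sketched with C11 atomics. This is a sketch, not the paper's exact pseudocode; note in particular that the release path below uses compare-and-swap, which is worth comparing against the primitives the question's architecture actually provides:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct mcs_node {
    struct mcs_node *_Atomic next;
    atomic_bool got_it;          /* the "got_it" flag from the description */
} mcs_node;

typedef struct {
    mcs_node *_Atomic tail;      /* last requestor in the lock queue */
} mcs_lock;

void mcs_acquire(mcs_lock *l, mcs_node *me) {
    atomic_store(&me->next, NULL);
    atomic_store(&me->got_it, false);
    /* fetch-and-store: swap ourselves in as the new tail */
    mcs_node *prev = atomic_exchange(&l->tail, me);
    if (prev != NULL) {
        /* queue was non-empty: link in and spin on our own node */
        atomic_store(&prev->next, me);
        while (!atomic_load(&me->got_it))
            ;
    }
}

void mcs_release(mcs_lock *l, mcs_node *me) {
    mcs_node *succ = atomic_load(&me->next);
    if (succ == NULL) {
        /* No visible successor -- but one may be mid-enqueue. Only if
         * we are still the tail is the queue truly empty. */
        mcs_node *expected = me;
        if (atomic_compare_exchange_strong(&l->tail, &expected, NULL))
            return;
        /* An enqueuer is in flight; wait for it to set our next. */
        while ((succ = atomic_load(&me->next)) == NULL)
            ;
    }
    atomic_store(&succ->got_it, true);   /* hand the lock over */
}
```

Each thread spins only on its own node, so waiting generates no bus traffic on a cache-coherent machine.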
M.E. Lock The context for this question is the same as the previous question.

You have designed a bus-based custom non-cache-coherent shared-memory DSP (Digital Signal Processor). Each CPU in the DSP has a private cache. The hardware provides the following primitives for the interaction between the private cache of a CPU and the shared memory:
- fetch(addr): Pulls the latest value from main memory into the cache
- flush(addr): Pushes the value at addr in the cache to main memory; it does not evict it from the cache
- hold(addr): Locks the memory bus for addr; no other core can fetch or flush this address until released
- unhold(addr): Releases the lock on addr

You got this generic implementation of a ticket lock algorithm and tried it on your architecture. It did not work.

struct ticket_lock {
    int next_ticket;   // The next ticket number to give out
    int now_serving;   // The ticket number currently allowed to enter
};

void lock(struct ticket_lock *l) {
    // Acquire ticket
    int my_ticket = l->next_ticket++;
    // Wait for turn
    while (l->now_serving != my_ticket) {
        // Spin
    }
}

void unlock(struct ticket_lock *l) {
    l->now_serving++;  // Release
}

b) [1 point] Identify one potential flaw in the unlock function when implemented on your architecture.
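For illustration, here is one possible adaptation of the same ticket lock to the question's fetch/flush/hold/unhold primitives. This is a sketch, not a model answer: the four primitives are hypothetical hardware operations, stubbed here as no-ops so the code compiles on an ordinary coherent machine, and the comments mark where explicit cache management would matter.

```c
/* Stubs for the question's hypothetical primitives (no-ops here). */
static void fetch(void *addr)  { (void)addr; }  /* pull latest value into cache */
static void flush(void *addr)  { (void)addr; }  /* push cached value to memory  */
static void hold(void *addr)   { (void)addr; }  /* lock the memory bus for addr */
static void unhold(void *addr) { (void)addr; }  /* release the bus lock on addr */

struct ticket_lock {
    int next_ticket;   /* next ticket number to give out       */
    int now_serving;   /* ticket currently allowed to enter    */
};

void lock(struct ticket_lock *l) {
    /* With no cache coherence and no fetch-and-add, the ticket grab
     * must be made atomic by hand with hold/unhold. */
    hold(&l->next_ticket);
    fetch(&l->next_ticket);            /* read the up-to-date value   */
    int my_ticket = l->next_ticket++;
    flush(&l->next_ticket);            /* make the increment visible  */
    unhold(&l->next_ticket);

    /* The spin must re-fetch: a stale cached copy never changes. */
    do {
        fetch(&l->now_serving);
    } while (l->now_serving != my_ticket);
}

void unlock(struct ticket_lock *l) {
    fetch(&l->now_serving);            /* start from the current value */
    l->now_serving++;
    flush(&l->now_serving);            /* waiters cannot see it otherwise */
}
```

Only the lock holder ever writes now_serving, which is why the unlock path needs no hold/unhold bracket in this sketch.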
M.E. Lock The context for this question is the same as the previous question.

Given:
- 32-core cache-coherent bus-based multiprocessor
- Invalidation-based cache coherence protocol
- Architecture supports atomic "Test-and-set (T&S)", atomic "Fetch-and-add (F&inc)", and atomic "Fetch-and-store (F&St)" operations. All these operations bypass the cache.
- An application has 32 threads, one on each core. ALL threads are contending for the SAME lock (L).
- Each lock acquisition results in 100 iterations of the spin loop for each thread.

The questions are with respect to the following spin-lock algorithms (as described in the MCS paper, and restated below for convenience):
- Spin on Test-and-Set: The algorithm performs a globally atomic T&S on the lock variable "L".
- Spin on Read: On failure to acquire the lock using T&S, the algorithm spins on the cached copy of "L" until notified through the cache coherence protocol that the current user has released the lock.
- Ticket Lock: The algorithm performs "fetch_and_add" on a variable "next_ticket" to get a ticket "my_ticket", then spins until "my_ticket" equals "now_serving". Upon lock release, "now_serving" is incremented to let the spinning threads know that the lock is now available.
- MCS Lock: The algorithm allocates a new queue node, links it to the head node of the lock queue using "fetch-and-store", sets the "next" pointer of the previous lock requestor to point to the new queue node, and spins on a "got_it" variable inside the new queue node if the lock is not immediately available (i.e., the lock queue is non-empty). Upon lock release, using the "next" pointer, the next user of the lock is notified that they have the lock.

c) [2 points] This pertains to the "Ticket Lock" algorithm. One thread is in the critical section governed by the lock. All the other threads are spinning, waiting their turns. How many cache reload operations happen upon lock release? No credit without justification.
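For concreteness, "Spin on Test-and-Set" and "Spin on Read" differ only in where the waiting happens. A minimal C11 sketch of the spin-on-read variant, assuming the usual test-and-test-and-set formulation:

```c
#include <stdatomic.h>

/* "Spin on Read" (test-and-test-and-set): after a failed T&S, spin on
 * the (cached) value until coherence traffic signals a release. */
typedef atomic_int ttas_lock;   /* 0 = free, 1 = held */

void ttas_acquire(ttas_lock *L) {
    for (;;) {
        /* Atomic T&S bypasses the cache and generates bus traffic... */
        if (atomic_exchange(L, 1) == 0)
            return;
        /* ...so wait with plain cached reads until L looks free. */
        while (atomic_load_explicit(L, memory_order_relaxed) != 0)
            ;
    }
}

void ttas_release(ttas_lock *L) {
    atomic_store(L, 0);   /* invalidates every spinner's cached copy */
}
```

On an invalidation-based protocol, the release write is exactly what triggers the cache reloads the question asks you to count, for the ticket lock as well as for this variant.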
Full Virtualization Suppose you are a cloud provider using full virtualization to host multiple tenants on the same physical hardware. a) [2 points] A malicious tenant gains root privileges within their own virtual machine and attempts to access the memory of another VM on the same host. Describe one mechanism by which full virtualization prevents this unauthorized memory access.
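The mechanism at issue is the extra translation stage the hypervisor interposes (shadow page tables, or hardware-assisted nested paging). A toy model in C, with invented page counts and mappings, of how a per-VM guest-physical-to-host-physical map confines each tenant:

```c
#define NPAGES     8
#define PAGE_SHIFT 12

/* Toy model: the hypervisor keeps a private guest-physical ->
 * host-physical frame map per VM; -1 marks "no mapping". */
typedef struct {
    int gpa_to_hpa[NPAGES];
} vm_map;

/* Translate a guest-physical address for one VM. Even a root user
 * inside the guest can only name frames present in its own map, so
 * another VM's host frames are simply unreachable. */
long translate(const vm_map *vm, long gpa) {
    long gfn = gpa >> PAGE_SHIFT;
    if (gfn < 0 || gfn >= NPAGES || vm->gpa_to_hpa[gfn] < 0)
        return -1;   /* fault: the hypervisor traps, no access granted */
    return ((long)vm->gpa_to_hpa[gfn] << PAGE_SHIFT) | (gpa & 0xFFF);
}
```

In a real VMM this second stage is walked by hardware (e.g., nested page tables) or folded into shadow page tables that the hypervisor alone may edit; the guest's root privileges only cover the first stage.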
What type of microstructure does this image represent?