C Programming Experience
0/22 Questions Answered
Midterm Exam Short Answer & Multiple Choice
Q1 C Programming Experience
9 Points
To take this class, you must have previously done some object-oriented
programming in Java, which has a higher level of abstraction than C.
Name a total of three distinct advantages and/or disadvantages of
designing and implementing data structures in C compared to your
previous experience programming in Java or some other higher-level
programming language.
When we ask for "distinct" comparisons or "distinct" pros and cons, we
mean you should not effectively repeat the same point worded
differently. For example, if I were comparing C and C++ and listed out
these comparisons:
A pro of using C++ is that it has bool defined as a type by default.
A con of using C is that it doesn't have bool defined as a type by
default.
These would NOT be two distinct comparisons.
Do not use this bool example for your answer to this question.
Enter your answer here
Save Answer
Q2 System Calls
10 Points
In HW1 BufferedFileReader, the reason we maintained a buffer was so that we could minimize calls to POSIX read(). What characteristic of calling POSIX read() made it so that we wanted to minimize the amount of times we called that function? Be specific.
Enter your answer here
Save Answer
Q3 Threads vs Processes 10 Points
When we went over threads, we discussed the idea of using threads in a server implementation: whenever a client connected, we created a thread to handle that client. Say that we instead used a multi-process server, where we forked processes instead of creating threads. Provide one reason why this implementation (using processes) might be preferred in some cases, and at least one reason why using threads might be preferred in some cases. You must adequately justify your answer, and your answers must be "distinct" (see question 1 for an explanation of "distinct"). For example, you cannot just say "processes have their own independent address spaces"; you must also say why this would be good and/or bad for the server implementation.
Q3.1
5 Points
One distinct reason why using processes might be preferred:
Enter your answer here
Save Answer
Q3.2
5 Points
One distinct reason why using threads might be preferred:
Enter your answer here
Save Answer
Q4 Synchronization 8 Points
Consider the following C++ program that compiles without error.
#include <pthread.h>
#include <iostream>
#include <cstdlib>
int x = 4;  // shared global

void* thread_code(void* arg) {
  int iters = 2;
  while (iters > 0) {
    x--;
    iters--;
  }
  return nullptr;
}

int main() {
  pthread_t thd0, thd1;
  pthread_create(&thd0, nullptr, thread_code, nullptr);
  pthread_create(&thd1, nullptr, thread_code, nullptr);
  pthread_join(thd0, nullptr);
  pthread_join(thd1, nullptr);
  std::cout << x << std::endl;
  return EXIT_SUCCESS;
}
Q4.1
4 Points
What is the minimum possible value this program could print?
Save Answer
Q4.2
4 Points
What is the maximum possible value this program could print?
Save Answer
Q5 Multi-threading 18 Points
Assume that we have a global array of doubles that is very big:
const size_t data_length = 1000000; // 1,000,000
double data[data_length];
You can assume that over the course of the program, we populate the array with data.
[Array diagram: indexes 0 through 4 shown with their values, followed by "A value not shown here" for the remaining entries.]
At some point in the program, we want to find the inverse square root
of all data in the array.
In other words, for each index i in the array, we want to do
data[i] = 1.0 / sqrt(data[i]);
This operation is taking a long time to compute, so we explore four
different ways to implement this behaviour.
Assume for these examples we have multiple processors.
1. We don’t create any threads, and instead just have a single for loop that loops over all possible indexes, calculating the inverse square root for each index.
2. We create 10 threads and have every thread handle 1/10 of the data in the loop. The first thread calculates for indexes 0 – 99,999, the second thread calculates for indexes 100,000 – 199,999, etc.
3. We do the same as method #2, but we have a mutex on the overall array. Each thread needs to acquire the mutex of the overall array before it can do any calculation, and releases the mutex when the thread is completely done.
4. We do the same as method #2, but instead we have 1 mutex for every index (1,000,000 mutexes). When a thread calculates the data for a specific index, it acquires the mutex for that index and then releases the mutex after it finishes that index.
Q5.1 Thread Safety
6 Points
For each of the four implementations described above, list whether or
not it "works".
By "work", we mean that the program doesn't have any data races and
does not have any deadlocks.
Briefly justify your answer for each implementation, a sentence or two
should be sufficient.
Enter your answer here
Save Answer
Q5.2 Thread Performance 12 Points
Of the four implementations, rank them from fastest to slowest. If an implementation has a data race or possibly deadlocks, list it separately.
For example, you can write something like:
1. method 4 // fastest
2. method 2
3. method 3 // slowest
Deadlocks/data race:
- method 1
Assume that there are multiple processors, and you can ignore time spent creating & joining threads and context-switching. You may not ignore time spent acquiring locks or waiting on locks.
Enter your answer here
Justify your answer.
Enter your answer here
Save Answer
Q6 Scheduling 17 Points
Assume we have a Round Robin scheduling algorithm with a quantum of 2 units of time on a single-CPU machine, and are scheduling the
following processes:
Process A arrives at time 0. After running for 3 units of time, it blocks
for I/O for 3 units, then runs for 1 unit before completing.
Process B arrives at time 1. After running for 1 unit of time, it blocks
for I/O for 7 units, then runs for 3 units before completing.
Process C arrives at time 6. It is completely CPU-bound, i.e. it does
not block for I/O, and runs for 4 units.
In this algorithm, if there are multiple processes to add to the ready queue at the same time, assume they are put into the queue in this order:
any arriving processes are put into the queue first
any processes that just finished blocking for I/O are put into the
queue next
any process that just finished its time slice is put into the queue last
Assume that context switching is instantaneous and that multiple processes can block for I/O simultaneously, i.e. a process that is blocking does not prohibit other processes from blocking.
When we say a process "blocks for I/O for N units of time", that means that the specified process cannot be scheduled to run until N time units have passed since it started to block for I/O.
Note: we recommend drawing out a diagram like what was done in the midterm review in lecture
Q6.1
3 Points
What is the finishing time for process A?
Please answer with a number only:
Enter your answer here
Save Answer
Q6.2
3 Points
What is the finishing time for process B?
Please answer with a number only:
Enter your answer here
Save Answer
Q6.3
3 Points
What is the finishing time for process C?
Please answer with a number only:
Enter your answer here
Save Answer
Q6.4
8 Points
In this scenario, there is at least one time unit that is spent not executing any process. A scheduler would typically not want to have such a case happen since it means it may take longer for all jobs to be completed.
Is there any way the processes could be executed so that there are fewer time units where the CPU is not executing any process (and thus all processes finish more quickly overall)?
You do not have to follow a specific algorithm for this, and you can assume the scheduler has perfect information about the processes, but each process still behaves the same as described above (same arrival time, blocking time, etc.)
Please briefly justify your answer. You do not need to write a proof for this question; a few (2-3) sentences can be enough.
Enter your answer here
Save Answer
Q7 Virtual Memory 14 Points
Consider a system with the following configuration:
64 bit address space (one address is 64 bits)
32 GB physical memory
16 bit (2 bytes) addressability
4 KB page size
Note that we have an addressability of 16 bits (2 bytes). In the VM lecture, we usually worked with 8-bit addressability, i.e. one address corresponded to one byte in memory.
As a reminder: 1 KB = 2^10 bytes and 1 GB = 2^30 bytes (2^10 means 2 to the power of 10, i.e. 1024).
Q7.1
3 Points
How many entries are there in each process’ page table? Express your answer as a power of 2 (for example, write 2 to the power of 3 as 2^3).
Enter your answer here
Save Answer
Q7.2
3 Points
How many page frames are there in physical memory? Express your answer as a power of 2 (for example, write 2 to the power of 3 as 2^3).
Enter your answer here
Save Answer
Q7.3
3 Points
How many bits are needed to represent the page offset of an address? (i.e. how many bits does it take to represent the number of possible addresses that correspond to a single page?)
Please answer with a number only:
Enter your answer here
Save Answer
Q7.4
2 Points
How many bits are needed to represent the virtual page number of an address?
Please answer with a number only:
Enter your answer here
Save Answer
Q7.5
2 Points
How many bits are needed to represent the physical page number of an address?
Please answer with a number only:
Enter your answer here
Save Answer
Q7.6
1 Point
How many bits does it take to represent a physical address?
Please answer with a number only:
Enter your answer here
Save Answer
Q8 Caching 10 Points
We have a memory cache that can hold 4 blocks of memory. Assume that the cache was initially empty (that there were no blocks currently in it) and then the following sequence of blocks was accessed. Each block is represented by a letter.
(Note that you can think of a memory cache as acting similarly to what physical memory does when storing page frames. One of the biggest things to consider with cache management, as with physical memory management, is the replacement policy!)
Access order (first to last): A, B, C, E, F, A, C, C, D, F, A, D, E, B, F, C
Q8.1 LRU Replacement 5 Points
If the system used LRU (Least Recently Used) as a replacement policy, how many evictions would have occurred over the course of the block accesses shown above? (In other words, how many times would a block have to be evicted from the cache to make room for some other block?)
Please answer with a number only:
Enter your answer here
Save Answer
Q8.2 MRU Replacement 5 Points
Let’s say the cache used a different policy, MRU (Most Recently Used). How many evictions would occur over the course of the block accesses above?
MRU can be thought of as the "opposite" of LRU: the block that was most recently used is evicted from the cache if another block needs to be brought in. (e.g. if block A was most recently accessed and we then accessed a block not in the cache, block A would be evicted to make space for the new block.)
Please answer with a number only:
Enter your answer here
Save Answer
Q9 Time 1 Point
Travis did his best to make the exam take a reasonable amount of time, but he may have been a bit off in his estimation. Approximately how much time did you actively spend working on the exam (both the short answer and the coding parts)? We would greatly appreciate it if you let us know! All non-empty answers will get full points for this question.
Answer in hours and/or minutes
Enter your answer here
Save Answer
Q10 Bonus 3 Points
Create a piece of art (e.g. drawing, poem, anything you like)!
If you are unsure of what to make, select one member of the course staff and make art about that person.
(All answers, even blank answers, will get full credit for this question. We won’t judge what is and isn’t "art")
Text for answer:
Enter your answer here
If you want to upload an image or some sort of file:
Please select file(s) Select file(s)
Save Answer
Save All Answers Submit & View Submission