COMP30023: Introduction to projects
COMP30023
1 Introduction
In this document, we will introduce how COMP30023 projects are structured, and outline how you can make the most of the infrastructure which we provide.
Firstly, some attributes and expectations:
- As per the handbook, there will be 2 projects weighted 15% each
- Projects are to be completed individually
- Submissions must be written in C
- Submissions must compile and run on COMP30023-provided VMs, and should produce deterministic output
- Submissions will be checked for plagiarism
2 Submission
The submission process may be slightly different from what you are used to. We expect:
- Code to be submitted to your assigned Gitlab repository named comp30023-2022-project-x in the subgroup with your username of the group comp30023-2022-projects on gitlab.eng.unimelb.edu.au.
- AND the full 40-digit SHA1 hash of your chosen commit submitted to the relevant project assignment on the LMS.
Failure to complete both successfully will result in mark deductions.
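To find the full 40-digit hash of the commit you intend to submit, you can ask git directly. A minimal sketch (the throwaway repository below exists only to make the example self-contained; in practice, run `git rev-parse HEAD` inside your project repository):

```shell
# Create a throwaway repo purely so this sketch is runnable end-to-end.
repo=$(mktemp -d) && cd "$repo" && git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

hash=$(git rev-parse HEAD)           # full 40-character SHA1: submit this
short=$(git rev-parse --short HEAD)  # abbreviated hash: do NOT submit this
echo "$hash"
```

Copy the full hash exactly as printed; a shortened hash will not identify your commit unambiguously.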
Upon submitting the hash, you should receive an automated comment indicating whether your submission is valid/reachable on Gitlab. An example success message is: Taking commit 2e24d35658d1b552952d226f5362fa5863d09a3c from 2021-05-19T10:21:49Z, day 0.
Note that it acknowledges the hash which you submitted and the time of submission (given to us by Canvas). From this, we calculate the day number: day 0 means that the submission is on time, day 1 means that it's 1 day after the deadline, and so on.
We do not perform calculations on fractional days; thus, it is up to you to consider whether it's worthwhile to make further submissions (if submitting late).
If you have an extension, please note that we will subtract your extension from this day number when calculating your project mark.
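As a rough illustration of the arithmetic, here is a hedged sketch. The exact rounding rule is an assumption (whole days, with any part-day counted as a full day, consistent with "no fractional days"); the deadline, submission time, and extension length below are all made up:

```shell
# Hypothetical deadline and submission timestamps (not from the spec).
deadline=$(date -u -d '2022-04-15T18:00:00Z' +%s)
submitted=$(date -u -d '2022-04-17T10:00:00Z' +%s)

if [ "$submitted" -le "$deadline" ]; then
  day=0
else
  day=$(( (submitted - deadline + 86399) / 86400 ))  # round part-days up
fi

ext_days=1                      # hypothetical extension of 1 day
eff=$(( day - ext_days ))       # extension is subtracted from the day number
if [ "$eff" -lt 0 ]; then eff=0; fi
echo "day $day, effective day $eff"
```

Here a submission 40 hours late yields day 2, and a 1-day extension reduces that to an effective day 1.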
3 Compilation
Unlike COMP20003 and COMP20007, we do not impose the usage of skeleton files or a particular project structure (in terms of files/directories). You're welcome to use either gcc or clang, and anything in the C standard library (libc) and POSIX, provided it works on the VM, but excluding calls to other libraries/services.
To ensure that your projects can be successfully tested, please make sure that there is a Makefile at the root of your repository, which compiles the executable(s) defined in the specification, also to the root of the repository.
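A minimal Makefile sketch satisfying these constraints (the executable name `program`, the source file names, and the flags are placeholders, not taken from any specification):

```make
# Hypothetical names: "program", main.c, and util.c are placeholders.
CC      = gcc
CFLAGS  = -Wall -O

# Default target: build the executable at the repository root.
program: main.o util.o
	$(CC) -o program main.o util.o

%.o: %.c
	$(CC) $(CFLAGS) -c $<

# `make clean` should work, exit with status 0, and leave the
# repository in its committed state.
clean:
	rm -f program *.o
```

Note that recipe lines must begin with a tab character, and that `clean` should remove every generated file so no dirty artifacts remain.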
You're welcome to write test cases/scripts (possibly in other languages) and use other build systems during implementation. Feel free to keep these files for the final submission.
You can, but do not have to, include copies of input/dependent files in your repository (if any), as they will be provided by the CI/marking environment.
Please do not hard code file paths for test cases or files, or make assumptions about where your executable will be launched from.
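To see why this matters, here is a small sketch: the marking environment may launch your executable from any working directory, so paths should come from argv rather than being baked in. The script `show` stands in for a compiled C program; all names are placeholders:

```shell
# Set up a demo directory with an input file and a tiny "program".
dir=$(mktemp -d)
printf 'hello\n' > "$dir/input.txt"
cat > "$dir/show" <<'EOF'
#!/bin/sh
cat "$1"    # uses the path it was given, not a baked-in "./input.txt"
EOF
chmod +x "$dir/show"

# Launch from an unrelated directory: still works, because the path
# is passed as an argument rather than assumed relative to the cwd.
cd / && out=$("$dir/show" "$dir/input.txt")
echo "$out"
```

A program that instead opened a hard-coded `./input.txt` would fail here, because `/input.txt` does not exist.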
4 Testing
A breakdown of visible and hidden marks will be provided in project specifications.
We will endeavour to set up Gitlab Pipelines before the release of each project. The CI (Continuous Integration) will allow you to test your code against visible cases and receive some limited feedback before the submission deadline.
You can access the results of automated tests by following these steps:
- Ensure that .gitlab-ci.yml is at the root of your repository. Once it is there, every pushed commit should trigger automated tests against your code.
- After pushing a commit, you should see either a green tick mark, a blue pending progress indicator, or a red cross to the right of the commit information. Click this icon.
- On the next screen, click the icon again, and access the test pipeline stage.
- This will bring you to the very bottom of the test transcript for the latest commit.
Note that the marks shown in the results table of the transcript are indicative of the marks you will receive for visible test cases (and tasks). Please compare them with the number of possible marks indicated in the specification.
The CI will fail (with a red cross) when any visible test case fails, and you may receive an email about a failing pipeline from Gitlab.
We do not care whether you use the CI when implementing your projects. However, CI usage may be taken into account when considering extensions, plagiarism, and submission-related issues.
Marks for automated components are final, unless an error or alternative solution has been identified, in which case all submissions will be re-tested and marks retroactively awarded (when higher). There will be no partial marks given for uncompilable or incorrect programs, nor for implementation effort.
Please note that when your submission fails in CI, it will likely fail the same way in the marking environment. In this case, debug your program on your VM. Make sure that you're happy with what the CI reports.
5 Reading the Transcript
Here is an example transcript, for your reference. As it's pretty straightforward, you may want to skip this section.
5.1 Header
The header should show that the test was executed on a COMP30023 runner, and the version of the test script.
Running with gitlab-runner 13.11.0 (7f7a4bb0)
on comp30023-primary oyhRRUn …
$ /test.sh
COMP30023 2022 Project 1 Before Deadline Tests v1, last modified 09/
5.2 Commit Log
Next is the commit log. The top commit is the one that's being tested.
Commit log:
8ec56c2f8ced6bbcabb6a946a24ef50d3f9d9278: Remove strip.
9544fc6320a3c26672147d31f8112c8e1315b90c: feat: -e optional parameter
5.3 Compilation
The compilation process will then be shown. If you're missing marks for build quality, please look to this section.
Common issues:
- make clean is not implemented, or fails with a non-0 status code
- There are dirty files committed to the repository, or make clean is non-functional
- Running make does not produce the required executable
- Code files were marked with the executable bit; use chmod -x to remove it
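Clearing a stray executable bit across all source files can be done in one pass. A sketch (the temp directory and the `*.c` pattern are assumptions; adjust the pattern to your own file layout):

```shell
# Simulate the mistake in a throwaway directory: a .c file with +x set.
dir=$(mktemp -d) && cd "$dir"
touch detect.c && chmod +x detect.c

# Clear the executable bit on every .c file under the current directory.
find . -name '*.c' -perm -u+x -exec chmod -x {} +

if [ -x detect.c ]; then echo "still executable"; else echo "bit cleared"; fi
```

Remember to commit the permission change afterwards, as git tracks the executable bit.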
make -B && make clean (output suppressed)

make clean
rm -f detect *.o

make
gcc -c detect.c -Wall -O
gcc -c util.c -Wall -O
gcc -c intslabarr.c -Wall -O
gcc -o detect detect.o util.o intslabarr.o -O
OK — ./detect found
5.4 Test case execution
Task 1 1_0: Passed
Task 1 1_1: Passed
Task 2 2_5: Passed
Task 2 2_6: Passed
Task 2 2_8: Passed
Each test will:
- Come under a task in the assessment criteria
- Have a unique name (this will reflect the names of sample test cases, if given) and perhaps a short description
- Have a weight
- Pass or fail. Please see the specification on whether there are partial marks at the test case level.
- Diffs or error messages may be provided
5.5 Results table
Finally, there is the results table (from another project).
=============== START RESULTS TABLE ====================
Task 1: Process/file stats.
Task 2: Execution time 1.
Task 3: Deadlock detection 1.
Task 4: Process termination (1 deadlock) 1.
Task 5: Process termination (>1 deadlock) 1.
Task 6: Challenge 0
Task 7: Quality of software practices #CODE_QUALITY#
Task 8: Build quality.
Project 1 (Total): #TOTAL_MARKS#
================ END RESULTS TABLE =====================
Rows with ## are manually marked or excluded from CI. This may include reports, challenge tasks, code quality, etc.
Additionally, note that hidden cases are not included in the CI. Marks indicated are for visible cases only. As such, the maximum mark in each section on CI is half the total allocation.
6 Conclusion
This concludes our brief introductory guide to COMP30023 projects.
Please let us know if you find any issues with our infrastructure and feel free to ask for clarification on any confusing aspects.