
Project 1: Ray Tracer

COMP30019 – Graphics and Interaction
Semester 2, 2022

This project is individual work (30 marks).

Due: 4th September 2022, 23:59 AEST

Assignment Brief

You are tasked with building a ray tracer. Your ray tracer will output a single static PNG image, based on an input scene file and command line arguments. We have provided you with a template C# implementation that you will need to complete. We are not using the Unity engine in this project; however, you may find that some of the theory in this assignment is transferable to Unity development (particularly the maths).

The assignment is broken down into numerous stages and steps. Our expectations for each stage and the respectively allocated marks are outlined in detail below. You should aim to complete each step in sequence, since this will make the process less overwhelming.

There are various approaches to modelling how light interacts with surfaces in a scene. Almost always, the choice of approach comes down to a trade-off between computational complexity and realism. A ray tracing based approach can produce very realistic images, however this comes with a significant computational cost that generally makes it unsuitable for real-time rendering. Even if there are no real-time rendering requirements, we still have to approximate and optimise the ray tracing process, since simulating all rays in a scene is computationally intractable.

Template code

You will be given a GitHub repository to work on your project that is already pre-initialised with the template code. This is a private repository, so you may commit/push to it without worry of other students having access to your work. You are expected to use GitHub from the start through to the end of the project, and should commit and push frequently. We won't accept submissions not hosted in your private repository.

A link to accept the assignment and automatically create your template repository is provided on the Canvas project page (where you found this specification document). Note that you may submit the assignment as many times as you wish; only the latest will be marked.

Stage 1 – Basic ray tracer (9 marks)

You will first implement the basic functionality of a ray tracer. At its core, ray tracing is an application of geometry and basic linear algebra (vector maths will become your bread and butter!). For example, a ray of light can be modelled by two three-dimensional vectors: a starting position and direction. Surfaces, light sources, and other entities in the environment can also be defined using vectors. Using geometry, it is possible to calculate how a ray reflects off a surface, or perhaps even refracts through it. Ultimately we are interested in simulating rays of light propagating throughout the environment, interacting with various surfaces, before finally reaching the viewer as pixels on their screen. If we are clever in utilising real-life physical models for these interactions, we can generate incredibly realistic scenes. In this first stage you will implement some basic vector functionality, and figure out how to shoot a ray for each pixel in a rendered image. We won't yet be worrying about materials, lighting, shading, etc. Such fancy stuff will come later in the assignment.

Stage 1.1 – Familiarise yourself with the template

Before writing any code, try to understand how the template provided to you works. We have already taken care of quite a few details for you, such as input and output handling. A sample input scene is provided to you in a text file (tests/sample_scene_1.txt), and a parser for this file has been written so you can access objects and resources directly within the Scene class (src/scene/Scene.cs). The core ray tracing logic (which you will write) should be implemented inside the Render() method in Scene.cs. This method takes an Image object for which you can set the individual colour of each pixel, as well as derive properties such as its width and height. When the program is run, this image will automatically be outputted as a PNG image file. Try running the project so that you can see this in action. Open up the terminal in Visual Studio Code (or your preferred environment), and run [1]:

dotnet run -- -f tests/sample_scene_1.txt -o output.png

Although this looks like a bit of a mouthful at first, all it is doing is running the project with two command line arguments: an input text file (-f) and an output image file (-o). The input file will be read and parsed, and the output image written accordingly. Open the generated output file, and you will notice the entire image is black, since no ray tracing has been implemented yet. Before continuing, test your understanding by modifying the project code to output the image entirely in white instead.

Hint: Try using some loops inside the Render() method. The Image class has Width and Height properties which should be handy for determining the loop bounds. These properties are already determined by the command line arguments -w and -h, if specified.

[1] Note that you may need to install the .NET SDK if you haven't already, otherwise dotnet run won't be available. Make sure you install version 6.0. You can find it at https://aka.ms/dotnet-download.

Now take a look at the main Program.cs file. In the OptionsConf class, you can see all of the potential command line arguments and their default values (these are the values used if that argument is not specified at runtime, e.g., not entered on the command line). Don't change these default values; instead, pass values using the appropriate flags on the command line if you want to change parameters. At this point it's worth stressing that you should not modify the Program.cs file at all. Doing so risks breaking our automated test suites when running your project during marking (see the Submission section for details).

Stage 1.2 – Implement vector mathematics

We have provided you with a C# struct template for representing a three-dimensional vector (src/math/Vector3.cs). Write code to complete the missing operations, which are currently empty methods. Note that for convenience we have overloaded operators [2] such as +, *, /. This is a handy language feature that allows us to perform vector arithmetic concisely:

Vector3 a = new Vector3(0, 1, 0);
Vector3 b = new Vector3(1, 1, 0);
Vector3 c = a + b; // We overloaded + so c = (1, 2, 0)

As well as basic arithmetic, you will also need to complete functions to compute the dot product and cross product (at least). The dot product will tell you how much two vectors point in the same direction, and the cross product of two vectors will give you the vector which is perpendicular to both of them (e.g., crossing the x- and y-axes would give you the z-axis).
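
As a rough guide, the two products might look like the following inside the existing struct. This is a minimal sketch that assumes the Vector3 components are exposed as X, Y and Z; check src/math/Vector3.cs for the actual member names before copying anything:

public double Dot(Vector3 with)
{
    // Sum of component-wise products: positive when the vectors point
    // broadly the same way, zero when perpendicular.
    return X * with.X + Y * with.Y + Z * with.Z;
}

public Vector3 Cross(Vector3 with)
{
    // Determinant expansion: yields a vector perpendicular to both operands.
    return new Vector3(
        Y * with.Z - Z * with.Y,
        Z * with.X - X * with.Z,
        X * with.Y - Y * with.X);
}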

It is strongly recommended that you test your implementations here thoroughly. Vectors are utilised everywhere in this project, so a mistake here can lead to a major headache down the line.

Stage 1.3 – Fire a ray for each pixel

We have already provided you with a ray structure (src/math/Ray.cs). Notice that it is simply a position (origin) and a direction, both represented as vectors. While it is possible to trace rays forwards from light sources in the scene, it is far more efficient to trace rays backwards from the camera. This is because most rays in the scene will never be seen by the viewer, and computing these would be a waste of resources. Inside the Render() method, write code to iterate through each pixel and construct a corresponding ray that fires into the world. The biggest challenge here will be converting from a two-dimensional pixel coordinate to a three-dimensional ray. You might find it useful to consult external resources about the maths here.

[2] https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/operator-overloading

In this project we want you to use a left-handed coordinate system. The camera should be situated at the origin of the scene (0, 0, 0) looking forward along the positive z-axis (into the screen), with the positive x-axis pointing right, and the positive y-axis pointing up. You should ensure that there is a horizontal field-of-view (FOV) of 60 degrees. As a sanity check, a ray at the very center of the rendered image should point in the direction (0, 0, 1). Rays at the corners of the image should have directions (±i, ±j, k), where i = j if the image is square. You should ensure that your solution works when different output image widths/heights are specified. For non-square images the vertical FOV should vary to maintain the correct aspect ratio.
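
One possible way to structure the pixel-to-ray conversion is sketched below. The PixelToRay and Normalized() names, and the exact Image/Ray members, are assumptions based on the template's style, so adapt them to your own code:

Ray PixelToRay(int x, int y, Image image)
{
    // Half of the 60 degree horizontal FOV, in radians.
    double fovScale = Math.Tan(30.0 * Math.PI / 180.0);
    double aspect = (double)image.Width / image.Height;

    // Sample the pixel centre and map to [-1, 1] (flip y so +y is up).
    double ndcX = (x + 0.5) / image.Width * 2 - 1;
    double ndcY = 1 - (y + 0.5) / image.Height * 2;

    // Scale x by the FOV; derive the vertical extent from the aspect ratio.
    Vector3 dir = new Vector3(ndcX * fovScale, ndcY * fovScale / aspect, 1);
    return new Ray(new Vector3(0, 0, 0), dir.Normalized());
}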

Stage 1.4 – Calculate ray-entity intersections

In this project a scene can contain three types of primitive entities: planes, triangles and spheres. If you haven't already, open the template classes provided in the src/primitives folder:

  • Plane.cs: Represented by a point (center) and a vector representing the direction it faces (normal, i.e., perpendicular to the actual surface of the plane). Note this defines an infinite plane.
  • Triangle.cs: Represented by three points (v0, v1, v2). A clockwise winding order defines the front face of the triangle.
  • Sphere.cs: Represented by a point (center) and a radius.

All of these classes implement the interface SceneEntity. This means they all contain a method called Intersect(), which takes a Ray as its input and returns a RayHit as its output. The returned RayHit structure contains important information used during ray tracing: the incident ray direction, the position of the hit, and the normal of the surface at that position. It is also possible that there is no hit at all, in which case null should be returned instead of a RayHit instance. Your job is to implement the Intersect() method for all three primitive entities. Again, you may find it helpful to research common mathematical approaches to these problems if you are stuck.
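
To give a flavour of the maths involved, here is a sketch of the sphere case using the standard quadratic formulation. It assumes Ray exposes Origin and Direction, that RayHit can be constructed from a position, normal and incident direction, and that center and radius are the sphere's fields; check the template for the real signatures:

public RayHit Intersect(Ray ray)
{
    Vector3 oc = ray.Origin - this.center;
    double a = ray.Direction.Dot(ray.Direction);
    double b = 2 * oc.Dot(ray.Direction);
    double c = oc.Dot(oc) - this.radius * this.radius;
    double disc = b * b - 4 * a * c;
    if (disc < 0) return null; // the ray misses the sphere entirely

    // Prefer the nearer root; fall back to the far root if the ray
    // origin is inside the sphere.
    double t = (-b - Math.Sqrt(disc)) / (2 * a);
    if (t < 0) t = (-b + Math.Sqrt(disc)) / (2 * a);
    if (t < 0) return null; // the sphere is behind the ray

    Vector3 position = ray.Origin + ray.Direction * t;
    Vector3 normal = (position - this.center).Normalized();
    return new RayHit(position, normal, ray.Direction);
}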

Stage 1.5 – Output primitives as solid colours

You are finally ready to generate some graphical output! Earlier you computed a ray for each pixel in the image. Extend this code to check for intersections with primitives in the scene. You will need to make further additions to the Render() method. For each ray, iterate through every entity in the scene and check whether there is an intersection between the ray and the entity. If so, you should set the corresponding pixel colour to the colour of the object. Ensure you correctly handle cases where there is more than one entity that coincides with a ray. In case your object-oriented programming is rusty, here is a template for looping through all primitives/entities in the scene and checking if ray intersects with them:

foreach (SceneEntity entity in this.entities)
{
    RayHit hit = entity.Intersect(ray);
    if (hit != null)
    {
        // We got a hit with this entity!
        // The colour of the entity is entity.Material.Color
    }
}

Note that entity is typed as an interface, so we don't know exactly which type of primitive it is (plane, triangle or sphere), but that does not matter since we are only interested in the intersection itself.
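
When multiple entities coincide with a ray, only the nearest hit should determine the pixel colour. A sketch of that bookkeeping follows; the LengthSq() helper and the image.SetPixel() call are assumptions about the template's API:

SceneEntity closestEntity = null;
double closestDistSq = double.MaxValue;

foreach (SceneEntity entity in this.entities)
{
    RayHit hit = entity.Intersect(ray);
    if (hit == null) continue;

    // Keep whichever hit is nearest to the ray origin.
    double distSq = (hit.Position - ray.Origin).LengthSq();
    if (distSq < closestDistSq)
    {
        closestDistSq = distSq;
        closestEntity = entity;
    }
}

if (closestEntity != null)
{
    image.SetPixel(x, y, closestEntity.Material.Color);
}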

We have provided you with sample outputs in the images folder, so you have an indication of how your output should look for stages 1 and 2 respectively.

Stage 2 – Lighting and materials (9 marks)

In this stage you will extend the ray tracer to handle lighting, and model different types of materials. Some materials are more trivial to compute than others, and this complexity ultimately boils down to how light rays interact with them. Note that every entity is assigned a material. The material contains properties that allow us to calculate how light interacts with the entity: for example, its colour, whether it is opaque, reflective, transparent, etc. Open src/core/Material.cs to see our representation of a material in this project. Note that you already used the Color property in the previous stage.

Stage 2.1 – Diffuse materials

We will first consider the case where a ray coincides with a diffuse surface which is directly illuminated by a light source. When light hits an ideal diffuse material, it scatters uniformly in all directions. This means it is viewer-independent, and the intensity only varies depending on the angle of incidence between the light source and the surface. Diffuse lighting is so trivial to compute that it is regularly used in real-time rendering techniques (not just ray tracing). In this stage you need to extend the ray tracer to handle materials with the Diffuse type. Objects should be smoothly lit when this is implemented correctly. As a starting point, take note of where you set the colour of a pixel currently. Instead of outputting the material colour directly, you should compute it based on the following function:

C = (N · L) Cm Cl

…where N is the normal of the surface at the hit point, L is the direction to the light source from the hit point, Cm is the material colour, Cl is the light colour and C is the resultant output colour. Note that all light sources are available in the Scene class, so you will likely have to iterate through these. You should sum the outputs of multiple light sources into the final pixel colour (if there is more than one).
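
In code, the per-light sum might look like the following sketch; the PointLight type with Position and Color members, and the Color operator overloads, are assumptions based on the template's conventions:

Color pixel = new Color(0, 0, 0);
foreach (PointLight light in this.lights)
{
    Vector3 L = (light.Position - hit.Position).Normalized();
    double nDotL = hit.Normal.Dot(L);
    if (nDotL <= 0) continue; // the light is behind the surface

    // C = (N . L) Cm Cl, accumulated over every light source.
    pixel += material.Color * light.Color * nDotL;
}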

Stage 2.2 – Shadow rays

Consider the fact that light rays may be blocked by objects in the scene. This should lead to visible shadows. Extend your implementation to check whether a hit point is in fact in a shadow. You can do this by firing another ray towards the light source from that point, and checking if it hits a (closer) surface along the way. If there is a hit, then that light source should not contribute to illumination at that point.

Be careful when firing a ray away from the surface of an object. Numerical error could lead to a premature hit with that same object! One solution is to offset the origin of the ray slightly away from the surface.
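
A sketch of a shadow test incorporating this offset is shown below; the InShadow name, the LengthSq() helper and the epsilon of 1e-4 are assumptions you may need to adapt or tune:

bool InShadow(RayHit hit, PointLight light)
{
    Vector3 toLight = light.Position - hit.Position;
    double lightDistSq = toLight.LengthSq();

    // Nudge the origin off the surface to avoid self-intersection.
    Ray shadowRay = new Ray(hit.Position + hit.Normal * 1e-4,
                            toLight.Normalized());

    foreach (SceneEntity entity in this.entities)
    {
        RayHit block = entity.Intersect(shadowRay);
        if (block == null) continue;

        // Only occluders strictly between the point and the light count.
        if ((block.Position - shadowRay.Origin).LengthSq() < lightDistSq)
            return true;
    }
    return false;
}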

Stage 2.3 – Reflective materials

This is where ray tracing really starts to shine no pun intended! Extend the ray tracer to handle materials with theReflectivetype. When a ray hits a reflective material, another ray should berecursivelytraced to determine the colour at that point. To do this you need to calculate a reflection vector as a function of the hit points surface normal and the incident ray direction. This should bepure reflection the colour of the material plays no role in the calculations. Note that if there are a lot of reflective surfaces in a scene, computational costs can blow out significantly, so you may wish to place a hard limit on the depth of recursion (i.e., how many new reflection rays you are willing to fire).
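
The reflection direction itself is a one-liner once your vector maths is in place, using R = I - 2(N · I)N. The sketch below assumes a recursive Trace() routine of your own and an arbitrary depth cap of 10:

Color TraceReflection(RayHit hit, int depth)
{
    if (depth > 10) return new Color(0, 0, 0); // hard recursion limit

    Vector3 I = hit.Incident;
    Vector3 R = I - hit.Normal * (2 * hit.Normal.Dot(I));

    // Offset the origin slightly, as with shadow rays.
    Ray reflected = new Ray(hit.Position + hit.Normal * 1e-4, R.Normalized());
    return Trace(reflected, depth + 1);
}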

Stage 2.4 – Refractive materials

Some materials are transparent, and allow light to transmit through them. Glass is an example of such a material. Unfortunately, simulating this effect in a realistic manner is not as simple as allowing an incident ray to pass directly through the object. Indeed, you may have observed that light can bend through transparent mediums (take a look at any curved glass object). This phenomenon is known as refraction. Extend the ray tracer once again to handle materials with the Refractive type. In a similar way to how you handled reflection, upon a ray hitting a surface, you should recursively trace a ray through the object according to the physical laws of refraction. The colour of the material should not play any role in the calculations at this stage. Note that materials have an additional RefractiveIndex property, which will come in handy here.
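
One common formulation of Snell's law is sketched below, assuming the operator overloads from stage 1.2 (including unary minus). It returns null on total internal reflection, in which case you should fall back to pure reflection; flipping the normal for rays exiting the medium is a standard trick rather than something the template mandates:

Vector3? Refract(Vector3 incident, Vector3 normal, double index)
{
    double cosI = -normal.Dot(incident);
    double eta = 1.0 / index; // assume the ray enters from air
    Vector3 n = normal;

    if (cosI < 0) // the ray is inside the object, exiting
    {
        cosI = -cosI;
        n = -normal;
        eta = index;
    }

    double k = 1 - eta * eta * (1 - cosI * cosI);
    if (k < 0) return null; // total internal reflection

    return (incident * eta + n * (eta * cosI - Math.Sqrt(k))).Normalized();
}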

Stage 2.5 – The Fresnel effect

In the real world, refraction does not really occur in total isolation from reflection. When light hits a refractive surface, some proportion of it is reflected, while another proportion is refracted (these proportions sum to 1 since energy is conserved). This proportion is not uniform for all rays which hit the surface. As a ray's angle of incidence increases (i.e., the ray strikes the surface at a more grazing angle), there is greater reflection versus refraction. If you look at a sheet of glass front on, you will see that most of the light is refracted (transmits through). However, if you look at it almost side-on, it looks a lot more reflective! This phenomenon is known as the Fresnel effect. Your next task is to improve refractive materials so that reflection is mixed into the corresponding lighting calculations according to the Fresnel equations. Note that this means that two rays need to be traced for every one ray that coincides with a refractive material. If this process repeats itself multiple times, the computational burden increases exponentially, so keep this in mind when coding your solution.
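
Schlick's approximation is a popular, cheap stand-in for the full Fresnel equations; whether you use it or the exact formulas is up to you. A sketch, where cosI is the cosine of the angle between the incident ray and the surface normal:

double SchlickReflectance(double cosI, double index)
{
    double r0 = (1 - index) / (1 + index);
    r0 *= r0;
    return r0 + (1 - r0) * Math.Pow(1 - cosI, 5);
}

// The final colour then blends the two recursive traces, e.g.:
// Color c = kr * reflectedColour + (1 - kr) * refractedColour;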

Stage 2.6 – Anti-aliasing

You may have noticed that the images being produced so far contain somewhat jagged edges. This is because details in the scene can differ at the sub-pixel level when they are projected onto the final image. This is a common problem in computer graphics generally, and is called aliasing. Aliasing is usually quite visible where there are curved edges, or edges that are not aligned horizontally or vertically with the screen (think about why). We can use various techniques to mitigate this problem, and this process is called anti-aliasing. Modify your ray tracer to incorporate optional anti-aliasing during rendering. You should do this by firing more rays per pixel and then averaging the outputs for the final colour. There is another command-line argument accessible within your program which specifies the anti-aliasing multiplier you should use when rendering the scene. Here is an example:

dotnet run -- -f tests/sample_scene.txt -o output.png -x 2

The argument -x specifies this multiplier, which in this example is 2. This means you should fire twice as many rays both horizontally and vertically (4x rays per pixel). If the multiplier is 3, then you should fire three times as many rays in both directions (9x rays per pixel). And so on. Note that we have already parsed this command-line argument for you! It is accessible within the Scene class as options.AAMultiplier, so you don't need to worry about how to read it into your program.
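
A uniform supersampling loop driven by this multiplier might look like the sketch below; the TracePixel() helper taking fractional pixel coordinates is hypothetical, standing in for however you generate and trace a single ray, and the Color arithmetic assumes the usual overloads:

int m = options.AAMultiplier;
Color sum = new Color(0, 0, 0);

for (int sy = 0; sy < m; sy++)
{
    for (int sx = 0; sx < m; sx++)
    {
        // Centre of each sub-pixel, offset within [0, 1).
        double ox = (sx + 0.5) / m;
        double oy = (sy + 0.5) / m;
        sum += TracePixel(x + ox, y + oy);
    }
}

image.SetPixel(x, y, sum / (double)(m * m));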

Stage 3 – Advanced add-ons (9 marks)

In this stage you are given the opportunity to implement some advanced add-on effects of your choosing. Some are more trivial to implement than others, and the allocated marks reflect their approximate difficulty and/or time commitment. In completing these questions to a high standard, we expect that you research various approaches, and make informed decisions to maximise the outcomes of the intended effects. You should write some detailed comments in your README.md file which describe the approach you have taken. It is not possible to receive more than the allocated marks for this section (9 marks maximum), so if you complete more add-ons than required, clearly state which ones you want to contribute to your mark! N.B.: Regardless of whether you complete this stage or not, you are still encouraged to show off your work by submitting a custom scene (see below).

Render quality

For a few of these add-ons, it may be useful to have a render quality parameter, which allows the user to trade between computation time and output image quality. We have provided an additional command line argument -q that you are free to utilise for this. For add-ons where it's relevant, our test suite may increment the quality level and re-render a test output (starting at the default quality level of 0). This will be repeated until some reasonable time limit is reached, or the image does not change. The exact details of this process will depend on available resources during marking, however all submissions will be tested in the same environment, and with the same time limits, in order to ensure fairness.

You do not have to utilise the -q parameter. It's there purely so you have the opportunity to show what your ray tracer is capable of without having to be ultra-conservative with respect to time. It only applies to select stage 3 add-ons, and won't be used elsewhere during testing. It is up to you to determine if it is relevant to the problem you are solving.

Stage 3, Option A – Emissive materials (+6)

Up until this point we considered lights to be infinitely small points. This is an approximation, and not how lighting typically works in the real world. Consider a fluorescent bulb, which is a long cylindrical shape. It would be inappropriate to model this using a singular point. Even a standard light globe, which comes close to being modelled by a point, is still a physical object with a surface area that emits light. We call such surfaces emissive. For this add-on you need to extend the ray tracer to handle emissive materials. As a consequence there should also be soft shadows present in the scene (have a think about why). Note that the MaterialType enum has an Emissive type, which will be used to specify whether the material should be emissive. The colour of the material can be interpreted to be the colour of the emitted rays. You may assume only spheres and triangles will be given emissive materials (not infinite planes).

Stage 3, Option B – Ambient lighting/occlusion (+6)

The current ray tracer only simulates a very small subset of rays in a scene. Consider how the illumination of a point on a diffuse object is currently calculated. We test if there is a direct ray originating from a light source. If there isn't, or there is an object in-between, then the point isn't illuminated at all. However, in reality, there may still be some illumination that comes from indirect rays of light (e.g. bouncing off other surfaces in the scene). This is called ambient lighting. When ambient lighting is computed, we consequently compute ambient occlusion in the scene, since indirect light will not illuminate all surfaces equally. For example, surfaces inside crevices and cracks tend to be a lot darker since they are less exposed to the other surfaces in the scene. Extend the ray tracer so that it is possible to optionally compute ambient lighting in a rendered image. You should utilise the -l command line flag, which is available as options.AmbientLightingEnabled in the Scene class. In other words, if and only if -l is passed to your program, ambient lighting should be enabled.

It is infeasible to compute every ray in a scene, so you may end up using some sort of randomised or Monte Carlo method to assist with this.

Stage 3, Option C – OBJ models (+6)

Extend the ray tracer to read simple 3D models in the form of .obj files. Research how the .obj file format is structured. We only expect that you handle files with vertex, normal and face definitions (v, vn, f), and faces may be assumed to be triangles (exactly three vertices). Be sure to consider how more complex models could impact the performance of your ray tracer, and optimise relevant data structures accordingly. If you do implement optimisations, ensure these are discussed in your README.md. We have provided you with a template class in the src/extensions folder called ObjModel.cs. In the same manner as the sphere, plane and triangle primitives defined earlier, ObjModel has an Intersect() method that needs to be implemented. You should read/parse the .obj file in the constructor, and use this data when computing intersections. We have provided you with a sample scene (tests/sample_scene_obj.txt) that contains an OBJ model, so you should examine this file first to see exactly how OBJ models are defined. In particular, observe there is a string that is the path to the .obj file itself. Also notice that we specify an offset vector and scale as parameters when loading a model. These parameters allow us to adjust the size and position of the model to fit the scene appropriately. The scale should be applied first, then the offset, as these operations are order-dependent.
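
As a starting point, a minimal parser for the three supported definitions might look like this sketch; the field and method names are assumptions. Note the .obj format uses 1-based indices, and face entries may take the forms "i", "i/j", "i//k" or "i/j/k":

// Requires System.IO and System.Collections.Generic.
private readonly List<Vector3> vertices = new List<Vector3>();
private readonly List<Vector3> normals = new List<Vector3>();
private readonly List<(int, int, int)> faces = new List<(int, int, int)>();

private void Parse(string objFilePath, Vector3 offset, double scale)
{
    foreach (string line in File.ReadLines(objFilePath))
    {
        string[] p = line.Split(' ', StringSplitOptions.RemoveEmptyEntries);
        if (p.Length == 0) continue;
        switch (p[0])
        {
            case "v": // vertex position: scale first, then offset
                vertices.Add(new Vector3(double.Parse(p[1]),
                    double.Parse(p[2]), double.Parse(p[3])) * scale + offset);
                break;
            case "vn": // vertex normal (directions are not offset)
                normals.Add(new Vector3(double.Parse(p[1]),
                    double.Parse(p[2]), double.Parse(p[3])));
                break;
            case "f": // triangular face; keep only the vertex indices here
                faces.Add((int.Parse(p[1].Split('/')[0]) - 1,
                           int.Parse(p[2].Split('/')[0]) - 1,
                           int.Parse(p[3].Split('/')[0]) - 1));
                break;
        }
    }
}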

Stage 3, Option D – Glossy materials (+3)

At the moment we can model a few common materials, but there are still many material types which cannot be adequately simulated. Add support for glossy materials to your ray tracer. The MaterialType enum has a Glossy type, which you should add functionality for. Unlike refractive materials, no light should transmit through the object. However, the object should not be completely reflective either. Instead, the object should be coloured based on the material's Color property, but also appear to reflect some light. Exactly how you achieve this effect is up to you, but you should aim to make it convincing. If there are parameters that need to be tuned to determine the amount of glossiness, choose suitable defaults that demonstrate the effect.

Stage 3, Option E – Custom camera orientation (+3)

Currently the camera is assumed to be at the origin (0, 0, 0) looking along the z-axis (with the y-axis oriented up). Allow the user to apply a custom position and orientation to the camera based on the following parameters: a position vector (x, y, z), an axis of rotation vector (x, y, z) and an angle (in degrees). Note that this is an axis of rotation, not the direction or local z-axis of the camera. The axis and angle are specified via the following command-line parameters, where # denotes a floating point number:

  • --cam-pos #,#,# (available as options.CameraPosition)
  • --cam-axis #,#,# (available as options.CameraAxis)
  • --cam-angle # (available as options.CameraAngle, in degrees)

These parameters are automatically parsed for you by the template code, so all you need to do is utilise the respective variables within the Scene class (e.g., as options.CameraPosition).
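
Rodrigues' rotation formula is one way to apply the axis-angle rotation to each camera-space ray direction, before translating the ray origin by options.CameraPosition. This sketch assumes the stage 1.2 operator overloads and a Normalized() helper:

Vector3 RotateAxisAngle(Vector3 v, Vector3 axis, double degrees)
{
    double theta = degrees * Math.PI / 180.0;
    Vector3 k = axis.Normalized();
    double cos = Math.Cos(theta);
    double sin = Math.Sin(theta);

    // v' = v cos(theta) + (k x v) sin(theta) + k (k . v)(1 - cos(theta))
    return v * cos + k.Cross(v) * sin + k * (k.Dot(v) * (1 - cos));
}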

Stage 3, Option F – Beer's law (+3)

We currently consider transmission of light through a medium to be unaffected by that medium, other than its refractive index. Beer's law states that light intensity should drop as light passes through a medium, and that the drop is proportional to the distance travelled. Additionally, some wavelengths of light may be absorbed faster than others, giving rise to coloured semi-transparent objects. Modify the ray tracer to obey Beer's law for Refractive materials. Currently the colour of refractive materials has no effect in lighting calculations, but now it should be utilised. You should interpret this field to contain the absorbance of each colour channel. For example, the RGB value of (4, 1, 1) means that red light will be absorbed at a faster rate than blue and green light.
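
Concretely, exponential attenuation per channel might be applied like this sketch, where distance is how far the refracted ray travelled inside the medium; the per-channel R/G/B accessors on Color are assumptions:

Color Attenuate(Color c, Color absorbance, double distance)
{
    // Beer's law: I = I0 * exp(-absorbance * distance), per channel.
    return new Color(
        c.R * Math.Exp(-absorbance.R * distance),
        c.G * Math.Exp(-absorbance.G * distance),
        c.B * Math.Exp(-absorbance.B * distance));
}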

Stage 3, Option G – Depth of field (+3)

The camera is currently modelled assuming an infinitely small pinhole aperture. If you think about it, a physical camera cannot function like this, since no light can pass through an infinitely small hole! We may introduce additional parameters to our ray tracer that allow us to more accurately model how a real camera works. This allows us to generate a depth of field effect in our rendered images, which means some parts of the scene are in focus, and other parts are blurry, depending on a given focal length. Extend your ray tracer to account for two additional parameters: a focal length, and an aperture radius. These will be passed via the following command-line parameters, where # denotes a floating point number:

  • --aperture-radius # (available as options.ApertureRadius)
  • --focal-length # (available as options.FocalLength)

We have already parsed these parameters for you, and you can access them in the Scene class (e.g. as options.FocalLength). The aperture radius parameter has a default value of 0, which reflects the idealised pinhole model we have used up until this point. However, if this value is larger than 0, then a circular pinhole should be simulated with the corresponding radius. Surfaces at a distance equal to the focal length (away from the camera) should be in focus.
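
One way to realise this is thin-lens sampling: find the point the ideal pinhole ray would hit on the focal plane, then jitter the ray origin across the aperture disc and re-aim at that point, averaging several such rays per pixel. A sketch follows; the uniform disc sampling and helper names are assumptions, and it presumes the default camera looking down the positive z-axis:

Ray DepthOfFieldRay(Ray pinholeRay, Random rng)
{
    // The point that must stay in focus, FocalLength away along z.
    double t = options.FocalLength / pinholeRay.Direction.Z;
    Vector3 focal = pinholeRay.Origin + pinholeRay.Direction * t;

    // Uniform sample on the circular aperture.
    double r = options.ApertureRadius * Math.Sqrt(rng.NextDouble());
    double phi = 2 * Math.PI * rng.NextDouble();
    Vector3 origin = pinholeRay.Origin
        + new Vector3(r * Math.Cos(phi), r * Math.Sin(phi), 0);

    return new Ray(origin, (focal - origin).Normalized());
}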

Finally – Show off your work! (3 marks)

Congrats! At this point, your ray tracer should be able to generate some pretty impressive images. Now is your chance to show off what it can do. The tests/final_scene.txt file is currently empty. You should populate this scene in order to demonstrate all the core features of the ray tracer, as well as your chosen add-ons. Choose suitable settings to render your final image, and replace the images/final_scene.png placeholder image with it. Notice that this image is already embedded in the README.md. Also record how long it takes to render the image on your PC and include that in the README.md (see the template provided). In case we need to verify your image, you must also include the exact command line used to generate it. We might use lower resolution/anti-aliasing/quality settings to speed up the verification process.

Verify that your image is visible on GitHub (instead of the placeholder image) to ensure that you receive credit for it.

Assessment

Unless explicitly stated, grading of your submission will be exclusively based on the outputs produced by your ray tracer, not your code. We will use an automated process to pass hidden test scenes and/or specific command line arguments to your program. Failing tests won't receive credit! Therefore, you must ensure that you do not change the inputs or outputs of the provided template. Furthermore, don't rely on just the provided test scenes; think about possible edge cases and write your own! We will run your ray tracer with both the provided test scenes and some hidden test scenes.

Ray tracers are resource intensive by nature, so we don't expect test scenes to be rendered instantly. However, you should take the necessary steps to optimise the process where it is sensible. For practical reasons, time limits will apply, but they will be chosen based on our sample solution to very comfortably allow your project to generate correct output images if implemented reasonably. Occasionally the time taken to generate images may be part of your assessment (e.g., if specific optimisations are applicable to an add-on you have implemented). You are encouraged to discuss any time-related dilemmas you faced in your README.md.

Finally, feel free to ask questions about the specification on the discussion forum (Ed). However, before doing so, please refer to our live FAQ, which will be pinned for the duration of the project. This is based on questions from previous years, and will be updated further as new questions roll in!

Submission

You must submit the project via the respective project details page on Canvas (where you found this specification). There is an embedded Gradescope interface on that page that allows you to link your GitHub account, as well as the private repository containing your work. You may re-submit your project as many times as you wish, and only the final submission will be marked, so you are strongly encouraged to attempt a test run of this process comfortably before the deadline.

Your submission must conform to the exact scene and command line specification that is provided with the template, since our testing system is semi-automated. If you do not modify the core template code, then this should not be an issue. Additionally, external libraries, dependencies, or packages are not allowed. Make it clear in your git README.md which add-ons you wish to be marked, and briefly describe your approach for these.

Remember, you should frequently commit your work to GitHub while working on the project. Even though you are working solo, this serves as a backup and proof of your work on the project.

Late submissions will incur a penalty of 3 marks per day after the final due date. Penalties also apply when not using Git/GitHub correctly, not referencing sources correctly, or not following instructions according to the specification generally.

Academic honesty

Your submission must be your own work. Sharing code with other students is not allowed for this assignment, although you may discuss the project with classmates at a conceptual level. We will use a sophisticated code similarity checker between all student submissions, including those from previous years. You are welcome to utilise external resources for research purposes; however, you must attribute any sources you use, both in your code and your README.md. You should minimise blindly copying and pasting code, even if referenced, and deductions may apply if there is little evidence of understanding due to an over-reliance on external resources. Feel free to clarify this with your tutor if you are unsure.