
Academic Session: 2020-

Coursework 1: Chaining

Giles Reger

Before we begin a few points:

  • Submission is via git for this lab – see the section at the end of this document about submission. You should use your comp24412 repository (branch lab1) as a starting point. On some machines you may need to run
git checkout origin/lab1
git checkout -b lab1
git branch --set-upstream-to=origin/lab1 lab1
git pull
if just running git checkout lab1 doesn't work.
  • The exercises contain a lot of text as they contain a lot of explanation.
  • The exercises are designed such that about 80% of the marks can be gained by following the instructions and explaining your understanding, with the remaining 20% being more creative. However, I would expect the last 20% to take at least as long as the first 80%, if not longer.

Learning Objectives

At the end of this lab you should be able to:

  1. Explain the behaviour of forward and backward chaining
  2. Demonstrate the matching and unification algorithms
  3. Implement optimisations to the forward and/or backward chaining approaches

Building Reasoning Tools with Chaining

The aim of this coursework is to get some insight into how to build reasoning tools. It focuses on the relatively simple forward and backward chaining reasoning approaches. I have provided minimal working versions for you to explore and improve.

You should make use of the forward_reasoner.py and backward_reasoner.py Python 3 programs. To run these you need to have the Antlr4 Python 3 runtime installed, which can be done via pip, e.g. by running python3 -m pip install antlr4-python3-runtime.

I have written these programs to try and emulate what you have seen in lectures. I have tested them a reasonable amount. If you find bugs please report them to me (Giles).

The forward reasoner will compute the closure of the knowledge base and then answer queries. The backward reasoner will prompt for queries and try and answer each query (like in Prolog).

I have defined a RuleLanguage that matches closely with what was introduced in lectures. The only element that may be surprising is the way disequality (e.g. X != Y) is treated. Neither chaining algorithm supports carrying around information on what should not unify. Therefore, a disequality used as a guard in a rule will only check whether the current values for the respective terms unify or not.
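To illustrate this behaviour, the sketch below (invented names, constants and variables only; not the provided code) evaluates a disequality guard purely against the current bindings at the moment it is reached:

```python
# Illustrative sketch: a guard X != Y succeeds only if the current values
# for X and Y fail to unify. Variables are uppercase strings; terms here
# are just constants or variables.

def subst(term, bindings):
    """Chase a variable through the bindings to its current value."""
    while isinstance(term, str) and term[0].isupper() and term in bindings:
        term = bindings[term]
    return term

def terms_unify(s, t, bindings):
    """Tiny unifier over constants/variables only."""
    s, t = subst(s, bindings), subst(t, bindings)
    if s == t:
        return True
    # an unbound variable unifies with anything
    return (isinstance(s, str) and s[0].isupper()) or \
           (isinstance(t, str) and t[0].isupper())

def disequality_holds(x, y, bindings):
    # X != Y is checked here and now: it cannot record a constraint
    # that X and Y must stay different later in the search
    return not terms_unify(x, y, bindings)

bindings = {"X": "a", "Y": "b"}
print(disequality_holds("X", "Y", bindings))  # True: a and b differ
print(disequality_holds("X", "Z", bindings))  # False: Z is unbound, so it unifies
```

Note how an unbound variable makes the guard fail: since the reasoner cannot carry the "must not unify" constraint forward, the check is only meaningful once both sides are instantiated.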

Feel free to edit any of the code to add extra printing or improvements; indeed, the last exercise in this part requires you to do just that. The parser is quite basic and it hasn't got any nice error-reporting features. It also fails to support some things you might want, e.g. it won't allow blank lines in knowledge bases. If you want to edit the input language itself then you will need to install Antlr4 and run the command antlr -Dlanguage=Python3 RuleLanguage.g4^1.

Warm Up

Before you begin the actual exercises let's make sure everything is running fine. As a warm-up, try to write out your own Datalog knowledge base (which you created in Lecture 4) and run it through the forward reasoning tool. The syntax is a bit different to what we saw in lectures as we're using a comma for conjunction. The Datalog knowledge base I built in Lecture 4 is

war(napoleonic)
war(crimean)
war(firstWorldWar)
battle(waterloo,napoleonic)
battle(balaclava,crimean)
battle(somme,firstWorldWar)
event(chargeOfTheLightBrigade,balaclava)
after(crimean,napoleonic)
after(firstWorldWar,crimean)
after(X,Y), partOf(Z,X) => after(Z,Y)
after(X,Y), partOf(Z,Y) => after(X,Z)
after(X,Y), after(Y,Z) => after(X,Z)
battle(X,Y) => partOf(X,Y)
event(X,Y) => partOf(X,Y)
partOf(X,Y), partOf(Y,Z) => partOf(X,Z)

in this syntax. Write your knowledge base in a file called mykb and run

python3 forward_reasoner.py mykb

(^1) See more hints on using Antlr4.

At this point it might give some (often slightly unhelpful) syntax errors. Once you've fixed those (ask on Piazza or in labs if you need help) you'll be able to see the computed closure.
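For intuition about what "computing the closure" means, here is a minimal sketch of naive bottom-up forward chaining over ground Datalog. The tuple representation of atoms and all helper names are invented for illustration; this is not the code in Forward.py:

```python
# Facts and rule-body atoms are (predicate, args) tuples; variables are
# uppercase strings. Each "big loop" fires every rule against all known
# facts and stops when no new fact appears.

def match_atom(pattern, fact, bindings):
    """Match one body atom against a ground fact, extending bindings."""
    if pattern[0] != fact[0] or len(pattern[1]) != len(fact[1]):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern[1], fact[1]):
        if isinstance(p, str) and p[0].isupper():   # variable
            if bindings.setdefault(p, f) != f:      # repeated vars must agree
                return None
        elif p != f:                                # constant mismatch
            return None
    return bindings

def match_body(body, facts, bindings):
    """Yield every binding that satisfies all atoms in the body."""
    if not body:
        yield bindings
        return
    for fact in facts:
        b = match_atom(body[0], fact, bindings)
        if b is not None:
            yield from match_body(body[1:], facts, b)

def closure(facts, rules):
    """Naive fixpoint: repeat the big loop until nothing new is derived."""
    facts = set(facts)
    while True:
        new = set()
        for body, (pred, args) in rules:
            for b in match_body(body, facts, {}):
                derived = (pred, tuple(b.get(a, a) for a in args))
                if derived not in facts:
                    new.add(derived)
        if not new:
            return facts
        facts |= new

facts = {("after", ("crimean", "napoleonic")),
         ("after", ("firstWorldWar", "crimean"))}
rules = [([("after", ("X", "Y")), ("after", ("Y", "Z"))],
          ("after", ("X", "Z")))]
result = closure(facts, rules)
print(("after", ("firstWorldWar", "napoleonic")) in result)  # True
print(len(result))  # 3
```

The example uses a fragment of the warm-up knowledge base: one application of the transitivity rule derives the single missing after fact, and the second big loop derives nothing, so the fixpoint is reached.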

Now have a play. You can type in queries or exit. See what happens. For my knowledge base I'll try these queries:

  • war(waterloo)
  • after(waterloo,napoleonic)
  • after(somme,waterloo)
  • partOf(X,crimean)
  • after(X,balaclava)

Now lets take a quick look at the code that makes this work.

Open up forward_reasoner.py and see if you can identify which bit of the code is handling the queries. What is the is_datalog function doing?

Open up Forward.py and look at the forward_datalog function – can you relate this to what you saw in lectures?

Open up Matching.py and look at match and match_body – do they do what you expect? Can you already see opportunities for optimisation?
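A point of orientation before reading the real code: matching is one-sided unification, where only variables in the pattern (the rule-body atom) may be bound and the fact is treated as fixed. A toy sketch of the idea, with invented names rather than the actual Matching.py functions:

```python
# One-sided matching: variables appear only on the pattern side.
# Atoms are (predicate, args) tuples; variables are uppercase strings.

def match(pattern, fact, bindings=None):
    if pattern[0] != fact[0] or len(pattern[1]) != len(fact[1]):
        return None
    bindings = dict(bindings or {})
    for p, f in zip(pattern[1], fact[1]):
        if isinstance(p, str) and p[0].isupper():   # variable in the pattern
            if bindings.setdefault(p, f) != f:      # repeated vars must agree
                return None
        elif p != f:                                # constants must be identical
            return None
    return bindings

# match binds X; the fact side is never changed
print(match(("after", ("X", "napoleonic")), ("after", ("crimean", "napoleonic"))))
# -> {'X': 'crimean'}

# unlike unification, matching fails when the variable is on the fact side
print(match(("after", ("crimean", "napoleonic")), ("after", ("X", "napoleonic"))))
# -> None
```

One optimisation opportunity you might already spot: any implementation in this style scans every stored fact for every body atom, so filtering candidate facts first (e.g. by predicate name) can cut the work dramatically.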

If you have any questions at this point feel free to ask on Piazza etc., but you should be warmed up enough to start the exercises.

Exercise 1: Exploring Forward and Backward Reasoning

In this exercise you should use the provided knowledge bases (in the kb directory) and write your observations in a file exercise1.txt. This exercise asks a number of questions but you will be marked on the depth of the understanding displayed in your answers; you do not need to answer all questions to get full marks.

The rich knowledge base. This is a very basic knowledge base (sometimes used as an example in lectures). Use both reasoners to find out if giles is happy (e.g. if the knowledge bases entail happy(giles)). For forward chaining you should run

python3 forward_reasoner.py kb/rich

and then enter the query happy(giles) when prompted. For backward chaining you should run

python3 backward_reasoner.py kb/rich

and do the same. Once you've concluded whether happy(giles) holds, now ask who is happy – how do you phrase this as an entailment query? Remember to write down any observations/answers.

The datalog_kb knowledge base. This is one of the knowledge bases used as an example to demonstrate forward chaining. How many big loops of the forward chaining algorithm does it take to compute the knowledge base's closure? How many additional facts are entailed by the knowledge base? You can run

python3 forward_reasoner.py kb/datalog_kb 1

where the 1 indicates the verbosity level, e.g. how much the reasoner should print. What does the forward reasoner do at level 1? Now use the backward reasoner to ask what you have, e.g. ask the query has(you,X). Carry on asking for queries and note what happens – can you explain the behaviour?

The family knowledge base. This knowledge base captures some facts and relations from the family of the Greek gods in Greek mythology. This knowledge base is quite big but definitely not very big. Take a look at it before you run the reasoners. There are three kinds of initial (extensional) facts and a set of rules defining family relations. Note that the ancestor relation is transitive. Now run the forward and backward reasoners. How long do they take? Note that you might have to wait a while (a couple of minutes on slower machines). Check that both reasoners give the same answer for ancestor(hera,X), e.g. the descendants of the goddess Hera – how many are there? Now ask both reasoners for the siblings of Ares – do both reasoners give the same answers? If not, why not – what are the different reasoners doing? Note that you can run the backward reasoner with verbosity level 1 to see the search tree – it will print an answer whenever it reaches a leaf of the and-or tree and backtrack when a node has been fully explored.
The lists knowledge base. This should also look a bit familiar. This knowledge base captures some properties of lists, but our language doesn't have an inbuilt notion of lists so functions are being used, e.g. we build up lists as emptylist, list(a,emptylist), list(b,list(a,emptylist)), and so on. Before we run the forward reasoner on the knowledge base, what do we expect from the rules? Is this a Datalog knowledge base? How many facts are entailed by the knowledge base? Now run the reasoner (use the 1 verbosity argument) and note what happens. Now use the backward reasoner to answer some queries. What does the reasoner do (currently) when there are no matches? Are all the answers what you expect? If the answers are different from what you expect can you explain what is happening – and should it be happening (e.g. is the reasoner doing the right thing or does it miss some cases)?
Exercise 2: Making some Knowledge Bases

In this exercise you should create a set of new knowledge bases and describe them in a file exercise2.txt. The aim is to demonstrate properties of the reasoning approaches by their behaviour on certain knowledge bases. Create knowledge bases (named one to five) as follows:

  • one – A Datalog knowledge base with the facts f(a,b), f(b,c), f(c,d), and f(e,f) that computes the transitive, symmetric, and reflexive closure of f – note the relationship between reflexivity, transitivity, and symmetry. The closure should contain 20 facts.
  • two – A non-Datalog knowledge base with one fact and one rule that has at least 10 facts in its closure. Ideally the closure will not be infinite.
  • three – A non-Datalog knowledge base giving non-terminating behaviour for forward chaining.
  • four – A Datalog knowledge base and query giving non-terminating behaviour for backward chaining.
  • five – A knowledge base that causes backward chaining to give the same answer (i.e. with the same value for X) to the query hello(X) at least five times in a row. Ideally the knowledge base would not contain the same fact more than once.

None of the above knowledge bases need to terminate/not-terminate in the cases not mentioned. Note that your knowledge bases should contain just the required facts and rules. Any comments, explanations, or requested queries should go in the exercise2.txt file so that we can run your knowledge bases automatically.
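As a sanity check for knowledge base one, the expected fact count can be computed directly with sets in plain Python, independently of the reasoners: the chain a–b–c–d collapses into one component contributing 16 pairs, and {e,f} contributes another 4.

```python
# Reflexive, symmetric, transitive closure of f over the given facts,
# computed as a fixpoint over a set of pairs.
from itertools import product

edges = {("a", "b"), ("b", "c"), ("c", "d"), ("e", "f")}

closure = set(edges)
changed = True
while changed:
    changed = False
    new = set()
    for (x, y) in closure:
        new.add((y, x))          # symmetry
        new.add((x, x))          # reflexivity (on mentioned elements)
        new.add((y, y))
    for (x, y), (y2, z) in product(closure, closure):
        if y == y2:
            new.add((x, z))      # transitivity
    if not new <= closure:
        closure |= new
        changed = True

print(len(closure))  # 20: all 16 pairs within {a,b,c,d} plus all 4 within {e,f}
```

If your knowledge base one produces a different count, compare its closure against this set to see which pairs are missing or extra.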

Exercise 3: How does it Work?

In this exercise you will explore how the reasoners work. You will modify the programs and write any observations in a file exercise3.txt.

Firstly, take a look at Unify.py and note that it contains a main method that runs some tests. Run the tests and see if they pass. Now add similar tests to Matching.py for all three functions (match, match_body, and matchWithFunctions).

Secondly, take a look at Forward.py. This contains a simple forward chaining algorithm for Datalog knowledge bases and a more general one for general knowledge bases in the Prolog fragment (containing non-ground facts and function symbols). What is the main difference between the two algorithms? One of the differences is that in the general algorithm we store facts in normalised form by renaming all of the variables. Why do we do this? It might help to consider the knowledge base

isalist(emptylist)
isalist(list(X,emptylist))
isalist(list(X,list(Y,emptylist)))
isalist(L) => longer(list(X,L),L)

where you try turning off this normalisation (change normal = name_apart(f,f.get_vars()) to just normal = f and similarly for normal_new_fact). Put it back after you've answered the question.
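The renaming step itself can be sketched as follows (the names here are invented; the provided name_apart may differ in detail). The point is that every variable in a stored fact is replaced by a globally fresh one, so facts derived at different times never accidentally share variables:

```python
# Rename every variable in a term to a globally fresh Var<n>.
# Terms are either atoms/compounds as tuples ("f", arg1, arg2, ...),
# variables as uppercase strings, or constants as lowercase strings.
import itertools

_counter = itertools.count()

def name_apart(term):
    mapping = {}  # consistent renaming within one term
    def rename(t):
        if isinstance(t, tuple):                   # compound: rename the args
            return (t[0],) + tuple(rename(a) for a in t[1:])
        if isinstance(t, str) and t[0].isupper():  # variable: make it fresh
            if t not in mapping:
                mapping[t] = f"Var{next(_counter)}"
            return mapping[t]
        return t                                   # constant: unchanged
    return rename(term)

# Both facts are written with the variable name X, but they are
# independent statements; after renaming they share no variables.
print(name_apart(("isalist", ("list", "X", "emptylist"))))
print(name_apart(("isalist", ("list", "X", ("list", "Y", "emptylist")))))
```

Without this, the two isalist facts above would appear to share X, and matching one of them could wrongly constrain the other — which is exactly the failure mode to look for when you switch normalisation off.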

Thirdly, let's try to understand the substitution being built up by the backward reasoner. Save this knowledge base

fact(f(g(a)),g(a),h(a,X))
thing(h(a,b))
thing(h(a,a))
fact(X,Y,Z), X=f(Y), thing(Z), Y = g(W), Z = h(W,W), f(U) != W => anotherFact(W)

and run the backward reasoner using verbosity level 1. Now ask the query anotherFact(X) and extract the series of substitutions that were built up to answer the query. How does the reasoner get from the final substitution to the answer? What happens to the substitution when the reasoner backtracks? Now if you run with verbosity level 5 you will see all of the unification attempts. Do any of these fail? If so, why?
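One way to picture the step from final substitution to answer is chasing the query variable through the accumulated bindings. This is a toy sketch, not the reasoner's actual representation, and the example bindings are only the rough shape you might see for this query:

```python
# Chase a variable through a substitution until it stops mapping to anything.
def resolve(term, subst):
    seen = set()                      # guard against cyclic bindings
    while term in subst and term not in seen:
        seen.add(term)
        term = subst[term]
    return term

# Rough shape of bindings while answering anotherFact(X): the query's X
# is unified with the rule's W, which in turn gets bound to a constant.
subst = {"X": "W", "W": "a"}
print(resolve("X", subst))  # a
```

On backtracking, the expectation is that bindings added since the last choice point are discarded, so later branches start again from the earlier substitution — the verbosity-1 trace lets you confirm whether that is what actually happens.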

Exercise 4: Improve the Reasoners

In this exercise you should improve the functionality, usability, and/or efficiency of the reasoners. This is a rather open-ended exercise and the work-to-reward ratio for full marks is high (e.g. the amount of work required for that last mark is much higher here than in other exercises). If you have struggled with the previous exercises consider moving on to Part 2. Also, watch out – hacking the reasoners can be a bit addictive; keep an eye on how much time you're spending on this exercise. Below are some ideas for improvements. You definitely shouldn't aim to do all of them. Briefly explain your improvements in exercise4.txt.


  • (Easier) Make forward chaining goal-directed^2 by checking the query against produced facts, stopping on the first one and asking if we should continue, as in the backward reasoner.

(^2) This now means you will have to rerun for every goal, which is perhaps less efficient – it depends how many queries you plan to ask. If you really want to, you could update it to take a series of queries to search for at the same time.

  • Store the set of facts in a data structure that supports matching and/or unification operations (e.g. a term index). If you have a perfect index then this might completely replace the match/unify function. Or you may implement an imperfect index (hint: this is easier) where you still use these functions to filter the result. The most naive index would just filter facts by their name.
  • Make forward chaining incremental such that it uses the new facts to find which rules to trigger on the next step. Getting this as efficient as possible can be tricky, e.g. consider the case f(X), g(X) => h(X) where you have generated a new g(a). Note that the meaning of the knowledge base doesn't change if you reorder the premises. You may (or may not) want to have more than one set of facts.
  • Reorder the body of a rule in forward chaining to make matching the body more efficient. Beware: deciding the optimal order is an NP-complete problem as it is reducible to graph colouring, so you should consider heuristics. For example, for the rule f(X), g(X) => h(X), if there are 1000 instances of f in F and just 1 instance of g it might be better to check g first!
  • Backward chaining can repeat a lot of work by revisiting subgoals, e.g. consider the knowledge base
f(X), g(Y), h(Z) => k(X,Y,Z)
with the query k(A,B,C). It repeats a lot of work when generating each answer (try running it with verbosity 1). A better solution would be a hybrid between forward and backward reasoning where each subgoal is fully answered and the answers remembered for later. To do this you would need to create a new search function that generates a set of substitutions from a search call and combines the results.
  • Non-termination in backward chaining often follows the same pattern, e.g. applying the same rule to itself as in
path(X,Y), link(Y,Z) => path(X,Z)
but logically this is equivalent to
link(Y,Z), path(X,Y) => path(X,Z)
which might not be non-terminating. In this case non-termination can be detected as a sequence of goals such as
>BS with [path(X,Y)],{}
>>BS with [path(Var0,Var1), link(Var1,Var2)]
>>>BS with [path(Var3,Var4), link(Var4,Var5), link(Var1,Var2)]
>>>>BS with [path(Var6,Var7), link(Var7,Var8), link(Var4,Var5), link(Var1,Var2)]
>>>>> ....
Can we apply heuristics to detect loops such as this and terminate that branch? Can we statically detect that this rule will cause this behaviour and reorder the premises of the body? Alternatively, can you think of another way of organising search to avoid the non-termination problem?


  • (Easier) Currently the backward reasoner won't print False if there are no matches; it just prints nothing. Update it so that it prints False when no matches are found.
  • (Easier) Currently the backward reasoner just works on a single query fact, but if we had a conjunction (list) of queries we could just add them as a list of goals. Update the reasoner to work with this.
  • Add the diff predicate to the language, or make the disequality predicate (X != Y) act like this. To achieve this you will need to carry around an anti-substitution (capturing things that should not unify) as well as a substitution. To test your solution try the simple knowledge base
X != Y, fact(X), fact(Y) => one(X,Y)
fact(X), fact(Y), X != Y => two(X,Y)
  • Anything else you can think of! Feel free to check whether it would count as a non-trivial optimisation.
  • Adding non-trivial arithmetic might be a lot of work, but some simple numbers and counting probably wouldn't be (it would require some Antlr hacking).
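As a concrete starting point for the term-index idea above, the "most naive index" (bucket facts by predicate name, then still filter candidates with the real match/unify functions) might look like this sketch; the class and method names are invented, not from the provided reasoners:

```python
# An imperfect index: retrieval by predicate name only. The caller still
# runs match/unify on each candidate, but never inspects facts whose
# predicate cannot possibly match.
from collections import defaultdict

class PredicateIndex:
    def __init__(self):
        self.buckets = defaultdict(list)

    def add(self, fact):
        # fact = (predicate, args)
        self.buckets[fact[0]].append(fact)

    def candidates(self, atom):
        # imperfect: everything with the right predicate name
        return self.buckets[atom[0]]

index = PredicateIndex()
index.add(("f", ("a",)))
index.add(("f", ("b",)))
index.add(("g", ("a",)))

# matching g(X) now inspects one candidate instead of all three facts
print(index.candidates(("g", ("X",))))  # [('g', ('a',))]
```

A perfect index would refine the buckets further (e.g. by the first argument when it is a constant), shrinking the candidate set until the final match/unify call rarely fails.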

Submission and Mark Scheme

Submission is via git. You must work on the lab1 branch and tag your submission as lab1solution. The provided submit.sh script does this for you. Note that you must push your tagged commit to GitLab before the deadline (it is not good enough just to make the commit locally).

Your work will be marked offline and returned to you with comments.

The marking scheme is as follows.

There are 20 marks available in total:

Ex 1 (6 marks) – 1.5 marks for demonstrating depth of understanding for each knowledge base; 0.75 marks for some demonstration of understanding.

Ex 2 (5 marks) – 1 mark for each knowledge base correct and well-described; 0.5 marks if description unclear but still correct.

Ex 3 (4.5 marks) – 1.5 marks for each exploration done well, 0.75 marks if done reasonably.

Ex 4 (4.5 marks) – Full marks for something truly impressive, 4 marks for something complete that includes multiple optimisations (not all Easier) and works well, 3 marks for something that is not Easier and works well, 2 marks for something non-trivial (a few of the Easier optimisations or one non-Easier optimisation) that works (but not necessarily optimally), 1 mark for one of the Easier optimisations (that works).