This document is a collection of exercises and commented source
code (demos). Each chapter corresponds to a single week of the
semester and hence to a single seminar (class). In the seminar in
the first week of the semester (the ‘zeroth’ seminar), you will get
a chance to acquaint yourself with the systems used in the course,
the study materials and basic tooling.
Each part of this collection (all demos and exercises in particular)
is also available as a standalone file, which you can alter and
execute. We call this split version a source bundle. The most
recent version¹ (in all available variants) can be obtained in these
two ways:
In the study materials of the subject in the IS – the PDF and
HTML files in the directory text, and the source bundle in
directories 00 (rules, organisation, technical information,
etc.), 01 through 12 (one directory per chapter = week of
the semester), s1 through s3 (assignment sets) and finally
sol, which contains the reference solutions. We recommend
that you fetch the files in batches using the ‘download as ZIP’
option.
When logged into the student server aisa (either using ssh
or putty), by issuing the command pv264 update. All the
above-mentioned directories will become available under
~/pv264.
This chapter (directory) further contains binding rules and
information. Before you go on with the course, please read it
carefully.
For communication with the course organising team, please use the
discussion forum in the IS (you can find more about it in part
T.1). Please do not e-mail the organisers or the tutors with regard
to this subject, unless you have been specifically directed to do so.
Please direct requests for exemptions, absence excuse notices, etc.
to the study department.
Study materials in this semester will be published as they become
available – the course is being significantly reworked. Before you
start working on practice exercises or assignment sets, please always
make sure that you have an up-to-date version. Practice exercises may
be considered final from Monday morning of the corresponding week;
assignment sets likewise from the morning of the first Monday of the
corresponding block (i.e. the first week's practice exercises from
18.9.2023 0:00 and the first assignment set from 25.9.2023 0:00).
This subject consists of seminars (tutorials), assignment sets and a
final programming test (colloquium). Because it is a programming
course, most of the work in it – and also its grading – will be
focused on practical programming. It is important that you program
as much and as often as possible, ideally every day, but at least a
few times each week. To this end, this collection contains a large
selection of exercises (these are usually very small programs, with
a single- or double-digit number of lines, of which you should be able
to write a couple within an hour) and assignment sets, which are
considerably more extensive (but still only a few hundred lines at
most). In both cases though, the difficulty will increase as time
goes on, so it is important to keep up – do not procrastinate until
the last moment.
Because programming is hard, this course is also going to be hard –
you absolutely need to put in adequate effort. Of course, we hope
that you will successfully complete it, and even more importantly,
that you learn as many new things as possible. Keep in mind that
despite its difficulty, this is only a small step within a long
journey.
The subject is split into 4 blocks (the fourth takes place during
the exam period). Each block in the semester contains 4 chapters
(topics) and the corresponding 4 seminars (classes).
The following sections of this chapter contain binding rules of
the course: we strongly recommend that you read them carefully.²
The remainder is split into chapters such that each chapter
corresponds to a single week of the semester. Important: in the
first week of the semester, you will already be required to solve
and submit practice exercises, even though the first proper class is in
the second week of the semester. The zeroth seminar (class) is
optional, but won't earn you any points either.
The chapters are numbered according to the above table – in the
seminar in the second week of the semester, we will work on the
topic for which you will have already solved and submitted practice
exercises in the first week.
This course requires considerable activity on your part throughout
the semester. This section gives you an overview of important events
in the form of a calendar. The events are marked as follows (more
detailed information on each will follow in subsequent sections):
„#X“ – week of semester,
„cv0“ – zeroth seminar takes place in this week,
„cv1“ – seminar which goes with the first chapter,
„X/v“ – interim verity results for chapter X practice ex.,
„X/p“ – deadline for chapter X practice ex.,
„sX/Y“ – Y-th round of verity testing of assignment set X,
„sX/z₁“ – first round of review grades for set X,
„sX/op“ – deadline for remedial submissions of set X,
„sX/z₂“ – final review grades for set X,
„test“ – available programming test dates.
The most important events are highlighted: submission deadlines for
practice exercises and assignment sets (always 23:59 of the given
day).
To successfully complete the course, you need to obtain 60 points
in each block. There are no other requirements. There is a fair
number of ways to gain points (the exact rules are laid out in the
following sections of this chapter). In the first three blocks
(those within the semester), these are:
for each successful practice exercise 1 point (max. 6 points
each week, or 24/block),
for each practice exercise that also passes ‘verity’ test,
additional 0.5 points (max. 3 points each week, or 12/block),
class attendance³ is rewarded with 3 points (max. 12/block),
up to 3 points for activity in each seminar (max. 12/block).
For practice and seminars, therefore, the theoretical per-block
total adds up to 60 points. Additionally, you can obtain:
10 points for each exercise from the assignment sets
(60/block in total).
In blocks 2 through 4, you can additionally gain points for
submitting high-quality code in the previous blocks' assignment
sets:
at most 5 points per exercise for code quality (30/block).
Finally, block 4, which takes place during the exam period, does not
include seminars nor an assignment set. However, besides code
quality points, you can get:
15 points for each exercise on the final programming test (for
a maximum of 90/block).
To sum things up, you need:
block 1: 60/120 points,
block 2: 60/150 points,
block 3: 60/150 points,
block 4: 60/120 points (does not apply to credit-only
enrolment).
¹ If you are enrolled only for credit (not colloquium), neither the
fourth block nor the final test apply to you. The requirement of
3×60 points during the semester of course still holds.
³ In case your absence is properly excused (and this fact is
recorded in the IS), or your seminar was cancelled (e.g. due to
a state holiday), the points normally awarded for attendance can
be obtained by attending with a different group in the same week.
If this is not possible, you may send 3 regular (r-type)
exercises from the relevant chapter to your tutor (however, you
first need to agree with them which specific exercises those are
going to be). Unexcused (voluntary) absence can only be
substituted by attending a different class in the same week, and
only once per semester.
As was already mentioned, if you wish to learn to program, you need
to spend considerable time actually programming. Moreover, this time
must be distributed throughout a longer time period – trying to cram
80 hours into the last week of the semester is absolutely not going
to work. For this reason, we ask you to solve and submit exercises
from this collection every week. This requirement has another
reason: in order to make the seminars interesting, you need to be
already familiar with the topics that will be discussed. Otherwise,
the seminar would be spent rehashing basics.
Finally, we also ask you that you solve every exercise you wish to
submit alone, without cooperating or discussing the solution with
anyone. Forbidden cooperation will be punished (see also the end
of this chapter).
Each exercise comes with a basic set of tests. Passing those tests
is the only criterion for gaining the base points for solving
practice exercises. After submitting, these same tests will be
automatically executed on your solution and their result will be
written into a notepad in the IS. The purpose of this is to prevent
cases where you accidentally submit the wrong file, or an otherwise
unsatisfactory solution (and do not learn about the mistake until it
is too late). We strongly recommend that you submit early, leaving
enough time to correct any unexpected discrepancies.
In addition to basic (sanity) tests, which run immediately, extended
(verity) tests will run at 23:59 on Thursday and again on Saturday
(right after submission closes).
For each solution that passed the basic tests (sanity), you obtain
one point. For a solution that additionally passes extended (verity)
tests, you gain further half a point (for a total of 1.5). All test
results are made available via IS notepads.
The most important part of this subject is clearly the programming
you will be doing on your own at home – programming is one of those
skills that you can only learn by doing. Group classes clearly
aren't a substitute for that, but they can still be useful in a
couple of ways:
we will analyse problems which you have encountered while
working on your own and discuss their solutions,
there will be space for some team work (with the tutor, in a
pair, in a larger group) where you can, among other things, get a
feeling for how other people deal with programming problems.
The class is split into two approximately equal segments, each
addressing one of the above points. The first part works as follows:
the tutor picks those of your practice exercise solutions that
are interesting for one reason or another (positive or negative),
these solutions will be anonymously shown on the beamer and
their pros and cons will be discussed by the group,
your task is to actively take part in this discussion (you can
ask why something is right or wrong and how to do it better,
you can express your opinion, and of course you should answer
the questions raised by the tutor),
you may claim authorship and explain why you did things the way
you did them, or even dispute any criticism (of course, this is
entirely optional),
in a similar manner, the tutor will select peer reviews written
in the previous week and discuss them with the audience briefly
(the structure of the review, which comments are good and which
are not, whether some comments are missing, and so on) – again,
you are expected to take part,
upon request, unsuccessful solutions can be discussed in a
similar manner (both practice exercises and exercises from sets
which are no longer active).
The second part of the class is more variable, but will always
relate to activity points (each week you can score at most 3
activity points).
In the fourth, eighth and twelfth weeks, midterm programming tests
will take place in this space – in those classes, you will work out
an assigned exercise from this collection, without the help of the
internet (just like at the final test); each successful solution
(one that passes the verity test) will be awarded those 3 activity
points for the week.
In the remaining weeks, the second segment will combine activities
based on r-type exercises from the corresponding chapter (which
particular exercises will be done is up to the tutor, but they may
of course consider your preferences):
You can volunteer to solve an exercise on the beamer,
where it is primarily your responsibility to figure out and
write down the solution, though the audience and/or the tutor
may help you along if you get stuck. You should also explain
what and why you are doing along the way, and answer any
questions the audience asks. With shorter exercises, you may be
also asked to improve test coverage as you go on.
The tutor may assign an exercise for you to solve in pairs –
the first pair to have a working solution then gets to show
their solution to the rest of the class, explain how and why it
works, answer audience questions and perhaps correct mistakes
that come to light. The 3 activity points will be evenly split
among the members of the winning team.
Finally, you may solve an exercise together as a group – the
tutor will write down the code that you come up with; in this
case, however, no activity points are awarded.
Each block comes with a set of 6 assignments which constitute a
significant chunk of available points. You can make up to 12
attempts at passing verity tests on each assignment, evenly
distributed throughout a 4-week period. Submissions will open at
0:00 of the first day of this period (24h before first verity run).
Submission deadlines (evaluation of verity tests) take place every
Monday, Wednesday and Friday at 23:59, according to the following
schedule:
Each assignment comes with a single source file (the skeleton). You
should write your solution into this file and then submit it, the
same as with practice exercises:
using the homework vault sN_name in the IS (e.g. s1_a_life),
by running the command pv264 submit sN_name in ~/pv264/sN,
e.g. pv264 submit s1_a_life.
More detailed instructions can be again found in Chapter T.
The evaluation of your submissions is split into three phases, each
with its own set of tests:
„syntax“ – checks that the submitted program is syntactically
correct, can be compiled and passes basic static checks,
„sanity“ – ensures that submitted programs behave reasonably on
simple inputs – these tests are similar, in their style and
extent, to the tests bundled with other types of exercises,
„verity“ – thoroughly checks correctness of the solution,
including complex inputs, edge cases, and memory errors.
The phases are interdependent – if you fail one of them, the
subsequent phases won't even run (for a given submission, of
course). The key phase is the last one, verity, which you need to
pass in order to get any points at all. The timing of the phases is
as follows:
syntax is checked immediately (within 5 or so minutes of the
submission),
sanity is checked every 6 hours aligned to midnight (i.e. 0:00,
6:00, 12:00, 18:00),
verity is checked on Monday, Wednesday and Friday at 23:59
(see the table above).
Only the latest submission is evaluated and each submission is
evaluated at most once in each phase. The results are, as always, in
the notepad in the IS (each assignment in a separate pad). You can
also get them by running pv264 status.
Each assignment which passes verity (within the given deadline, of
course) counts for 10 points.
For a given successful assignment, you can additionally obtain up to
5 points for writing your code well. These points are counted in the
block during which they have been awarded, i.e. points from reviews
of set 1 count toward block 2.
The maximum number of points per set:
set 1: 60 for correctness in block 1 + 30 for review in block 2,
set 2: 60 for correctness in block 2 + 30 for review in block 3,
set 3: 60 for correctness in block 3 + 30 for review in block 4
(in the exam period).
Automatic tests check the correctness of your programs (within the
limits of practicality – even the strictest tests cannot guarantee
that your program is completely correct). Correctness is, however,
not the only criterion for evaluating programs: it is just as
important that programs are readable. Programs, among other
things, serve to communicate ideas to other people – well written
and adequately commented code should tell the reader what problem it
solves, how the solution works, and in both cases also why.
It is hopefully clear that human-readability of a program can only
be assessed by a human: for this reason, each successfully solved
assignment will be subject to a review. This review will include a
grade, according to criteria laid out in Chapter Z. When grading,
this is how we apply the criteria:
a solution will be graded A if it clearly describes a solution of
the stated problem, is correctly decomposed, is coded without
avoidable repetition using appropriate abstraction, algorithms
and data structures,
grade B will be given to programs with significant shortcomings
in one, or non-negligible shortcomings in two of the areas
mentioned above, e.g.:
the program is correctly decomposed and does not repeat itself,
but uses an unsuitable algorithm or data structure and is
sometimes hard to follow,
uses an optimal algorithm and data structures, and is correctly
decomposed, but locally repeats the same code with minor
variations and sometimes uses misleading or incorrect
subroutine names, variables, etc.,
otherwise good program which uses a very bad algorithm, or
very poorly named variables or is a big monolithic blob of
code with no decomposition whatsoever.
a program will be graded X if you clearly state that you do not
want the program reviewed (in a comment at the top of the file,
e.g. ‘Please do not review.’),
grade C will be given to all other programs, especially those
that combine two or more substantial failings detailed above.
You will get reviews (and grades) the second Tuesday of the next
block. If you are graded B or C, you may resubmit an improved
solution to improve that grade:
you have a week to do this,
the improved program must not fail verity tests,
verity tests run with the same cadence as always, i.e. Monday,
Wednesday and Friday at 23:59.
If the reviewer is happy with your improved program, they will
update your grade.
set    regular sub.    grade (z₁)    resubmission    grade (z₂)
1      20.10.          31.10.        7.11.           14.11.
2      17.11.          28.11.        5.12.           12.12.
3      15.12.          2.1.          9.1.            16.1.
Each final grade will be rewarded according to this scale:
grade A carries a reward of 5 points,
grade B is worth 2 points,
grade X is neutral (no points),
grade C incurs a 1 point penalty (i.e. -1 point).
Points awarded for correctness (based on verity tests) are not
affected by reviews in any way.
Exercises which you fail to solve completely (i.e. in a way that
passes the ‘verity’ check) will not be graded. Nonetheless, you may
find yourself in a situation where you would like feedback on an
‘almost finished’ solution, e.g. because you could not figure out why
it does not work.
Such solutions can become the subject of a joint analysis in the
seminar, in a spirit similar to the discussion of submitted practice
exercises (of course only after submissions for the given set have
closed). If you would like your solution analysed in this way,
arrange it with your tutor, ideally in advance. The fact that you are
the author stays between you and the tutor – your classmates do not
need to know (you can of course claim the code during the discussion,
if you deem it appropriate). The same rules also apply to unfinished
practice exercises (you do, however, need to submit them).
This mechanism is limited by the time available in the seminar – we
cannot guarantee that everyone will get a turn if there is a lot of
interest (in that case, the tutor will pick the solutions they
consider most instructive for the group – hence it is possible that,
even if your particular solution does not come up, you will analyse
a similar problem in someone else's solution).
One of the options for obtaining activity points is writing peer
reviews. The goal of this activity is to practice reading and
evaluating code written by others. However, you can only write
reviews for exercises that you have successfully completed yourself.
For example: if you submit 4 practice exercises in week 2, of which
3 pass verity tests (let's say p1, p2 and p5), in week 3, you
are assigned a single solution of each of these 3 exercises (i.e.
you will be writing one review for each of 02/p1, 02/p2 and
02/p5). The deadline for writing reviews for exercises from the
second week is the same as the deadline for submitting practice
exercises from the third week (Saturday midnight).
Writing those reviews is optional. For each review written, you will
get one activity point, counted in the week in which you wrote the
review (in the above example, that's the third week of the semester,
i.e. they share the ‘slot’ with the 02/r exercise series).
You will only get the points for review if your review has useful
content – it is not enough to write ‘I have nothing to say’ or
‘everything fine’ or ‘nothing to comment here’. If the solution is
good, explain why it is good (see also below). The reviews that
you submit will be read by your seminar tutor and some of them may
be discussed (the next week) in the class, in the same spirit as the
original solutions themselves.
NB: In any given week, it is only possible to obtain 3 points for
activity, regardless of how you get them (reviews, solving an
r-type exercise in the class, midterm test, and so on). If you get
to review more than 3 exercises, you can pick which ones to do, but
you can still only get 3 points.
How to fetch the code to review and how to submit the finished
reviews is detailed in Chapter T. Write your comments directly into
the files you are reviewing. Make your comments brief but useful –
your main goal is to help the recipient to improve their programming
skills.
Try to apply the criteria from Chapter Z (ideally refer to the
specific guidelines, e.g. ‘this variable could use a better name,
see guideline 2.b’). Do not hesitate to point out good code (again,
you can refer to the guidelines here, especially if you consider
part of the code a particularly good example of one) or note that
you have learned something while reading the code.
Always insert comments before the block you are commenting and try
to stick with the following style (using ** helps readers to
tell review comments apart from the original author's comments):
/** A short, one-line remark. **/
For multi-line comments:
/** A longer comment, which should be wrapped to 80 columns or
** less, and where each line should start with the ** marker.
** It is okay to end the comment on the last line of text like
** this. **/
Do not change any existing lines (specifically make sure that any
automatic reformatting of code your editor might do is disabled).
The only permissible operations are:
insertion of new lines (empty or with a comment), or
The exam period makes up the virtual 4th block, which has the same
criteria for success as every other block – you need to collect 60
points. The test:
will take place in a lecture room with school computers,
which will have internet access disabled,
and you won't be able to bring your own materials either,
you will however get a copy of the official study materials:
this exercise collection (without reference solutions for type
e and type r exercises),
an offline copy of cppreference (sans fulltext search),
you may use a text editor or the development environment VS Code,
the g++ and clang compilers, clang-tidy, valgrind and
gdb.
You will have 4 hours to complete the test, which will consist of 6
exercises, graded using automated tests (just like exercises during
the semester), with the maximal total score of 90 points. The
grading of each exercise is binary (either you get all 15 points or
none at all). Code quality will not be reviewed, nor will the code
need to pass clang-tidy. The exercises will be of difficulty
comparable to exercises of types p/r/v in this collection.
During the test, you can submit your solutions at any time (there is
no specific limit on the number of submissions) and each time, you
will get syntax and sanity test results. Part of the test bundle is
a file called tokens.txt, in which you will find 4 codes. You can
copy each code into a comment in one of the exercises once – each
code will reveal a single verity result for a single exercise (the
one into which you pasted it). If you submit with the same code
again (in the same or another file), the code will be quietly
ignored.
The final test will only take place after conclusion of the review
period for the third assignment set. The planned dates⁴ are as
follows:
the second half of the seminar will take the form of a midterm
programming test, about 40 minutes in length. These tests will be
conducted under the same conditions as the final test described
above (and as such, will serve as a sort of a dry run). You
will however only solve a single exercise, for which you can get the 3
points normally awarded for activity in the class.
You must work entirely on your own on every problem you wish to
submit – this applies both to practice exercises and to assignment
sets. This does not mean that you are forbidden from studying for
the subject with others: for this purpose, you can use all the
remaining exercises in this collection (i.e. those that none of you
are going to submit) and of course the countless programming
exercises you can find online.
The exercises which you submit serve to check that you in fact
understand the subject matter and can successfully apply it in
practice. This check is vital for your progress – it is extremely
easy to overestimate how well you understand something that you only
studied passively (by reading, listening to lectures or studying
solutions written by others). Unless you actually write a few
programs of a particular type, you are quite unlikely to actually
understand the problems at hand.
To dissuade you from working around those checks, we will penalise
any unsanctioned cooperation quite strictly. For each transgression
(practice exercises from a single chapter count as a single instance,
while each assignment from a set counts as a separate instance), you
will lose:
1/2 of the points you have received (for all exercises in the
affected week, or for the affected assignment),
10 points in the affected block,
10 points (in addition to the previous 10) from the semester
total.
For instance, if you plagiarise 2 practice exercises in the second
week, and
your total score for practice exercises this week is 4.5 points,
your total score for the first block is 65 points,
you will automatically fail the subject (65 - 2.25 - 10 is less
than the 60 points required to pass the block). The same will happen
if you plagiarise an assignment from the first set (65 - 5 - 10),
etc. If you have sufficient points in the affected block (e.g. 80 -
5 - 10 > 60), you can continue the course, but you are also
penalised 10 points on the sum, which means you need 250 (for
colloquium) or 180 (for credit only) points to pass.
Whether you have worked on a solution together, or one of you solved
the exercise and made their solution available to others is
immaterial. All ‘versions’ of a solution derived from a common base
are going to be penalised equally. Likewise, making your solution
public will be treated like a violation and penalised accordingly,
even if nobody else submits a solution based on the published one.
Finally, please note that plagiarism checks are not time limited in
the same way that regular grading is. If an instance of plagiarism
is detected at the end of the semester or even during the exam
period, the penalties will be applied retroactively.
This chapter serves as review of what are essentially the
prerequisites of this course. Everything in here should already be
familiar. If there are constructs or techniques that you don't
understand, you should either look them up in the study materials
for PB161⁵ or use third-party material (books, online resources) to
catch up.
If you didn't read Chapter A (directory 00 in the source bundle)
yet, please do so now. More information about how to use the files
in this chapter can be found in 00/t3_sources.txt (Section T.3).
The first block of files consists of demos – these mainly showcase the
concepts that are new in each chapter, using small but thoroughly
commented programs. These are named d?_*.cpp in the source bundle.
In this chapter, there are no new concepts, so we will revisit some
of the old ones:
null – working with containers and std::optional
rotsort – leveraging standard algorithms
diff – simple automatic differentiation
The next block of files consists of elementary exercises
(e?_*.cpp), which should generally be very easy and quick to
solve. Reference
solutions for these are provided at the back. For this chapter, it's
these 3 exercises:
count – count vertices reachable from a given vertex
flip – go from map< K, V > to map< V, set< K > >
tfold – fold a binary tree using a given function
A very important part of each chapter is the set of practice
exercises, which are graded (confused? you really need to check out
Chapter A now). You will work on those on your own, each week, for
the remainder of the semester:
Finally, there are the regular exercises, which are going to be
the subject of seminars (classes). Again, reference solutions are to
be found in the back. For the first week, those are:
rekey – map< K, V > × f : K → L ⇒ map< L, set< V > >
tensor – outer (tensor) product of 1D vectors
topsort – topological sort of a directed graph
nfold – N-ary tree fold (input given as a multimap)
integrate – numerical integration of arbitrary functions
In this demo, we will write a few functions that work with
sequence containers that may have holes in them, represented
using std::nullopt. In particular, when applying operations to
those values, we will take the approach that whenever at least
one operand is std::nullopt, the result should also be
std::nullopt.
The first is also the simplest – the pure function filter will
simply remove all missing values and return an appropriately
typed container.
auto filter( const auto &seq )
{
    std::vector< std::decay_t< decltype( **seq.begin() ) > > out;

    for ( const auto &x : seq )
        if ( x.has_value() )
            out.push_back( *x );

    return out;
}
A slightly more complicated pure function is zip, which
operates on a pair of containers using a caller-provided function
(callback) f.
The function, however, takes parameters of the ‘plain’ type
stored in the container, not std::optional – hence the rules
for dealing with missing values must be part of zip itself. The
resulting vector of tuples must again be sparse, since holes are
propagated from the inputs.
auto zip( const auto &seq_a, const auto &seq_b, auto f )
{
    using res_t = decltype( f( **seq_a.begin(), **seq_b.begin() ) );
    std::vector< std::optional< std::decay_t< res_t > > > out;
    auto it_a = seq_a.begin();
    auto it_b = seq_b.begin();

    for ( ; it_a != seq_a.end() && it_b != seq_b.end(); ++it_a, ++it_b )
        if ( it_a->has_value() && it_b->has_value() )
            out.emplace_back( f( **it_a, **it_b ) );
        else
            out.emplace_back( std::nullopt );

    return out;
}
Finally, we will implement a function (again, pure) join which
takes two sequences and a predicate and constructs a sequence of
pairs (in a manner of a Cartesian product), but only if these
pairs pass the predicate.
Again, the provided predicate accepts the base values and does
not deal with std::nullopt in any way. The rules above mean
that if either of the paired values is std::nullopt, we take
the result of the predicate to be std::nullopt, which we are,
fairly naturally, going to interpret as false in a boolean
context.
Overall, this means that no std::nullopt will ever appear on
the output and we reflect this fact in the return type.
auto join( const auto &seq_a, const auto &seq_b, auto f )
{
    using type_a = std::decay_t< decltype( **seq_a.begin() ) >;
    using type_b = std::decay_t< decltype( **seq_b.begin() ) >;
    std::vector< std::tuple< type_a, type_b > > out;

    for ( const auto &a : seq_a )
        for ( const auto &b : seq_b )
            if ( a.has_value() && b.has_value() && f( *a, *b ) )
                out.emplace_back( *a, *b );

    return out;
}
Finally, we test all three functions on a few simple inputs, to
make sure they work (and, perhaps less obviously, that they
compile).
int main() /* demo */
{
    using ivec = std::vector< std::optional< int > >;
    using jvec = std::vector< std::tuple< int, int > >;
Consider the following problem: we are given a container seq
(or an std::range) and a positive number n, which together
describe a rectangular array of width n and height len / n
(where len is the number of elements in seq).
We are going to assume that iterators of seq can be moved
around efficiently (i.e. they support random access), by the
means of std::advance or std::next. This also entails that
std::distance is efficient on those iterators.
We would like to know whether each row of the input array is
sorted up to a rotation, and if so, perform this rotation to make
it sorted. We are going to do this in place. If any of the rows
is not in the required form, then we aren't going to touch
anything and just return false.
This can clearly be done in linear time, so that is what we are
aiming for. Additionally, using a little extra memory (linear in
the number of rows), we can make the algorithm run slightly
faster, so again, we will do that.
bool rotate_sort( auto &seq, int n )
{
Since we can't touch the array until we know for sure that we
can fix all of it, we will check each row and if it is
fixable, make a note of the distance it needs to be rotated.
This will save us some work in the second pass.
std::vector< int > rot;
The ‘brains’ of the algorithm is in the following loop, which
iterates over the rows (note the std::advance by n). We
use the standard algorithm std::is_sorted_until to find a
candidate for the rotation pivot, then we check whether it is
actually one by verifying that the remainder of the sequence
is again sorted. If this is so, we make a note of the
distance to rotate the row and continue to the next row. If
the candidate fails, the row cannot be sorted by rotation and
the effort is doomed – we simply return right away.
for ( auto it = seq.begin(); it != seq.end(); std::advance( it, n ) )
{
    auto end = std::next( it, n );
    auto mid = std::is_sorted_until( it, end );

    if ( mid == end )
        rot.push_back( 0 );
    else if ( std::is_sorted( mid, end ) && *it >= *std::next( it, n - 1 ) )
        rot.push_back( std::distance( it, mid ) );
    else
        return false;
}
We have determined that we can fix the array, so we do that
by using std::rotate. Since the array is rectangular, it is
easy to find the bounds of each row from its sequence number,
and the offset of the pivot is of course stored in rot.
auto b = seq.begin();

for ( unsigned i = 0; i < rot.size(); ++i )
    std::rotate( std::next( b, i * n ),
                 std::next( b, i * n + rot[ i ] ),
                 std::next( b, i * n + n ) );

return true;
}
In this demo, we will implement automatic differentiation of
simple expressions. We will need the following rules:
linearity: ( f + g )′ = f′ + g′,
the Leibniz rule: ( f · g )′ = f′ · g + f · g′,
chain rule: ( f( g ) )′ = f′( g ) · g′,
derivative of exponential: ( eˣ )′ = eˣ.
First, we will define a type, expr (from expression), such
that values of this type can be constructed from integers, added
and multiplied, and exponentiated using the function expnat (to
avoid a conflict with exp from the C standard library; nat
because the base is e, the base of the natural logarithm). While
we are at it, we also use the awkward cnst abbreviation to avoid
a conflict with the const keyword.
Since we need to process expressions using a program, we will
represent them as trees. And since we expect common
subexpressions to be fairly common and we do not provide a
mutation interface, we can allow subtree sharing using
std::shared_ptr.
Since we are working with expressions in a single variable, each
node of type op_t::var represents the same and only free
variable.
struct node
{
using ptr = std::shared_ptr< node >;
enum op_t { cnst, var, add, mul, exp } op;
int cnst_val = 0;
ptr left, right;
};
Now the value-like wrapper which makes working with expressions
considerably more convenient. In particular, this class lets us
add and multiply expressions using the built-in binary operators +
and *. Copying expr values is cheap, because the tree itself
is shared (i.e. it does not need to be cloned), and the copy
operations are easy to write (the compiler will derive all the
required operators and constructors for us).
struct expr
{
    node::ptr ptr;

    expr() : ptr( std::make_shared< node >() ) {}

    /* converting constructor, so that diff and eval below can pass
     * node::ptr subtrees where an expr is expected */
    expr( node::ptr p ) : ptr( p ) {}

    expr( int c ) : expr()
    {
        ptr->cnst_val = c;
        ptr->op = node::cnst;
    }

    expr( node::op_t o, node::ptr left = nullptr,
          node::ptr right = nullptr )
        : expr()
    {
        ptr->op = o;
        ptr->left = left;
        ptr->right = right;
    }

    friend expr operator+( expr a, expr b )
    {
        return { node::add, a.ptr, b.ptr };
    }

    friend expr operator*( expr a, expr b )
    {
        return { node::mul, a.ptr, b.ptr };
    }
};
Now comes the main protagonist – the pure function diff that
accepts a single expr and constructs its derivative (again in the
form of an expr).
expr diff( expr e )
{
    switch ( e.ptr->op )
    {
        case node::cnst: return { 0 };
        case node::var: return { 1 };
        case node::add:
            return diff( e.ptr->left ) + diff( e.ptr->right );
        case node::mul:
            return diff( e.ptr->left ) * e.ptr->right +
                   diff( e.ptr->right ) * e.ptr->left;
        case node::exp:
            return e * diff( e.ptr->left );
    }

    assert( false );
}
Finally, we will implement a pure function eval which takes an
expr and a double which it substitutes for the free variable
and computes the value of the expression.
double eval( expr e, double v )
{
    switch ( e.ptr->op )
    {
        case node::cnst: return e.ptr->cnst_val;
        case node::var: return v;
        case node::add: return eval( e.ptr->left, v ) + eval( e.ptr->right, v );
        case node::mul: return eval( e.ptr->left, v ) * eval( e.ptr->right, v );
        case node::exp: return std::exp( eval( e.ptr->left, v ) );
    }

    assert( false );
}
That is all – we can now test our functions on a few simple
inputs. Notice the variable x which lets us write down
expressions (expr instances) in a very convenient format.
Given a directed graph g and a vertex v, count all the
vertices in g that are reachable from v. The graph is given
as a value which can be indexed to obtain sequences of vertices.
Vertices have a linear order on them, but are otherwise
arbitrary.
Given an std::map from some keys to some values, produce a
reverse mapping (values become keys and vice versa). Since the
values in the original map need not be unique, assign a set of
keys to each value. Do not change the input map.
Given a binary tree t, a ternary function f and a value i,
fold the tree: substitute i for empty subtrees and use f to
combine the value of a node with its two folded subtrees.
The tree is given as a value that has three indirectly accessible
(via the -> operator) attributes: left and right which are
again trees and a value that is of an arbitrary type that can
be passed as a first argument to f. A tree is empty iff it
evaluates to false in an explicit boolean context. The
behaviour is undefined if you try to access attributes of an
empty tree.
In this exercise, we will look at finding elements in a sorted
sequence of unknown length. Specifically, let haystack be
of a type that provides a member function get( std::size_t )
returning an std::optional value such that:
get( i ) returns std::nullopt if i ≥ n, where n is
the size of haystack,
otherwise, get( i ) returns the i-th element of haystack,
indexed from zero.
Write a pure, generic function search( haystack, needle ),
that returns the index of the element of haystack equal to
needle, or std::nullopt if no such element exists. Assume
that elements in haystack are linearly ordered in strictly
ascending order.
The function can make at most O( log n ) calls to get (where n
is again the size of haystack).
Recall that a complete binary tree is a binary tree where each
level, except possibly the last one, contains a maximal number
of nodes, and all nodes in the last level are aligned to the
left. Such a tree can be efficiently stored in an array in
level order, i.e. the array stores the key of the root (level
0), then its children's keys (level 1), then all the keys at
level 2, and so on, with every level being stored from left to
right.
Write a generic function is_bst which receives a complete
binary tree in a vector-like6 object tree, and checks whether
tree represents a binary search tree. Assume the keys stored
in tree are totally ordered by the usual comparison operators.
Repetition of a key is allowed in both subtrees of the node
containing that key.
Write a function shortest_path( g, s, t ) which returns the
length of the shortest path between vertices s and t in the
weighted directed graph g, or std::nullopt should no path
between these vertices exist. Assume that all edges in g have
positive integer weights.
The function must be generic over the types used for
representation of the graph and its vertices. Vertices of the
graph, including s and t, are represented by a handle
type. Assuming g is a graph and u, v are handles referring
each to a vertex of g, the following interface is available:
g.successors( u ) returns a collection of pairs ( v, l ),
where v is a successor of u and l is the length of
the edge from u to v,
u == v if u and v refer to the same vertex,
u != v analogously,
u < v and other relational operators define a linear but
otherwise unspecified ordering.
You can further assume that handles are cheap to copy.
// std::optional< int > shortest_path( … g, … s, … t );
Write a function msort that receives a matrix M with
each row sorted and returns a sorted sequence of all entries in
M. The required time complexity is O( n log r ), where n is the
total number of entries and r the number of rows.
The matrix M is stored in row-major order. That is, the
function receives the entries of M in a sequence values, such
that values contains the first row at the beginning, then the
second row, and so on; and a number cols that equals the number
of columns of M. Assume that values has the correct size for a
given value of cols, and that cols is positive. The entries in
values are totally ordered by the usual comparison operators.
In many applications of matrix algebra, we work with large
matrices that contain a comparatively small number of non-zero
entries. Such so-called sparse matrices can be represented much
more efficiently than by storing all elements.
Your task is to write a function add that performs the usual
element-wise matrix addition of two sparse matrices. That is,
given matrices A and B, the result of add( a, b )
is the matrix A + B.
A matrix is represented by a std::vector of
std::tuple< int, int, T >, where each triple holds a row index,
a column index and the value of the corresponding entry; T is
some arithmetic (i.e. integral or floating-point) type. Entries
not listed in the vector are equal to zero, and conversely, no
entry in the vector is zero. Additionally, assume that the
triples are sorted lexicographically, first by the row and then
by the column index, and that no ( row, column ) pair repeats.
The vector can be empty.
The function returns the result in the same format. Notably,
the resulting vector is sorted in the same way and must not
contain zeroes. The required complexity is O( n_a + n_b ), where
n_a and n_b are the numbers of non-zero elements of matrices
a and b, respectively.
Write a pure function mst which computes the total cost of a
minimum spanning forest8 of a given (undirected) graph. The
vertices of the graph are numbered sequentially starting from 0
and their total number is given by vertex_count.
The cost function, when given a pair of vertices, returns
either the cost of the edge connecting these vertices (as a
positive integer) or std::nullopt if there is no such edge. The
vertices are passed to cost as numbers, e.g. cost( 1, 2 ).
a map (or a similar associative container) with keys of type
K and values of type V and
a function f that turns values of type K into values of
type L,
produces a map from L to sets of V – i.e. the keys are mapped
using f while values are unchanged. Multiple old keys can map
to a single new key – collect all values associated to those keys
in a set.
Given a pair of one-dimensional vectors u and v, each in the
form of an std::array of numbers, return their outer (tensor)
product (a matrix which has the product of the i-th element of u
and the j-th element of v at its i-th row and j-th column).
Write a function, to_binary, which converts a number (given as
digits, a sequence of integers) in an arbitrary input base
(given as base) to a sequence of bits. Assume that the base is
at least 2 and that every input digit is valid (i.e. in range 0
to base - 1). Also assume that the input sequence is not empty
and normalized (i.e. it contains no extraneous leading zeroes).
To constrain a generic (auto) parameter, we simply put the name
of the constraining concept in front of the auto, like this.
Consider a somewhat silly example, where we want to add numbers,
unless one of them is a NaN, in which case we want to get back an
std::nullopt. Of course, for integers, there is no NaN, so for
these types, our ‘safer’ addition is very straightforward:
auto add( std::integral auto a, std::integral auto b )
{
return a + b;
}
The above function will only participate in overload resolution
if both arguments are of integral types. Concepts can also be
relational, in the sense that they can describe a relationship
between two types. In a multi-parameter concept, the constrained
auto takes the place of the first parameter, and library concepts
are designed to take maximum advantage of this arrangement. We
could simply mirror the above using the std::floating_point
concept, but we would run into a problem – add( 1, 1.0 ) would
not compile, since it has mixed arguments. We would like that
call to use the floating-point function, so we can instead
specify that the types can be converted to double – without
actually converting them – the addition operator sees the actual
types that we passed into the function.
auto add( std::convertible_to< double > auto a,
std::convertible_to< double > auto b )
However, at this point we have two new problems. The first and
lesser one is the return type – we want it to be optional, but
of an unknown type that comes out of the addition. Luckily, we
can easily give a name to that type in a trailing return type
(and retain the luxury of return std::nullopt).
-> std::optional< decltype( a + b ) >
The other problem is more serious though: we have introduced an
ambiguity, because integral types are of course convertible to
double, and while the first signature of add might look more
constrained to us, it is not so for the compiler – concepts are
really just predicates on types.
To fix this, we need to additionally constrain this overload, and
trailing requires is just the feature for that – like in a
trailing return, we can use all parameter names here. Other than
that, constraints (requirements) specified this way have the same
weight as those given in front of auto parameters. In this
case, we knock out the overload in which both arguments are
integral types.
    requires( !std::integral< decltype( a ) > ||
              !std::integral< decltype( b ) > )
{
    if ( std::isnan( a ) || std::isnan( b ) )
        return std::nullopt;
    else
        return a + b;
}
While pre-defined concepts are good for readability (they give a
nice declarative description of function arguments), they are not
always sufficient to capture the constraints that we want to
write. A very powerful construct for creating ‘ad hoc’ concepts
on the spot is the requires-expression – not to be confused with
the ‘trailing requires’ clause, which attaches a concept to a
subroutine prototype as a whole.
While in function parameters, we cannot use requires in front
of auto10, we can use it in the trailing requires clause,
giving rise to a ‘double requires’ idiom. Let's see that in
action.
We will define a function min which finds the least element of
a sequence. The canonical way to represent sequences would be
using iterators, so we check whether the sequence type has a
begin method and use it if that is the case.
auto min( const auto &data )
    requires requires { *data.begin(); }
{
    using elem_t = std::decay_t< decltype( *data.begin() ) >;
    std::optional< elem_t > result;

    auto it = std::min_element( data.begin(), data.end() );

    if ( it != data.end() )
        result = *it;

    return result;
}
However, sometimes we get lazy and define a sequence-like type
without iterators, because writing iterators is not that much
fun. We would still like min to work on those, since it can be
easily written using indexing.
auto min( const auto &data ) requires
We need to specify both indexing and size – if we forget
the latter, the overload would also match raw pointers, but
would then fail to compile in the body, causing even more
confusion in the error messages than customary with C++.
requires { data[ 0 ], data.size(); } &&
However, we now have a similar problem with ambiguity as we
had in the previous demo – there are many types which define
both begin and indexing. For these types, both the above
overload and this one would match. That is no good, because
the overload set becomes unusable. So we again need to
disable the undesirable overloads. Keeping in mind that
requires is really just a predicate that checks whether its
argument compiles and is substituted for a boolean value, we
can simply negate it, like any other predicate.
Only a single name can appear in that position, so we are limited to pre-defined concepts – though we will learn how to write our own concept definitions in the second block.
Implement a universal sort procedure unisort, which picks the
most efficient available algorithm based on the type of the
arguments. The arguments are a pair of iterators which describe a
range of elements to be sorted in-place, just like for
std::sort.
The task in this exercise is to implement a solver for simple
linear equations in two variables, of the form a·x + b·y = c,
where the coefficients a, b and c are given as parameters to
eqsolve. The result should be any pair of numbers ( x, y ) that
satisfies the above equation, with the restriction that the
results should be of the same numeric type as the coefficients.
If no solution exists, return std::nullopt. A brute-force search
is sufficient when no easy solutions are available (do pick a
reasonable search order though).
In this exercise, your task will be to implement an efficient
reachability check – i.e. answer the question whether there is a
directed path in the graph that starts in the vertex from and
ends in to. Here comes the twist: the graph may have either (or
both) of these methods:
g.get_successors( v ) which returns an iterable sequence of
all vertices w, such that there is an arc from v to w,
g.has_edge( v, w ) – a predicate that returns true if an
arc from v to w exists, and g.nodes is the number of
vertices (which are, in this case, numbered sequentially,
starting from 1).
One of the interfaces is clearly more advantageous – make sure
you use that one whenever possible.
In C++, it is undefined behaviour to shift a number by a distance
that exceeds its number of bits, which can sometimes be
inconvenient (especially in cases where we don't know the shift
distance upfront). Likewise, negative shifts are not permitted.
Finally, there is the additional inconvenience that whether the
shift operator performs a logical or an arithmetical shift is
tied to the type of its arguments. In this exercise, we are going
to fix all these problems.
Implement a pure function logical_shift( x, distance ) which
takes a number x and returns it shifted by the given number of
bits (to the left if distance is negative, right otherwise). A
similar function arithmetic_shift( x, distance ) will do the
same, but with arithmetic shifting (that is, right shift will
copy the sign bit instead of adding zeroes to the left). Finally
to round off, add rotate( x, distance ) that performs bitwise
rotation.
All these functions should return a value of the same type as
that of x. The type of distance is always int.
In this exercise, your task will be to implement a (generic)
function upcast which takes two parameters, an lvalue reference
to a pointer and an lvalue reference to a value of a possibly
different type.
If the pointer is of a compatible type, it is changed to point to
the value passed in the first argument. Otherwise, the program
shall fail to compile. A pointer to a type T is of course
compatible with type T, but also with all its derived types
(directly or indirectly).
In effect, this function implements a safe upcast to a
pointer-to-base type.
Write a generic function find( c, e ) that receives a
generic container c and a value e and returns an iterator
to an element of c corresponding to e. If the type of c
provides a member function find callable on e (with a
const object c), return the result of this call.
Otherwise, you can assume that the type of c provides member
functions begin and end with the usual meaning of a range
of iterators, and that the type of *c.begin() is the same as the
type of e. In this case, return an iterator it to the first
element between c.begin() and c.end() such that *it == e.
If no such element is found, return c.end().
In this exercise, we will work with a recursive sum type
representing an expression built out of binary operators. Your
task is to write a generic function evaluate( e ) that receives
an expression e and returns the result of evaluating e.
The type of e is std::variant< val_t, op_t >, where val_t
is the type of values appearing in the expression (e.g. int
for an arithmetic expression, bool for a logical expression,
etc.) and op_t is a pointer to a type that represents a binary
operator. The operator type provides member functions left and
right which return the left and right subexpressions,
respectively, and a function apply( a, b ) that returns the
result (of type val_t) of applying the operator to two values
a and b of type val_t.
Note that the function evaluate returns a value of type
val_t. You will need to recover this type from the type of the
argument in order to declare the return type. It is guaranteed
that val_t is the first type parameter of the variant.
In this exercise, we will take a look at formatting nested
vectors of numbers. Write a generic function format that
receives an object v, such that v either is of an arithmetic
type (i.e. it is an integral or floating-point number) or is an
arbitrarily nested vector of an arithmetic type. That is, the
type of v is
std::vector< std::vector< ... std::vector< num_t > ... > >
for some arithmetic type num_t and zero or more applications
of std::vector.
The function should return a string satisfying the following
rules:
an integral value is formatted in the obvious way (i.e. in
base 10, without spaces and possibly with a leading minus
sign),
a floating-point value should be formatted to at least 6
decimal places, optionally excluding trailing zeroes, with dot
as the decimal separator,
a vector should result in a comma-separated list of its
elements, enclosed in curly braces and with spaces for better
readability (e.g. the vector of 1, 2 and 3 should be
formatted as ‘{ 1, 2, 3 }’, the empty vector as ‘{}’, and a
vector of vectors might result in ‘{ {}, { 1, 2 }, { 3 } }’).
Let us combine the ideas of recursive variants and nested
vectors. Write a function flatten that receives a nested
vector v of integers and returns a vector of integers that
contains precisely the elements of v, ignoring any nesting.
The resulting vector should contain the numbers in the same
order as they appear in v.
The type of v allows for heterogeneously nested vectors (i.e.
the nesting depth of individual integers in v may differ), so
v can represent for example the collection
{ 1, 2, { 3, 4 }, {}, { { 5 } } } (whose flattening is
{ 1, 2, 3, 4, 5 }).
Technically, v is of type std::variant< int, inner_t >,
where inner_t is a pointer to an iterable collection (that is, a
type providing the usual iterator operations begin and end)
of elements of type std::variant< int, inner_t >.
Write a function fibarr which takes a single parameter – an
std::array of arbitrary type and size (assume the type is copy-
and default-constructible though) and constructs a copy of the
array, such that the resulting array is at least as big as the
input array, and its size is a Fibonacci number. If needed, pad
the new array with default-constructed values.
You have probably already heard about Conway's Game of Life. In this
assignment, you will implement a similar 2D cellular automaton with slightly
more complex rules. Instead of a single organism, we shall simulate a fight
between two different organisms (blue and orange cells). The position where
a cell has died will be uninhabitable for a few rounds, and we shall have
slightly different rules for when cells come into existence and die. In
addition, our “world” will be unbounded and will contain “poisoned” areas
where no cells survive.
The state of the world is given by an std::map whose keys are 2D
coordinates and whose values are integers ranging from one to six:
number 1 represents a living blue cell,
number 2 represents a living orange cell,
numbers 3 to 6 represent a position where a cell has previously died
(the larger the number, the more time has passed since).
Positions not included in the map are empty.
As in Life, we consider the neighbourhood of a position to be the
positions adjacent in all eight directions, i.e. including diagonals.
The basic rules of evolution are as follows:
If there are exactly three living cells in the neighbourhood of an empty
position, a new cell will be created there in the next round. The colour
of the new cell matches the majority colour of the living cells in the
neighbourhood. Otherwise, the empty position remains empty.
If there are three to five living cells around a living cell (no matter
the colour), the cell remains alive in the next round (and keeps its
colour). Otherwise, the cell will die, and the state of that position in
the next round will be the number 3.
If the state of a position is a number from 3 to 5, it will have a state
one higher in the next round.
If the position has a state of 6, it will be empty in the next round.
The poisoned positions are given extra (as a std::set of coordinates)
and change the basic rules so that all living cells in poisoned positions
and in their neighbourhood always die, and no new cells are ever created
in those positions.
Your goal is to implement the function evolve that gets the
initial state of the world initial, the set of poisoned
positions poison and the number of rounds generations and
returns the state of the world after the given number of rounds.
using pos = std::tuple< int, int >;
using state = std::map< pos, int >;
state evolve( const state &initial, const std::set< pos > &poison, int generations );
Alphabetic characters are those for which std::isalpha() returns
true. It is okay to accept additional whitespace where it makes
sense. For the semantics of (ISO) EBNF, see e.g. wikipedia.
The parser should be implemented as a toplevel function called
parse that takes a single std::string_view argument. If the
string does not conform to the above grammar, return
std::nullopt. Assuming expr is a string with a valid
expression, the following should hold about auto x =
parse(expr):
an x.is_foo() method is provided for each of the major
non-terminals: compound, atom, literal, bool, number,
string and identifier (e.g. there should be an is_atom()
method), returning a boolean,
if x.is_compound() is true, x.size(), x.begin() and
x.end() should provide access to sub-expressions (which
should again provide the same interface),
casting x to bool should yield true for #t, non-zero
numbers, non-empty strings and non-empty compound expressions,
if x.is_number() is true, basic arithmetic (+, -, *,
/) and relational (<, >, ==, !=) operators should
work (e.g. x < 7, or x * x) as well as an explicit
conversion to int,
x == parse(expr) should be true (i.e. equality should be
extensional),
x == parse(x.to_string()) should also hold.
Operations that are invoked with (dynamically) ill-typed
arguments should throw an instance of std::invalid_argument.
This includes arithmetic on string or compound values, iteration
on atomic values (numbers, strings, booleans) and so on.
A few examples of valid inputs (one per line):
(+ 1 2 3)
(eq? [quote a b c] (quote a c b))
127
(concat "abc" "efg" "ugly \"string\"")
(set! var ((stuff) #t #f))
(< #t #t)
Note that parse(expr).to_string() == expr does not need to
hold. Instead, to_string() should always give a canonical
representation, e.g. this must hold:
You have probably seen “a diff” (or “a patch”) before – listing of lines
added to and deleted from a text file. In this assignment, you will implement
a somewhat more generic version of a diff algorithm, which is not limited to
text lines, but compares two series of any equality-comparable elements. Use
the LCS (longest common subsequence) algorithm to compute the differences.
The function you will be implementing, diff, takes two pairs of random
access iterators (begin and end of the two compared sequences) and an output
iterator to which it will write a sequence of so-called hunks. You can
think of hunks as a generalisation of lines in the output of the traditional
diff utility; i.e., each hunk contains a value of one of the original
sequences and a flag indicating whether the value is common to both sequences
or appears only in the first (left) or the second (right) one.
For simplicity, hunks will be represented by a std::pair< side, value >. As
is traditional, “deletions” (i.e., the hunks only appearing in the left
sequence) come before “insertions” (i.e., the hunks only appearing in the
right sequence) that happen in the same place.
enum class side { both, left, right };
void diff( std::random_access_iterator auto l_begin,
           std::random_access_iterator auto l_end,
           std::random_access_iterator auto r_begin,
           std::random_access_iterator auto r_end,
           std::output_iterator< std::pair< side, decltype( *l_begin ) > > auto out );
Imagine a board game similar to the popular Carcassonne series. The game
plan consists of tiles placed on an imaginary square grid (not
necessarily in one continuous group). Each tile represents a piece of
road or an intersection and possibly a village.
For our purposes, every tile is given as a set of directions that may be
used to leave the tile and Boolean information about the presence of
a village. Note that the set of directions may be empty (there are no
roads out of the tile) or a singleton (the tile is a dead end).
The game plan is given as an std::map whose keys are 2D coordinates and
whose values are the tiles. The coordinates are given in an ( x, y )
format, where x increases eastward and y southward.
First, implement a predicate is_valid to check whether all the tiles
are correctly connected, i.e. if we can depart a tile in a chosen
direction, another tile lies one position away in that direction, and
additionally, it is possible to go back from that tile.
We would then like to create a village graph of the game plan. The
village graph is a weighted undirected graph whose vertices are all the
positions of the villages. The graph contains an edge between two
vertices if it is possible to go from one of the villages to the other
without encountering any intermediate villages along the way. The weight
of the edge is the length of the shortest such path, i.e. the number of
transitions between tiles we have to make.
Implement a pure function village_graph to compute the village graph
of the given game plan. The precondition of this function is that the
plan satisfies the is_valid predicate. The graph is represented in
an adjacency-list manner, only instead of lists, we use std::maps,
i.e. if two villages on positions p1 and p2 are connected with an edge
of weight w, it shall hold that g.at( p1 ).at( p2 ) == w.
In this assignment, we will implement a simple virtual machine for a small
subset of the language Lisp. The machine has an unlimited number of
polymorphic registers indexed by integers and capable of storing three types
of values: integers, cons-es and a special value nil. A cons is a pair
( car, cdr ), where both the car and the cdr can again hold integers,
references to other cons pairs, or be nil.
The instruction set is a simple variant of the three-address code. That is,
each instruction has a possible result and at most two operands. The result
is always a register, and the operands can be registers, integer constants,
or nil in a few special cases.
The machine supports the instructions described in the following table. Note that
not all instructions have a result or two operands. A lack of one is
indicated by “-” and implemented by a nil constant in the place of the result
or operand. If an instruction expects a register, the corresponding place is
marked with r. Otherwise, the value v can stand for either a constant or
a register (in which case the value of v is the value of the register with
the corresponding number), or nil in the case of set, set_car and
set_cdr.
opcode    res   op₁   op₂   description
set       r     v     -     set the value of r to v
cons      r     -     -     set the value of r to a new cons ( nil, nil )
set_car   -     r     v     set the car of cons in r to v
set_cdr   -     r     v     set the cdr of cons in r to v
car       r₀    r₁    -     store the car of cons in r₁ to r₀
cdr       r₀    r₁    -     store the cdr of cons in r₁ to r₀
add       r₀    v₁    v₂    store v₁ + v₂ to r₀
sub       r₀    v₁    v₂    store v₁ - v₂ to r₀
mul       r₀    v₁    v₂    store v₁ * v₂ to r₀
jnz       -     v₁    v₂    if v₁ is not zero, jump to the
                            instruction at ip + v₂, where ip is the
                            current instruction number; otherwise
                            do nothing
hlt       -     v     -     stop with the result v
#include <variant>

struct nil_t {};                 // the special value nil
struct reg_t { int reg; };       // operand: register with index reg
struct const_t { int val; };     // operand: integer constant val
using val_t = std::variant< nil_t, reg_t, const_t >;
Your task is to implement the function machine::run which interprets the
given vector of instructions is. The function returns the integer value
computed by the program, i.e. the value given as the first operand to the
first hlt instruction encountered. The rest of the structure machine is
up to you; however, the type needs to be default-constructible.
You may assume that all instructions have an opcode listed in enum
insn::op_t. However, you must deal with several erroneous situations by
throwing an exception. Specifically, throw:
bad_insn if you encounter an instruction that has a result, but it has
no result register (i.e., the given result is either nil or a constant);
overrun whenever the machine overruns the list of instructions for any
reason (e.g. a bad jump, a missing hlt, etc.); and
type_error if an arithmetic instruction (add, sub or mul), jnz
or hlt is given anything other than integer operands; if set_car,
set_cdr, car or cdr receives a non-cons first operand; or if a value
is nil where nil is not allowed (see the description of the
instruction set).
Registers without an explicitly set value store nil. Of course, the solution
is expected to free all allocated memory. However, you may assume that a
cycle between cons pairs cannot happen (that is, programs that cause a
situation where a cons a points to a cons b (in either its car or cdr, even
transitively), while b also points to a, need not be supported).
Your task will be to implement a simple solver for ‘same game’.
The rules of the game are:
the game is played on a rectangular board with
rectangular cells,
a cell can be empty, or occupied by a ‘stone’ of a particular
type (we will use numbers to represent these types, and
std::nullopt to represent an empty cell),
the player can clear an area of 3 or more identical adjacent
stones (each stone has 4 neighbours); all matching stones are
removed at once,
the game ends when no more stones can be removed.
Unlike most versions of the game, to keep things simple, we will
not implement gravity or replenishment of the stones. The scoring
rules are as follows:
the base score of a removal is the value of the stone times
the number of stones removed from the board times the
base-2 logarithm of the same: v · n · log₂ n for n stones
of value v, the entire number rounded to the closest integer,
this score is doubled if at least one of the removed stones
is directly adjacent to a cell that was occupied before the
last round (i.e. it belonged to a stone that was removed in
the last round),
it is also doubled if the last removal was of the same type of
stone (note that conditions 2 and 3 are mutually exclusive).
When the game ends, the scores for each round are summed: this is
the game score.
Write a function samegame which takes two arguments: the initial
board in the form of a single vector of numbers (with
std::nullopt used to represent empty spaces) and the width of
the playing board. You can assume that the number of items in the
list will be an integer multiple of the width. The result of the
function should be the maximum achievable game score.
/* Fold a binary tree: combine each node's value with the results of
 * folding its left and right subtrees; i is the result for an empty
 * (null) tree. */
auto tfold( const auto &t, const auto &f, const auto &i )
    -> decltype( f( t->value, i, i ) )
{
    if ( t )
        return f( t->value,
                  tfold( t->left, f, i ),
                  tfold( t->right, f, i ) );
    else
        return i;
}
auto rekey( const auto &data, auto f )
{
using key_t = std::decay_t< decltype( f( data.begin()->first ) ) >;
using val_t = std::decay_t< decltype( data.begin()->second ) >;
std::map< key_t, std::set< val_t > > result;
for ( const auto &[ k, v ] : data )
result[ f( k ) ].insert( v );