Project 6: Secure Log File

ENEE 457 Section 0101, Fall 2018

Part 1 (Build it) Due 12/14
Part 2 (Break it) Due 12/21


In this project, you will implement a secure log to describe the state of an art gallery: the guests and employees who have entered and left, and which persons are in which rooms. The log will be used by two programs. One program, logappend, will append new information to this file, and the other, logread, will read from the file and display the state of the art gallery according to a given query over the log. Both programs will use an authentication token, supplied as a command-line argument, to authenticate each other. Specifications for these two programs and the security model are described in more detail below.

You will build the most secure implementation you can; then you will have the opportunity to attack other teams’ implementations.

You may work in teams of up to three people. Choose a team name and sign up in the spreadsheet. Make sure you include both your team name and the names of all team members in your writeups.

You can write your implementation only in C. There is some basic starting code available.

This project will operate using infrastructure from the Build it, Break it, Fix it contest developed at UMD. Please note that your grade for the project and your score in the contest are not the same (although they are likely to be correlated). Details of grading and scoring are below. Scoring well in the contest is good for bragging rights and for extra credit.


Part 1: Build it

You should submit:

  • Your implementation, including all your code files and your makefile. Even though we will have access to your git repo (details below), you will submit a “final” version to Canvas/ELMS for grading.
  • A design document (PDF) in which you:
    • Describe your overall system design in sufficient detail for a reader to understand your approach without reading the source code directly. This must include a description of the format of your log file.
    • List four specific attacks you have considered and describe how your implementation counters these threats. (Please note the relevant lines of code for each defense.) If you were not able to prevent a threat you identified, completely or at all, you may still mention it: describe any partial mitigation you implemented, and explain anything you wanted to implement but were unable to, for whatever reason. Please clearly distinguish what you have implemented from what you would have implemented.

Part 2: Break it

We will assign you three teams’ implementations to examine. You should submit:

  • A vulnerability analysis document (PDF). Choose one of your assigned implementations and describe:
    • Any attacks you found, including a high-level summary and enough detail for someone to replicate the attack
    • Any vulnerabilities you found but were unable to exploit for whatever reason.
    • If you did not find any attacks or vulnerabilities, describe how you looked for attacks.
  • Also submit any code you wrote to implement your attack. Make sure the vulnerability analysis explains how to use your code to launch the attack.

If you demonstrate a working security break, then your vulnerability analysis document only needs to include your description of that one vulnerability; you don’t need to go any further.

Please note that while correctness bugs count for the contest, they do not count for your vulnerability analysis – for this analysis, you must identify security-relevant issues. For points in the contest, you are welcome to also look at other implementations beyond the three you have been assigned.

If one of your assigned implementations doesn’t work well enough to analyze, please request an alternate implementation from an instructor.


Grading

Part 1 will be worth 100 points:

  • 30 points for the automated correctness tests.
  • 70 points for your design document:
    • 10 points for your description of your approach
    • 15 points for each of the four specific vulnerabilities you defend against. For maximum points, your defense should be correct, fully implemented, and well explained. Well-explained defenses that are not fully implemented will receive partial credit.
    • Extra credit (non-refundable, as always) is available for one extra attack/countermeasure beyond the original four.
  • The testing infrastructure will include performance tests, as well as correctness tests for optional features. Faster implementations (according to the performance tests) and optional features will receive bonus points in the contest score. Performance and optional tests do not count toward your project grade.

Part 2 will be worth 25 points:

  • A successful attack with a well written vulnerability analysis will receive full credit.
  • If you do not have a successful attack, then you must write a complete vulnerability analysis for one of the three implementations you were assigned. This analysis must explain either why you could not implement the attack you identified, or provide a convincing argument that there were no exploitable vulnerabilities.

Contest scores are primarily for fun and bragging rights. We may assign some limited extra credit to teams who finish at or near the top, but contest scores do not affect your overall grade directly. You can feel free to attack your friends’ implementations without worrying that you are hurting their GPA.


Programs

Your team will design a log format and implement both logappend and logread to use it. Each program’s description is linked below.

  • The logappend program appends data to a log
  • The logread program reads and queries data from the log
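Since your design document must describe your log file format, it can help to start from a concrete record layout. The sketch below models one event as a fixed-size C struct; all field names and sizes are illustrative assumptions based on the flags in the oracle example (the departure case assumes a flag that mirrors -A), not part of the spec:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative log-entry layout: a sketch, not a required format.
 * A real design must also protect these records (see Security Model). */
typedef struct {
    uint32_t timestamp;   /* -T value */
    uint8_t  is_employee; /* 1 = employee (-E), 0 = guest (-G) */
    uint8_t  is_arrival;  /* 1 = arrival (-A), 0 = departure */
    int32_t  room_id;     /* -R value, or -1 for the gallery itself */
    char     name[64];    /* NUL-terminated visitor name */
} log_entry;

/* Fill in an entry; returns 0 on success, -1 if the name is too long. */
int make_entry(log_entry *e, uint32_t ts, int employee, int arrival,
               int32_t room, const char *name)
{
    if (strlen(name) >= sizeof e->name) return -1;
    memset(e, 0, sizeof *e);            /* zero padding bytes too */
    e->timestamp = ts;
    e->is_employee = (uint8_t)employee;
    e->is_arrival = (uint8_t)arrival;
    e->room_id = room;
    strcpy(e->name, name);
    return 0;
}
```

Zeroing the whole struct before filling it keeps compiler padding bytes deterministic, which matters once you hash or encrypt the serialized records.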

logread contains a number of features that are optional. If you do not implement an optional feature, be sure to print unimplemented to standard output.
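A simple way to satisfy this requirement is to route every unsupported optional query through one fixed response. This is only a sketch; the function names are illustrative:

```c
#include <stdio.h>

/* The exact response the spec requires for optional features you
 * chose not to build. */
const char *optional_response(void) { return "unimplemented"; }

/* Example use inside logread's query dispatch. */
void handle_unsupported_query(void) { puts(optional_response()); }
```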

See the examples page for examples of using the logappend and logread tools together.

Security Model

The system as a whole must guarantee the privacy and integrity of the log in the presence of an adversary that does not know the authentication token. This token is supplied on the command line to both the logappend and logread tools. Without knowledge of the token, an attacker should not be able to:

  • Query the logs via logread or otherwise learn facts about the names of guests, employees, room numbers, or times by inspecting the log itself.
  • Modify the log via logappend.
  • Fool logread or logappend into accepting a bogus file. In particular, modifications made to the log by means other than correct use of logappend should be detected by (subsequent calls to) logread or logappend when the correct token is supplied.
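One common way to meet the integrity requirement is to store a MAC of the log contents, keyed by a value derived from the token, and verify it before parsing anything. The sketch below shows the verify-before-parse flow with a constant-time tag comparison. The compute_mac function here is a deliberately toy stand-in (an FNV-1a mix) so the example is self-contained; a real implementation should instead call a vetted keyed MAC such as HMAC-SHA256 from a crypto library:

```c
#include <stddef.h>
#include <stdint.h>

/* Compare two tags without leaking, via timing, where they first differ. */
int mac_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;            /* 1 if identical, 0 otherwise */
}

/* Toy stand-in MAC: NOT secure. Replace with a real keyed MAC
 * (e.g., HMAC-SHA256) in an actual submission. */
void compute_mac(const uint8_t *key, size_t keylen,
                 const uint8_t *data, size_t datalen, uint8_t out[32])
{
    uint64_t h = 1469598103934665603ULL;           /* FNV-1a basis */
    for (size_t i = 0; i < keylen; i++)  { h ^= key[i];  h *= 1099511628211ULL; }
    for (size_t i = 0; i < datalen; i++) { h ^= data[i]; h *= 1099511628211ULL; }
    for (int i = 0; i < 32; i++) {                 /* expand state to 32 bytes */
        out[i] = (uint8_t)(h >> (8 * (i % 8)));
        if (i % 8 == 7) h = h * 6364136223846793005ULL + 1ULL;
    }
}

/* Verify-before-parse: returns 1 only if the stored tag matches. */
int log_is_authentic(const uint8_t *key, size_t keylen,
                     const uint8_t *body, size_t bodylen,
                     const uint8_t stored_tag[32])
{
    uint8_t tag[32];
    compute_mac(key, keylen, body, bodylen, tag);
    return mac_equal(tag, stored_tag, 32);
}
```

The constant-time comparison avoids an early-exit memcmp, which could otherwise let an attacker learn a valid tag byte by byte through timing. Note that a MAC alone addresses integrity; the confidentiality bullet above additionally requires encrypting the log body.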


An oracle reference implementation is provided to demonstrate the expected output of a series of commands run on logread and logappend. Contestants may run the reference implementation by going to the team participation page on the website and clicking on “Oracle Submissions”. Here is an example of the expected input format for the oracle, structured as a series of quoted command-line calls:

	"logappend -T 1 -K secret -A -E Fred log1",
	"logappend -T 2 -K secret -A -G Jill log1",
	"logappend -T 3 -K secret -A -E Fred -R 1 log1",
	"logappend -T 4 -K secret -A -G Jill -R 1 log1",
	"logread -K secret -S log1"

Build-it Round Submission

Each team should initialize a git repository on gitlab and share it with the enee457 user (granting at least Developer access). You MUST NOT make your repository public; doing so will be treated as an academic integrity violation.

Create a directory named build in the top-level directory of your repository and commit your code into that folder. Your submission will be scored after every push to the repository.

To score a submission, an automated system will invoke make in the build directory of your submission. The only requirements on make are that it must function without internet connectivity, it must return within ten minutes, and it must build from source (committing binaries only is not acceptable). We will provide a sample makefile for C.
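A minimal makefile meeting these requirements might look like the following. This is only a sketch: it assumes your sources sit in build/ next to the makefile, and the file names (common.c) and compiler flags are illustrative:

```makefile
CC      = gcc
CFLAGS  = -O2 -Wall -Wextra -std=c11

all: logappend logread

# common.c is a hypothetical shared module (log format, crypto helpers).
logappend: logappend.c common.c
	$(CC) $(CFLAGS) -o $@ $^

logread: logread.c common.c
	$(CC) $(CFLAGS) -o $@ $^

clean:
	rm -f logappend logread

.PHONY: all clean
```

Note that nothing here fetches from the network, and the default target produces the logappend and logread executables directly in the build directory, as the scoring system expects.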

Once make finishes, logread and logappend should be executable files within the build directory. An automated system will invoke them with a variety of options and measure their responses. The executables must be runnable from any working directory.

Break-it Round Submission

For the break-it round, you will provide test cases that demonstrate a bug in one of your target implementations. Bugs can include correctness violations, crashes, integrity violations, and confidentiality violations. Further details will be forthcoming closer to the beginning of the break-it round.

Contest Server