This walk-through is targeted at the performers under contract to create Challenge Binaries (CBs) for the Cyber Grand Challenge (CGC). It guides CB authors through submitting a completed challenge binary and details the CB acceptance criteria, including how submissions will be assessed and the feedback process for remediation. CGC competitor teams may also find this guidance to CB authors of interest.
CBs are network services that accept remote network connections, perform processing on network-supplied data, and interact with remote hosts over those connections. CBs will be used as analysis challenges within the CGC program; CGC teams will develop technology that attempts to locate and mitigate flaws in CBs. Each CB will be implemented as a network service that performs some task determined by the CB author; examples include (but are not limited to) file transfer, remote procedure call, remote login, and peer-to-peer networking. While CB tasks should mirror real-world tasks, the use of real-world protocols is disallowed. CBs may contain methods of operation that mirror challenging cases in real-world network defense: dynamic network resource allocation, high-integrity execution, dynamic execution, etc. Each CB will contain at least one security flaw hidden in the program by the CB author and reachable via network input. Flaws should focus on traditional memory corruption flaw types.
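For orientation, the skeleton of a CB is a simple service loop. The sketch below assumes the libcgc interface (receive, transmit, and the STDIN/STDOUT descriptor constants) provided by the CGC development environment; a real CB would implement a much richer, author-designed protocol:

```c
#include <libcgc.h>

/* Minimal echo-style service loop: read network input, process it,
 * and reply. A real CB would parse an author-defined protocol here
 * and hide at least one memory corruption flaw reachable from this
 * input path. */
int main(void) {
    char buf[256];
    size_t rx, tx;

    for (;;) {
        /* receive() returns non-zero on error; rx == 0 means the
         * remote side closed the connection. */
        if (receive(STDIN, buf, sizeof(buf), &rx) != 0 || rx == 0)
            break;
        if (transmit(STDOUT, buf, rx, &tx) != 0)
            break;
    }
    return 0;
}
```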
Superior approaches will demonstrate knowledge of the problems involved in creating challenge software for the purpose of cybersecurity competition (e.g., binaries of excessive difficulty prevent any competitor from making progress, while binaries of limited difficulty prevent meaningful measurement). Strong CB authors will demonstrate knowledge of the current limits of automated cyber reasoning in terms of program complexity and flaw discovery difficulty; this knowledge is essential in order to create a collection of CBs that spans a difficulty range from challenging to beyond state-of-the-art. The task of creating novel hidden software flaws to challenge the leading edge of program analysis poses significant technology risk. CB authors are expected to overcome this risk with a representative corpus of Challenge Sets. Strong CB authors will cover a history of known software flaws that represent interesting analysis challenges, mapped to specific CWE categories that will be represented within the CS portfolio of the author.
All items required when submitting a CB are source, not binary; all binary components must be built as part of the compilation process.
Note that many of the items detailed in this document are created implicitly, and many of the instructions followed implicitly, by the tools and build process provided by the CGC dev team.
Generally, if a CB author has followed the CB building guide and used the provided tools, the resulting Challenge Set should already be well-formed and simply needs to be submitted as detailed below in the "Submission" section.
A special flag page is mapped into every Challenge Binary at address 0x4347C000 and filled with 4096 pseudo-random bytes uniquely generated for every connection. This special flag page represents precious external data to the CB. A competitor acquiring and reporting four contiguous bytes from this flag page constitutes proof of vulnerability, called a memory-disclosure (Type 2) PoV.
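As a hedged illustration of what a Type 2 PoV target looks like in source, consider the hypothetical handler below (the function and its protocol are invented for illustration, not part of any provided CB):

```c
#include <libcgc.h>

/* Hypothetical flawed handler: the network-supplied index is never
 * bounds-checked, so a remote user can read memory one word at a
 * time. If the out-of-bounds read lands in the flag page at
 * 0x4347C000, the four contiguous bytes echoed back constitute a
 * memory-disclosure (Type 2) PoV. */
static void handle_lookup(unsigned int idx) {
    static int table[16];
    size_t tx;
    int value = table[idx];   /* flaw: missing check idx < 16 */
    transmit(STDOUT, &value, sizeof(value), &tx);
}
```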
Challenge Binary authors are strongly encouraged to make the use of this special flag data intrinsic to their program and service poller. To support this, the flag page data is also made available to the service poller to allow for functional verification of its contents. For example, this flag page could be treated as a database of authentication credentials, a filesystem of secrets, the starting positions of items in a physics simulation, a dictionary, or the description of a maze in a game. Replacement binaries that corrupt the legitimate use of this page should create a measurable failure of functionality, as in the sketch below.
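One way to make the page intrinsic, sketched here under the assumption that the poller recomputes the same value from its copy of the page (the token scheme is illustrative, not a required design):

```c
#include <libcgc.h>

#define FLAG_PAGE       ((const unsigned char *)0x4347C000)
#define FLAG_PAGE_SIZE  4096

/* Fold the entire flag page into a single 32-bit credential used,
 * e.g., as a login token. The poller is given the same page data and
 * can compute the identical token to verify functionality. The raw
 * page bytes are never transmitted: leaking four contiguous bytes
 * would itself be a Type 2 PoV. A replacement binary that breaks
 * this computation fails authentication, a measurable loss of
 * functionality. */
static unsigned int auth_token(void) {
    unsigned int token = 0;
    size_t i;
    for (i = 0; i < FLAG_PAGE_SIZE; i += sizeof(unsigned int))
        token = (token << 5) ^ (token >> 27)
              ^ *(const unsigned int *)(FLAG_PAGE + i);
    return token;
}
```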
Each Challenge Set submission includes:

- Challenge Test module comprised of service polls
- Source code
Run 'make clean' before copying files to the submissions branch.
```
svn copy svn+ssh://server/trunk/
```

The copied Challenge Set should follow the standard layout:

```
AAAAA_DDDDD
|- src
```
The following is an incomplete list of tests and validations that will be performed on the CS after it is submitted. It is intended to indicate the types of tests authors can expect, not to enumerate every check. Please note that this list may be amended at any time to better address the program goals.
Collated statistics will be provided on a dashboard on the submission/svn server, showing anonymized results for CWE coverage as well as how the CB fares against a variety of program analysis utilities. These statistics are intended to keep the CB authors apprised of the overall composition of CGC. These statistics are explicitly covered by the CGC NDA and not for distribution outside of the CGC team.
For information regarding CFE POV types, see understanding-cfe-povs.md
For information regarding testing a CB, see testing-a-cb.md
For information regarding debugging a CB, see debugging-a-cb.md
For information regarding building a CB, see building-a-cb.md
For information regarding automated generation of polls, see understanding-poll-generators.md
For information regarding POVML, see replay.dtd (the DOCTYPE is specified at the top of the example polls)
See the service-template CB for an exemplar CB, including source, libraries, an identified vulnerability, POVs, polls, etc.
For support, please contact CyberGrandChallenge@darpa.mil