Global Software Analysis Competition 2023

Show your analysis skills and win up to $14,000

The contest runs until the first of December

 

Rank | Name | Team Name | Score | Context Sensitivity (ML/BOF/UAF) | Field Sensitivity (ML/BOF/UAF) | Flow Sensitivity (ML/BOF/UAF) | Path Sensitivity (ML/BOF/UAF) | ML | BOF | UAF
1 | Vahagn Vardanyan | | 61 | Yes/Yes/Yes | Yes/Yes/Yes | No/Yes/Yes | Yes/Yes/Yes | 10/10 | 10/10 | 12/12
2 | Sergey Morozov | Yummy | 61 | Yes/Yes/Yes | Yes/Yes/Yes | Yes/Yes/Yes | Yes/Yes/Yes | 10/10 | 10/10 | 12/12
3 | Artem Sinkevich | SCHLANG | 61 | Yes/Yes/Yes | Yes/Yes/Yes | Yes/Yes/Yes | Yes/Yes/Yes | 10/10 | 10/10 | 12/12

 

About the contest

Software has become an integral part of every complex system, permeating various aspects of our lives, from transportation systems and banking systems to government operations, medical devices, gadgets, smart homes, and even toys. Unfortunately, software defects have become a prime target for hackers, who exploit them for illegal purposes. Consequently, ensuring software security has emerged as one of the most critical priorities in the IT industry.

While existing software analysis tools can identify some defects and vulnerabilities in source code, their capabilities are limited. We are launching this contest to advance the field of software security analysis. Its primary objective is to promote the development of complex and efficient algorithms for detecting defects and vulnerabilities. We aim to engage both young talents and experienced professionals, promoting innovation and collaboration in this crucial domain.

Why is this important to the industry?

In an age where digital systems power every aspect of our lives, software security is a paramount concern. The rapid proliferation of cyber threats and attacks underscores the urgent need for robust defenses. The GSAC 2023 contest plays a pivotal role by rallying the brightest minds to fortify software security measures.

This initiative not only elevates the standards of protection but also cultivates a community of experts committed to safeguarding the integrity of our digital world. By addressing these challenges head-on, GSAC 2023 significantly contributes to the resilience and trustworthiness of the entire tech industry.

Why are we running a contest?

The landscape of software security is a battleground where innovation and vigilance are essential. GSAC 2023 is not just a contest; it’s a beacon of progress. By challenging participants to develop cutting-edge software analysis tools, we catalyze breakthroughs that can counter the evolving tactics of cyber threats.

This contest serves as a platform to drive collaboration, inspire creativity, and push the boundaries of what’s possible in cybersecurity. Our mission is to foster a generation of security pioneers equipped to tackle the complexities of the digital age and safeguard the technological foundations of our society.

What other competitions are held in the industry?

The tech world thrives on innovation, and various competitions serve as proving grounds for groundbreaking solutions. GSAC 2023 joins the ranks of esteemed contests that challenge participants to excel in the realm of cybersecurity.

From capture-the-flag challenges to hacking competitions, the industry fosters a culture of continuous improvement by providing platforms where experts and enthusiasts can showcase their prowess. GSAC 2023 stands out by specifically targeting the development of software analysis tools, addressing a critical need in the ongoing battle against cyber threats and solidifying its place among the most impactful industry competitions.

Why is code analysis important?

1. Security Enhancement: Analyzing source code helps identify vulnerabilities and weaknesses that can be exploited by malicious actors. By detecting and rectifying these issues early in the development process, potential security breaches and cyberattacks can be significantly mitigated.

2. Bugs and Defects Detection: Source code analysis helps pinpoint coding errors, bugs, and defects that might lead to software crashes, data corruption, or unexpected behavior. Detecting and rectifying these issues before deployment enhances the software’s stability and reliability.

3. Cost Efficiency: Addressing issues during the development phase is more cost-effective than fixing problems post-release. Source code analysis allows for early identification and resolution of issues, reducing the need for expensive post-release bug fixes and updates.

Winning Categories and Prizes

There will be a total budget of $30K allocated for eight winning categories:

One Gold medal with a $10K award
Two Silver medals with $5K awards each
Three Bronze medals with $2K awards each

The medal winners will be determined based on the combined score across all test cases (30+).
Test cases from the easy, medium, and hard groups will be assigned weights of 1, 2, and 3,
respectively, during the scores combination process.

Additional categories
One Tool Performance award with a $2K prize

The Tool Performance award goes to the top 6 participants. They’ll have their code tested on OpenSSL and FFmpeg. The winner is the one with the fastest, crash-free running time.

One Clean Code award with a $2K prize

The Clean Code award goes to the top 6 participants. We evaluate code readability and modularity through a manual review and test the tools on OpenSSL and FFmpeg. The winner is decided by the review score and crash-free execution on those projects.

Technical Details

Base Tool

The LLVM-based template project is the only permitted base for developing the competition tools. It is designed specifically to provide access to all the analyses the tools need.
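
As a rough illustration only (the template project's actual interfaces and pass-registration glue are not shown here, and the names below are assumptions), a minimal LLVM new-pass-manager pass could look like the sketch that follows. It is deliberately naive: a flow- and context-insensitive heuristic that flags functions calling malloc more often than free.

```cpp
// Sketch only: a deliberately naive LLVM (new pass manager) function pass.
// It counts direct calls to malloc and free and warns when a function
// allocates more often than it frees. A real contest tool would need
// path-, flow-, field-, and context-sensitive reasoning instead.
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
struct LeakHintPass : PassInfoMixin<LeakHintPass> {
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &) {
    unsigned Mallocs = 0, Frees = 0;
    for (BasicBlock &BB : F)
      for (Instruction &I : BB)
        if (auto *Call = dyn_cast<CallBase>(&I))
          if (Function *Callee = Call->getCalledFunction()) {
            if (Callee->getName() == "malloc") ++Mallocs;
            if (Callee->getName() == "free")   ++Frees;
          }
    if (Mallocs > Frees)
      errs() << F.getName() << ": possible memory leak (" << Mallocs
             << " malloc call(s) vs " << Frees << " free call(s))\n";
    return PreservedAnalyses::all();   // purely diagnostic, nothing changed
  }
};
} // namespace
```

A competitive tool would instead track individual allocations across paths, fields, and call sites, which is exactly what the sensitivity-based test cases measure.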

Testing System

The testing system will generate an F1 score for all tests, which will determine the ranking of participants. Additionally, the system will compare the results of the developed tools with those of the CSA.
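
The contest page does not spell out the scoring formula beyond naming F1, so the sketch below simply shows the textbook definition computed from true positives, false positives, and false negatives.

```cpp
// Textbook F1 computation from true positives (tp), false positives (fp),
// and false negatives (fn); assumed here, since only the metric name is given.
// Example: f1_score(10, 2, 0) == 10.0 / 11.0, roughly 0.909.
double f1_score(unsigned tp, unsigned fp, unsigned fn) {
  if (tp == 0) return 0.0;                         // avoids division by zero
  double precision = static_cast<double>(tp) / (tp + fp);
  double recall    = static_cast<double>(tp) / (tp + fn);
  return 2.0 * precision * recall / (precision + recall);
}
```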

Test Cases

There are three categories of test cases: memory leaks, buffer overflows, and use-after-free errors. There will be at least 10 test cases for each error category, organized into three groups: 3 easy, 5 medium, and 2 hard cases. In each category, the test cases exercise a total of 8 test case properties, listed in the table below.

Runtime Limitations for Tests

The tool needs to analyze the given set of about 30 small test cases within a 10-minute timeframe. The tool must finish its work on OpenSSL and FFmpeg projects in less than 10 hours.

Testing hardware

The evaluation will take place on a PC/VM equipped with an Intel Core i7 processor and 32GB of RAM.

Test Case Properties (Easy / Medium / Hard)
Local variable
Field
Inter-procedure
Macro
Pointer arithmetic
Indirect function call
Global variable
Cross files
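
For illustration only (this is not an official test case), here is a small C++ snippet that combines several of the properties above: a global variable, a struct field, and an inter-procedural allocation that ends up leaking.

```cpp
// Hypothetical test case combining several properties: a global variable,
// a struct field, and an inter-procedural allocation.
// The field g_buf->data is never freed, so this program leaks memory.
#include <cstdlib>

struct Buffer { char *data; };

Buffer *g_buf;                                 // global variable

static void init(Buffer *b, unsigned n) {
  b->data = (char *)std::malloc(n);            // allocation stored in a field
}

int main() {
  g_buf = (Buffer *)std::malloc(sizeof(Buffer));
  init(g_buf, 64);                             // inter-procedural data flow
  std::free(g_buf);                            // leak: g_buf->data is never freed
  return 0;
}
```
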
Winner Selection and Ranking

The winners of the contest will be selected based on scores from the testing system and a manual review of the projects. The manual review will be conducted by specialists from the Center of Advanced Software Technologies (Russian-Armenian University). Projects that do not meet the contest policy may be disqualified during the manual review process.

Ranking Determination

To determine the ranking of each team or individual for the Gold, Silver, and Bronze medals, we will combine scores across all test cases (30+). Test cases from the easy group will receive a weight of 1, those from the medium group a weight of 2, and those from the hard group a weight of 3. If two teams have the same final score across all test cases, the ranking will be determined by the overall false positive count, with a lower number being preferable.
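
A minimal sketch of the ranking rule described above, assuming hypothetical per-group scores and a false-positive total per team (the data layout and names are illustrative, not part of the testing system):

```cpp
// Illustrative only: the weights (1/2/3) and the false-positive tie-break
// come from the text above; the struct layout is an assumption.
#include <algorithm>
#include <vector>

struct TeamResult {
  double easyScore, mediumScore, hardScore;  // summed scores per group
  unsigned falsePositives;                   // total across all test cases
};

double combinedScore(const TeamResult &r) {
  return 1.0 * r.easyScore + 2.0 * r.mediumScore + 3.0 * r.hardScore;
}

void rankTeams(std::vector<TeamResult> &teams) {
  std::sort(teams.begin(), teams.end(),
            [](const TeamResult &a, const TeamResult &b) {
              double sa = combinedScore(a), sb = combinedScore(b);
              if (sa != sb) return sa > sb;               // higher score ranks first
              return a.falsePositives < b.falsePositives; // tie-break: fewer FPs
            });
}
```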

The Tool Performance

The Tool Performance award will be presented to a team or individual ranked within the top 6. We will execute the code of the selected top 6 on two large open source projects, OpenSSL and FFmpeg. The winner of this award will be determined by the running time of their tool (with a preference for shorter times) without encountering any crashes.

The Clean Code

The Clean Code award will be presented to the team or individual ranked within the top 6. To evaluate the readability and modularity of the provided source code, we will conduct a manual review. Additionally, their tool will be executed on two large open source projects, OpenSSL and FFmpeg. The recipient of this award will be determined based on the review score and the successful execution of the tool (without any crashes) on the mentioned projects.

The Rules of the Contest

Anyone from any country can participate in this contest, either as an individual or as part of a team with a maximum of four members. Both beginners and experienced specialists are welcome.

Teams or individual experts are required to develop a static analysis tool utilizing the LLVM compiler infrastructure. Additionally, the KLEE symbolic execution engine may be used. The primary objective of the tool should be the detection of memory leaks, buffer overflows, and use-after-free errors.

A template project will be provided as the development base for participants. The format of error reports will be fixed to enable automatic evaluation of the results.

It is forbidden to use existing tools other than those mentioned above.

Developed algorithms should be released under an open source license.

At the start of the contest (September 15th), a special testing system will be provided for participants. Its primary purpose is to automatically evaluate the developed tools and assign scores for ranking. It will include multiple test cases designed to assess properties such as the path, flow, field, and context sensitivity of the tools.
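
As a hypothetical illustration of why these sensitivities matter (not taken from the official test suite), the snippet below contains a use-after-free that occurs only along one path, so a path-insensitive tool may miss it or also warn on the safe path.

```cpp
// Hypothetical snippet: the use-after-free happens only when both branches
// take the flag != 0 direction, so path-sensitive reasoning is needed to
// report it precisely.
#include <cstdlib>

void demo(int flag) {
  char *p = (char *)std::malloc(16);
  if (flag)
    std::free(p);     // freed only on the flag != 0 path
  if (flag)
    p[0] = 'x';       // use-after-free on that same path only
  else
    std::free(p);     // otherwise freed here: no error on this path
}
```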


Schedule
August 21st to October 23rd

Registration for the contest will be open

September 15th

The contest will commence

After September 15th

Participants who join after that date will have less time available for their submissions

December 1st

The final submission of developed tools is due

December 10th

Final results will be announced

December 15th

The awarding ceremony will be held

Show your analysis skills and win up to $14,000