
Judging a Trace Labs CTF

Judging?

After the June CTF I was considering participating in a couple more events in a competitive manner, the goal being to get better at the analytics side of things, since that side is not limited in scope to OSINT for missing people. I like Nicole's Cognitive Stairways of Analysis as a foundational framework (https://threathuntergirl.com/f/the-cognitive-stairways-of-analysis). However, the intent to be competitive implies preparation, and life got in the way; with 2 teammates down I could only hope for a third finish in the top 50. So for the DEF CON edition of the Trace Labs CTF, I applied to judge.

Preparation

Much like competitive participation, judging requires some preparation: you want to be the best judge for your team(s). However, the preparation is not as extensive as it is for competing. In my case it consisted of catching up with the judges' briefing & reading the judges' guide. Following that, it was a matter of preparing a way of (easily) keeping track of the accepted flags. Joplin comes pre-installed on the Trace Labs VM, so I created the notes with that tool. I have since migrated them to Obsidian, as Obsidian stores less metadata within the files. A template for other judges, or for people participating solo, is available on my GitHub. It should be fairly easy to import into Obsidian, Joplin or Notion, as it is a simple collection of markdown files. Links may need to be converted if using Joplin instead of Obsidian.
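As an illustration of the idea (the actual template on my GitHub may be laid out differently, and the team name and URLs below are invented), a per-team note can be as simple as one markdown file with a heading per flag category and a line per accepted flag:

```markdown
# Team Alpha - accepted flags

## Friends
- https://example.com/janedoe - tagged in recent photos with the subject

## Basic Subject Info
- https://example.com/people/subject - confirms date of birth
```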

Having the guidelines quickly at hand meant I could easily copy-paste reasons for rejecting a flag, or check whether a flag was in the right category (Family vs Friends, Basic vs Advanced subject info, etc.). By keeping track of the submitted flags I could quickly spot duplicates.

For future judging I would look into having something else to do, as the flags seem to come in batches whether you have 1, 2 or 3 teams. I often found myself with 10-15 minute interludes between batches of flags, especially at the beginning while the teams were getting warmed up.

The process

You might expect the flags to come pouring in with no break; except, of course, the teams need to start their research, which means that until the first flag comes in you're left finalising your preparations. As you will be verifying flags across various social media platforms, one element of preparation is ensuring you still have access to them all. I left the platforms open in isolated tabs so I could copy-paste URLs with no need for logging in, while avoiding cross-contamination between accounts (Firefox Containers do wonders for this).

The first flag comes in and suddenly the platform makes a lot more sense than it did when competing (which hopefully means fewer rejections at the next competition). The flag includes the team name, so I let the team know in the comms channel that I'm judging for them; this way they can discuss any questions more easily than through back-and-forth flags & rejections. Test the flag (is it a new flag?), validate it (does the flag make sense?), then accept or reject. Upon rejecting, explain why; having the guidelines handy makes it easier to explain why a flag was rejected or its category changed.

You have 2-3 teams; life happens, so some judges will take on 4. If any flags seem odd, or you are unable to verify them, other judges can help, which means you can provide your team with the best possible experience.

Post CTF

The Trace Labs CTFs can expose some of the worst that humanity has to offer. As such, it's important to create an appropriate disconnection from what you've experienced before shutting down. See Nicole's talk at ConINT for strategies on disconnecting.

In my case, it was just catching up on a comedy TV show and crashing for the night. Had there been any significantly traumatic elements, having the other judges there to discuss and externalise any thoughts was, and is, comforting. It is made abundantly clear that if any judge needs to step away at any time, for any reason, it is not a personal failing, and other judges are happy to take on the extra flags. Knowing when and how to seek help is part of the preparation, and the Trace Labs team make sure to remind the judges of their availability should a crisis happen.

Learning outcomes

During competitions I found the platform made it difficult to explain the value of a flag: you can establish the category, the relevance and the supporting evidence, along with 1 attachment. As a judge it actually made a lot more sense:

  • URL: submit the link pointing to the flag.
  • Relevance: explain the flag (i.e. category => sub-category; what can be done with this piece of information?).
  • Supporting evidence: how did you get to this conclusion? If you got to a profile from a different URL, submit it here. As a judge, it is a lot easier to follow your reasoning, and when the intel is compiled into the final report the flag will be extra useful.
  • Attachment: visual aid. A screenshot of a social media account isn't ideal; however, being able to show how you can prove it belongs to a person adds value to the flag.
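To make that concrete, here is what a hypothetical submission might look like (the profile and URLs are invented):

  • URL: https://example.com/janedoe
  • Relevance: Friends => close friend of the subject; comments on the subject's posts weekly and could be contacted for recent whereabouts.
  • Supporting evidence: reached by pivoting from the subject's tagged photos at https://example.com/subject/photos.
  • Attachment: screenshot of a tagged photo showing both accounts together.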

10/10, would judge again


Trace Labs CTF 2021.6

How did it go?

I participated in a team of 4 who didn't know each other, and somehow we made it to the top 50. We weren't quite able to communicate with the judge as well as we would have liked, and had several high-point flags rejected with little feedback as to the decision; but again, still top 50. We kept team communications on Slack, with external tools for sharing the flags we found. In the previous CTF we had tried sharing the flags through Slack, but the volume of data & information being shared was too high for that to be a viable solution. Some things went particularly well, others a little less so; here is my after-action review to help prepare the next CTF a little better.

What worked

If we made it to the top 50, some things must have worked for the team. Looking back, I think it was a mix of the following:

  • Constant communication
  • Flag sharing
  • Preparation
  • Time keeping

It's very easy to say that you have a communication channel set up, but if people aren't actively sharing with the team, you're not communicating. When someone needed a flag looked at, whether with specific tools or otherwise, we were able to provide this in a timely manner and therefore submit a few more flags through pivoting.

We made sure to keep a record of the flags before submitting them (flag, screenshot & intel value). This meant that as we rotated the research between the team, the person taking over could quickly assess the situation and continue digging with a full grasp of what the team had. This is a big step forward from the last CTF, where such a record was missing and a lot of research ended up being done twice or thrice.

A big step forward in the preparation was establishing a workflow, or baseline, of what information was to be submitted within the Trace Labs guidelines. I wish the CTF platform would record not just the category of each submitted flag but also which sub-element it belongs to, as that would certainly make sharing flags a lot easier (and possibly provide more value for less effort on the analytics side), but that is wishful thinking. This workflow consisted of establishing a baseline note with the categories; when the CTF started, all we had to do was duplicate it for each open case. As mentioned, we could rotate cases between us and the entire context was available, as sketched below.
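As an illustration (the category names below are a shortened example; the real guidelines have more), each case note started as a copy of a baseline along these lines, with one row per flag recording the same trio of flag, screenshot & intel value:

```markdown
# Case: <subject name>

## Friends
| Flag (URL) | Screenshot | Intel value |
|------------|------------|-------------|

## Employment
| Flag (URL) | Screenshot | Intel value |
|------------|------------|-------------|
```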

Finally, time keeping. We tried to keep rotations to 45-75 minutes. A big element of each rotation was putting the flags on our communications tool, which also meant research didn't need to be repeated. By doing this we avoided going down rabbit holes, kept our time focused on finding valuable information, and tunnel vision was almost non-existent.

What didn’t work

We made it to the top 50, but why not the top 20? Experience & lack of time working as a team aside, some frictions are bound to arise which lead to time being wasted. In my case a big time waster was a lack of structure & experience with the tools. OSINT != tools; however, there are some tools that make it a lot easier to extract value from single pieces of data. Some of these provide analytics or additional information on social media; others provide enumeration.

A major time sink was Maigret. I had previously downloaded Maigret to a development machine & tested it there. Because it provides a far more complete report than Sherlock or WhatsMyName, I set it up on my OSINT VM, same as on the development machine… except when it came time to search a username, it didn't want to run. I tried setting it up again with pip, as root and as a regular user, but after 5 minutes it was not worth sinking more valuable time into.

The other time sink was browser bookmarks. It's very easy to get a list of bookmarks for social media analytics or for specific search engines / automated processes; it also takes time to curate these and verify how they work. I'd used the OSINT Combine bookmarks during the last CTF, but this time round several sites had changed their interfaces & processes. My lack of recent experience with them made it difficult to use them as tools to pivot from obtained data points.

A related issue was my own web shortcuts. It's very nice to have lots of shortcuts at my disposal to pivot from data sources. However, if these are not maintained & their processes regularly tested, they also become a time sink while looking for the one that provides the wanted service.

Finally, one thing I found missing was a checklist-style procedure for the initial enumeration. One possibility might be using bookmarklets to provide the initial searches and then enumerate from there, as sketched below. This muscle-memory style procedure is an element I will be working on for the next CTF.
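As a sketch of that idea, a few lines of Python can stand in for bookmarklets by opening the usual starting points for a subject in one go; the search templates below are placeholders for whatever a real checklist would contain:

```python
import webbrowser
from urllib.parse import quote_plus

# Placeholder starting points for the initial enumeration;
# swap in the search URLs from your own checklist.
SEARCH_TEMPLATES = [
    "https://www.google.com/search?q=%22{q}%22",
    "https://duckduckgo.com/?q=%22{q}%22",
    "https://www.bing.com/search?q=%22{q}%22",
]

def enumerate_subject(full_name: str) -> None:
    """Open every checklist search for the subject, each in a new tab."""
    query = quote_plus(full_name)
    for template in SEARCH_TEMPLATES:
        webbrowser.open_new_tab(template.format(q=query))

if __name__ == "__main__":
    enumerate_subject("Jane Doe")  # invented subject name
```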

What's in the future?

Most likely I will be participating in yet another CTF, after which I'll be applying to judge a couple of CTFs. This will help with the learning experience, & there seems to be a massive lack of judges in comparison to the number of participants.

I'm also definitely going to be sorting out Maigret on the OSINT VM, & absolutely setting a baseline one week out whereby I can check the tools & bookmarks before the event.
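A minimal sketch of what that pre-event check could look like, assuming Maigret gets reinstalled into its own virtual environment (python3 -m venv plus pip install maigret, sidestepping the root vs non-root pip confusion) and assuming the bookmarks are exported to a hypothetical bookmarks.txt with one URL per line:

```python
import subprocess
import urllib.request

def check_maigret() -> bool:
    """Verify the maigret CLI is on PATH and starts up."""
    try:
        result = subprocess.run(
            ["maigret", "--help"],
            capture_output=True,
            timeout=30,
        )
        return result.returncode == 0
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False

def check_bookmarks(path: str = "bookmarks.txt") -> None:
    """Report the HTTP status of each bookmark; a 403 may just be bot-blocking."""
    with open(path) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(f"{resp.status}  {url}")
        except Exception as exc:
            print(f"FAIL {url}: {exc}")

if __name__ == "__main__":
    print("maigret OK" if check_maigret() else "maigret NOT working")
    check_bookmarks()
```

Run a week out, this leaves enough time to fix a broken tool or replace a dead bookmark before the flags start coming in.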

And if you want to participate?

You don't need any special knowledge or experience to participate in these CTFs. The great advantage of the Trace Labs CTFs is that your own personal experience provides an edge in the research process. Perhaps you speak a language that gives you an additional advantage, or perhaps you happen to be knowledgeable in certain niche subjects which provide additional insight into what information can be extracted from data points.

More details: https://www.tracelabs.org/