
Trace Labs CTF 2021.6

How did it go?

I participated in a team of 4 who didn’t know each other and somehow made it to the top 50. We weren’t quite able to communicate with the judge as well as we would have liked, and had several high-point flags rejected with little feedback on the decision, but again: still top 50. We kept team communications to Slack, with external tools for sharing the flags we found. In the previous CTF we had tried sharing the flags through Slack, but the volume of data & information being shared was too high for that to be viable. Some things went particularly well, others a little less so; here is my after-action review to help prepare a little better for the next CTF.

What worked

If we made it to the top 50, some things must have worked for the team. Looking back, I think it was a mix of the following:

  • Constant communication
  • Flag sharing
  • Preparation
  • Time keeping

It’s very easy to say that you have a communication channel set up, but if the members aren’t actively sharing on it, you’re not communicating. When someone needed a flag looked at, whether with specific tools or otherwise, we were able to provide this in a timely manner and therefore submit a few more flags through pivoting.

We made sure to keep a record of the flags before submitting them (flag, screenshot & intel value). This meant that as we rotated the research between the team, the person taking over could quickly assess the situation and continue digging with a full grasp of what the team had. This is a big step forward from the last CTF, where this record was missing and a lot of research ended up being done twice or thrice.

A big step forward as part of the preparation was establishing a workflow, or baseline, for what information was to be submitted within the TraceLabs guidelines. I wish the CTF platform recorded not just the category of each submitted flag, but also the specific sub-element it belongs to, as that would certainly make sharing flags a lot easier (and possibly provide more value for less effort on the analytics side), but that is wishful thinking. The workflow consisted of establishing a baseline note with the categories, and when the CTF started all we had to do was duplicate it for each open case. As mentioned, we could then rotate cases between us and the entire record was available.
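As an illustration, duplicating that baseline for each open case is easy to script. A minimal sketch, where the category names and file layout are my own assumptions rather than the TraceLabs guidelines:

```python
# new_case_notes.py - duplicate a baseline flag-tracking note for each
# open case. Category names below are illustrative, not the official
# TraceLabs flag categories.
from pathlib import Path

BASELINE = """# Case: {case}

## Friends
- flag | screenshot | intel value

## Employment
- flag | screenshot | intel value

## Day Last Seen
- flag | screenshot | intel value
"""

def create_case_notes(cases: list[str], out_dir: str = "ctf-notes") -> None:
    notes = Path(out_dir)
    notes.mkdir(exist_ok=True)
    for case in cases:
        note = notes / f"{case}.md"
        if not note.exists():  # never overwrite research in progress
            note.write_text(BASELINE.format(case=case))

if __name__ == "__main__":
    create_case_notes(["case-01", "case-02"])
```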

Finally, time keeping. We tried to keep rotations to 45-75 minutes. A big element of each rotation was posting the flags to our communications tool, which also meant research didn’t need to be repeated. By doing this we avoided rabbit holes, kept our time focused on finding valuable information, and tunnel vision was almost non-existent.

What didn’t work

We made it to the top 50, but why not the top 20? Experience & lack of time working as a team aside, some friction is bound to arise that leads to time being wasted. In my case a big time waster was a lack of structure & experience with the tools. OSINT != tools; however, there are some tools that make it a lot easier to extract value from single pieces of data. Some of these provide analytics or additional information on social media, others provide enumeration.

A major time sink was Maigret. I had previously downloaded Maigret to a development machine & tested it there. Because it provides a far more complete report than Sherlock or WhatsMyName, I set it up on my OSINT VM the same way as on the development machine… except when it came time to search a username, it refused to run. I tried setting it up again with pip, as root and as non-root, but after 5 minutes it was not worth sinking more valuable time into.
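For next time, installing into an isolated virtual environment should sidestep the root/non-root pip mess entirely. A minimal sketch, assuming the PyPI package is named maigret and a POSIX-style venv layout:

```python
# bootstrap_maigret.py - install Maigret into its own virtual environment
# so a broken system pip or root/non-root mismatch can't take it down.
# A sketch: assumes the PyPI package name is "maigret" and a POSIX layout.
import subprocess
import venv

ENV_DIR = "maigret-env"

# Create an isolated environment with pip available inside it.
venv.EnvBuilder(with_pip=True).create(ENV_DIR)

# Install Maigret into that environment only, never system-wide.
subprocess.run([f"{ENV_DIR}/bin/pip", "install", "maigret"], check=True)
```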

The other time sink was my internet bookmarks. It’s very easy to get a list of bookmarks for social media analytics or specific search engines / automated processes; it takes time, however, to curate these and verify how they work. I’d used the OsintCombine bookmarks during the last CTF, but this time round several sites had changed their interface & processes. My lack of recent experience with them made it difficult to use them as tools to pivot from obtained data points.

A related issue was my own web shortcuts. It’s very nice to have lots of shortcuts at my disposal to pivot from data sources. However, if these are not maintained & their processes regularly tested, they also become a time sink while looking for the one that provides the wanted service.
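A pre-event sanity check over the saved links could be as simple as the sketch below. The one-URL-per-line file format is an assumption, and a 200 response only proves the page is reachable, not that the workflow behind it still works:

```python
# check_bookmarks.py - flag bookmarks that no longer respond before the CTF.
# Assumes a plain text file with one URL per line; a 200 response does not
# guarantee the tool still works, only that the page is reachable.
import requests

def check_bookmarks(path: str = "bookmarks.txt") -> None:
    with open(path) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        try:
            status = requests.get(url, timeout=10).status_code
        except requests.RequestException as exc:
            status = f"error: {exc}"
        print(f"{status}\t{url}")

if __name__ == "__main__":
    check_bookmarks()
```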

Finally, one thing I found missing was a checklist-style procedure for the initial enumeration. One possibility might be using bookmarklets to run the initial research and then enumerate from there. This muscle-memory style procedure is an element I will be working on for the next CTF.
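In the meantime, even a small launcher script can approximate that muscle memory by making the first pass identical for every case. A sketch; the search URLs are illustrative examples, not a vetted OSINT list:

```python
# enumerate_username.py - open a fixed set of starting searches for a
# username so the first pass is always the same. The URL patterns below
# are illustrative examples, not a curated OSINT list.
import sys
import webbrowser
from urllib.parse import quote

SEARCHES = [
    "https://www.google.com/search?q={q}",
    "https://duckduckgo.com/?q={q}",
    "https://www.bing.com/search?q={q}",
]

def enumerate_username(username: str) -> None:
    for pattern in SEARCHES:
        webbrowser.open(pattern.format(q=quote(username)))

if __name__ == "__main__":
    enumerate_username(sys.argv[1])
```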

What’s in the future?

Most likely I will be participating in yet another CTF, after which I’ll be applying to judge a couple of CTFs. This will help with the learning experience, & there seems to be a massive lack of judges in comparison to the number of participants.

I’m also definitely going to be sorting out Maigret on the OsintVM, & absolutely setting a baseline one week out so I can check the tools & bookmarks before the event.
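That baseline check could itself be scripted. A minimal sketch; the tool list and the assumption that each tool answers --version are mine, so adjust it to your own kit:

```python
# preflight.py - run one week before the event: confirm each CLI tool is
# on PATH and still executes. Tool names and the --version flag are
# assumptions about my own kit; adjust to whatever you actually use.
import shutil
import subprocess

TOOLS = ["maigret", "sherlock"]

for tool in TOOLS:
    if shutil.which(tool) is None:
        print(f"MISSING\t{tool}")
        continue
    result = subprocess.run([tool, "--version"], capture_output=True, text=True)
    status = "OK" if result.returncode == 0 else f"EXIT {result.returncode}"
    print(f"{status}\t{tool}")
```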

And if you want to participate?

You don’t need any special knowledge or experience to participate in these CTFs. The great strength of the TraceLabs CTFs is that your own personal experience provides an advantage in the research process. Perhaps you speak a language that gives you an additional edge, or perhaps you happen to be knowledgeable in certain niche subjects which provide additional insight into what information can be extracted from data points.

More details: https://www.tracelabs.org/


Trace Labs CTF 2021.2

What is the Trace Labs CTF?

Trace Labs organise regular OSINT Capture the Flag events, crowdsourcing data about missing people where law enforcement has requested the public’s assistance. Points are awarded to the submitting team according to the value of each flag and the difficulty involved in finding it. The advantage is that by creating a larger community there is more varied experience among the people researching, so data points which may have been missed by the top teams are still handed back to law enforcement. I managed to participate in the February 2021 CTF and the team ranked in the top 50 (out of 290!) despite all members being complete rookies.

Preparing for the CTF

In order to isolate the research from my machine, I went with the Tracelabs VM, imported the nameFinder tool to streamline looking up additional sources for accounts (I would later discover one of its dependencies was not working as expected, meaning all hits had to be tested manually), cloned the VM for the CTF and called it quits on the technical side.
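With the automated verification broken, a first-pass script can at least triage the hits before the eyeballing. A sketch with illustrative URL patterns; many sites return 200 even for nonexistent profiles, so a hit here is a lead, not a confirmation:

```python
# verify_hits.py - first-pass check of candidate profile URLs when the
# enumeration tool's own verification is broken. URL patterns are
# illustrative; a 200 status is a lead, not a confirmation, since many
# sites serve a page even for missing profiles.
import requests

PATTERNS = [
    "https://github.com/{u}",
    "https://www.reddit.com/user/{u}",
]

def verify_hits(username: str) -> list[str]:
    leads = []
    for pattern in PATTERNS:
        url = pattern.format(u=username)
        try:
            if requests.get(url, timeout=10).status_code == 200:
                leads.append(url)
        except requests.RequestException:
            pass  # unreachable site: neither confirmed nor ruled out
    return leads

if __name__ == "__main__":
    print(verify_hits("example-user"))
```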

For the team, I found someone interested in participating through a local hacker group, recruited a mate on the other side of the world, and then we found the 4th on the Trace Labs Slack. I got some good feedback from other participants there for last-minute prep; Heather’s was particularly extensive.

During the CTF

The CTF platform allows you to see the active cases, how many flags you’ve submitted and any flags that have been rejected by the judge. Rejections include comments from the judge explaining the decision, which makes it easier to keep track and resubmit if necessary.

We used a group DM for team communications, and added a second group DM with our judge as soon as one was assigned to us. Our judge was particularly proactive in their communications, which was very useful in guiding our research.

After the CTF

Once the CTF platform closed, the flags were counted & validated and the top teams & MVO awards were announced. After some celebration with the participants on the good work done that night, I promptly hit the hay, as it had been an intense (but highly rewarding) all-nighter.

Hindsight == 20/20?

In terms of personal preparation, I think there is little more I could have done between signing up & the contest, simply due to lack of experience in the field. Following this CTF I definitely feel more confident exploring different routes, and I have a larger toolkit to pivot from existing data points.

As a team, Mon hit it on the head: having appropriate workflows in place is the way to climb the leaderboard. We went at it as 4 individuals, simply divvying up the cases without sharing the data points. I’m certain we could have obtained many more flags had we used appropriate data-sharing tools. The issue then becomes, of course, trusting the sharing service not to leak sensitive data. A solution I’d like to try before the next CTF is using Obsidian MD with Syncthing to share the notes across team members.