How did it go?
I participated in a team of four who didn’t know each other, and we somehow made it to the top 50. We weren’t able to communicate with the judge as well as we would have liked, and had several high-point flags rejected with little feedback on the decision — but again, still top 50. We kept team communications on Slack, with external tools for sharing the flags we found. In the previous CTF we had tried sharing flags through Slack itself, but the volume of data & information being shared was too high for that to be viable. Some things went particularly well, others less so; here is my after-action review to help prepare a little better for the next CTF.
So if we made it to the top 50, some things clearly worked for the team. Looking back, I think it was a mix of the following:
- Constant communication
- Flag sharing
- Time keeping
It’s very easy to say that you have a communication channel set up, but if your members aren’t actively sharing on it, you’re not communicating. When someone needed a flag looked at, a specific tool, or anything else, we were able to provide it in a timely manner, and that pivoting earned us a few more flags.
We made sure to keep a record of each flag before submitting it (flag, screenshot & intel value). As we rotated research between the team, the person taking over could quickly assess the situation and continue digging with a full grasp of what the team already had. This is a big step forward from the last CTF, where this record was missing and a lot of research was done two or three times.
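The record-keeping described above can be sketched in a few lines. Everything here is illustrative — the field names and the pipe-delimited row format are my own assumptions, not the team's actual convention:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FlagRecord:
    """One flag, logged before submission (fields are assumed, per the post:
    flag, screenshot & intel value, plus the case it belongs to)."""
    flag: str           # the flag itself
    screenshot: str     # path or URL to the supporting screenshot
    intel_value: str    # why this matters for the case
    case: str           # which open case it belongs to
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat(timespec="seconds")
    )

    def as_row(self) -> str:
        """Render as a single pipe-delimited line for the shared channel."""
        return " | ".join([self.case, self.flag, self.screenshot, self.intel_value])

# Hypothetical example of what a teammate taking over a rotation would see
record = FlagRecord(
    flag="subject's public fitness profile",
    screenshot="evidence/case42_fitness.png",
    intel_value="confirms city and daily routine",
    case="case-42",
)
print(record.as_row())
```

The point is only that a fixed, self-describing row lets whoever rotates in scan the channel and know exactly what has been found and submitted.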
A big step forward in the preparation was establishing a workflow: a baseline of what information was to be submitted, within the TraceLabs guidelines. I wish the CTF platform recorded not just the category of each submitted flag, but also the specific sub-element it belongs to, as that would make sharing flags a lot easier (and possibly provide more value for less effort on the analytics side), but that is wishful thinking. Our workflow consisted of a baseline template with the categories; when the CTF started, all we had to do was duplicate it for each open case. As mentioned, we could then rotate cases between us with the entire element available.
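A minimal sketch of that "duplicate the baseline per case" step, assuming a simple dict-of-categories structure (the category names are from memory and may not match the current TraceLabs guidelines — check before an event):

```python
import copy

# Assumed category names, loosely based on the TraceLabs flag categories.
BASELINE = {
    "Basic Subject Info": [],
    "Friends": [],
    "Family": [],
    "Employment": [],
    "Day Last Seen": [],
    "Advanced Subject Info": [],
    "Dark Web": [],
}

def open_case(case_id: str, cases: dict) -> None:
    """Duplicate the baseline so each case starts with the same empty structure."""
    cases[case_id] = copy.deepcopy(BASELINE)

cases: dict = {}
open_case("case-01", cases)
open_case("case-02", cases)

# Flags recorded under one case never leak into another
cases["case-01"]["Friends"].append("public friend list on social profile")
print(len(cases["case-02"]["Friends"]))  # → 0
```

`copy.deepcopy` matters here: a shallow copy would share the inner lists, so a flag added to one case would silently appear in every other.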
Finally, time keeping. We kept rotations to 45–75 minutes. A big part of each rotation was posting the flags to our communications tool, which also meant research didn’t need to be repeated. This kept us out of rabbit holes, focused our time on finding valuable information, and made tunnel vision almost non-existent.
What didn’t work
We made it into the top 50, but why not the top 20? Experience and lack of time working as a team aside, some friction is bound to arise, and it costs time. In my case a big time waster was a lack of structure and experience with the tools. OSINT != tools; however, some tools make it a lot easier to extract value from single pieces of data. Some provide analytics or additional information on social media, others provide enumeration.
A major time sink was Maigret. I had previously downloaded Maigret to a development machine and tested it there. Because it provides a far more complete report than Sherlock or WhatsMyName, I set it up on my OSINT VM the same way as on the development machine… except when it came time to search a username, it refused to run. I tried setting it up again with pip, both as root and as a regular user, but after 5 minutes it was not worth sinking more valuable time into.
The other time sink was browser bookmarks. It’s very easy to collect a list of bookmarks for social media analytics or for specific search engines and automated processes, but it also takes time to curate them and verify how they work. I’d used the OsintCombine bookmarks during the last CTF, but this time round several of the sites had changed their interfaces and processes, and my lack of recent experience with them made it difficult to use them as tools to pivot from obtained data points.
The same applied to my own web shortcuts. It’s very nice to have lots of shortcuts at my disposal to pivot from data sources, but if they aren’t maintained and regularly tested, they too become a time sink while I hunt for the one that provides the wanted service.
Finally, one thing I found missing was a checklist-style procedure for the initial enumeration. One possibility might be using bookmarklets to drive the initial research and then enumerate from there. This muscle-memory style of procedure is something I will be working on for the next CTF.
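As a rough idea of what such a checklist might look like in practice — the step names below are purely illustrative examples I made up, not the TraceLabs procedure or my actual list:

```python
# Hypothetical initial-enumeration checklist, worked through in order.
INITIAL_ENUMERATION = [
    "search full name + city in quotes",
    "check username across major platforms",
    "reverse-search any known photos",
    "map known relatives and friends",
    "note employers and schools for pivoting",
]

def remaining_steps(steps: list, done=()) -> list:
    """Return the steps still outstanding, preserving the original order."""
    finished = set(done)
    return [s for s in steps if s not in finished]

todo = remaining_steps(INITIAL_ENUMERATION,
                       done=["search full name + city in quotes"])
print(len(todo))  # → 4
```

Even on paper rather than in code, the value is the same: every data point goes through the same ordered steps, so nothing is skipped and the procedure becomes muscle memory.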
What’s in the future?
Most likely I will participate in yet another CTF, after which I’ll apply to judge a couple of CTFs. That should help the learning experience, and there seems to be a massive shortage of judges compared to the number of participants.
I’m also definitely going to sort out Maigret on the OSINT VM, and I’ll set a baseline one week out so I can check the tools & bookmarks before the event.
And if you want to participate?
You don’t need any special knowledge or experience to participate in these CTFs. The great advantage of the TraceLabs CTFs is that your own personal experience provides an advantage in the research process. Perhaps you speak a language that gives you an additional edge, or perhaps you are knowledgeable in certain niche subjects that provide some additional insight into what information can be extracted from data points.
More details: https://www.tracelabs.org/