Security Archives - Mario Santella
https://www.mariosantella.com/category/security/

How SOC Teams Stop Hackers With a Single API Call
https://www.mariosantella.com/how-soc-teams-stop-hackers-with-a-single-api-call/
Thu, 04 Dec 2025

The article How SOC Teams Stop Hackers With a Single API Call originally appeared on Mario Santella.

Let’s talk about security. Everyone’s obsessed with complex zero-day exploits and APTs. But in the real world, the most damaging breaches often come from the dumbest, most obvious holes you can imagine. We found one. It was a doozy.


The Scenario: A Ticking Time Bomb

Our client? A sharp web agency that manages a portal handling state-level bureaucracy. Think sensitive personal data, the kind of stuff that makes banks nervous. As the platform grew, so did its digital footprint. And so did the target on its back.

The challenge was simple on the surface: Block malicious traffic.

The reality was a nightmare.

They couldn’t just drop a firewall and call it a day. Why?

Because a huge chunk of their legitimate users (bank employees, financial advisors, you name it) work remotely. They use VPNs, corporate proxies, all sorts of services that give them a "Low Score" or "unclean" IP address.

Blocking these IPs meant blocking their own users. A blanket ban was business suicide. So they were stuck, letting everyone in and hoping their password policies were enough. (They weren't.)


Phase 1: Strategic Intelligence, Not Blanket Bans

Here’s the core of the problem, stripped down: An IP from a VPN accessing a public page is normal business. The same IP using a valid admin password to get into /admin.php is a red flag.

The risk isn’t the IP itself, but the context of the access.

So, we stopped trying to boil the ocean. We focused on the one place that mattered: the admin login. Our proposal was to apply a zero-trust policy *only* where it was critical.

We integrated the Criminal IP Enterprise API directly into the admin authentication flow. This wasn’t about blocking everyone with a bad IP; it was about making an intelligent decision for privileged access.

Here’s how Criminal IP became the key: it provides a complete risk profile for each IP.

This is the Criminal IP web interface; for our integration, we'll use their API instead.

The risk score is the first, immediate red flag. A low score for an admin login is already highly suspicious. But Criminal IP goes further, providing contextual tags like is_tor, is_proxy, or is_vpn.
A low score combined with an `is_tor` tag isn't just a risk signal; it's a confirmed attempt to anonymize a privileged action.
This combination of score and context gave us the definitive intelligence needed to act.

We need that data to make a decision. Here is a shortened version of the full JSON object returned by a simple IP lookup.
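The exact shape depends on the Criminal IP endpoint and plan; a shortened, illustrative response (field names inferred from the score and tags discussed above, not taken from the official schema) might look like:

```json
{
  "ip": "203.0.113.42",
  "score": { "inbound": "Critical", "outbound": "Low" },
  "issues": { "is_vpn": false, "is_proxy": false, "is_tor": true }
}
```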

Phase 2: The “Intelligent” Control (PHP Prevention)

Real-time security needed to happen at the application layer. No network device could react fast enough or with enough nuance. So we went straight into the PHP login flow.

The logic was brutally effective:

  1. User submits admin credentials.
  2. System checks: Are the username and password correct?
  3. BEFORE granting access, the system makes a single, fast API call to Criminal IP with the user’s IP.
  4. Based on the response (both score and tags), it makes a decision.

The IP check doesn’t just rely on a score; it relies on contextual flags. If an IP, despite a seemingly “Low” risk score, carried an anonymity flag like TOR, admin access was denied.

Just a deliberately super-basic PHP integration example.
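The original integration was a handful of lines of PHP inside the login handler. As a stand-in, here is a minimal Python sketch of the same decision logic; the endpoint URL, the `x-api-key` header, and the `score`/`issues` response fields are assumptions for illustration, not the documented Criminal IP schema:

```python
import json
from urllib import request

API_KEY = "YOUR_CRIMINALIP_API_KEY"  # assumption: key passed via an x-api-key header
LOOKUP_URL = "https://api.criminalip.io/v1/asset/ip/report?ip={ip}"  # illustrative endpoint

def ip_risk(ip: str) -> dict:
    """The single fast API call: fetch the risk profile for the client IP."""
    req = request.Request(LOOKUP_URL.format(ip=ip), headers={"x-api-key": API_KEY})
    with request.urlopen(req, timeout=3) as resp:
        return json.load(resp)

def allow_admin_login(credentials_ok: bool, profile: dict) -> tuple[bool, str]:
    """Decide AFTER password verification but BEFORE granting the session."""
    issues = profile.get("issues", {})
    # is_vpn could be added here too, depending on how strict the admin policy is
    anonymized = bool(issues.get("is_tor") or issues.get("is_proxy"))
    high_risk = profile.get("score", {}).get("inbound") in ("Critical", "Dangerous")
    if not credentials_ok:
        # Block A: wrong password from a suspicious IP -> just log it and move on
        reason = "block_a" if (anonymized or high_risk) else "bad_password"
        return False, reason
    if anonymized or high_risk:
        # Block B: VALID credentials from a high-risk/anonymized IP -> critical incident
        return False, "block_b"
    return True, "ok"
```

In the real flow, `ip_risk()` feeds `allow_admin_login()`, and a `"block_b"` result writes the high-priority log entry that Wazuh listens for.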

The ideal flow should create two distinct types of blocks, and the difference is everything.

Block A (The Warning): Login failed (wrong password) from a suspicious IP. Okay, just a bot hammering the door. Log it, move on.

Block B (The Critical Hit): Login blocked despite correct credentials because the IP had a critically low score or was flagged as `is_tor`.

This is the moment everything changes.

This isn’t a guess. This is confirmation: someone has a valid password and is actively trying to breach the system from a high-risk, anonymous location.


Phase 3: The Luxury of Tracking (Wazuh)

Blocking the attack is the first priority. But knowing it happened (and knowing it was a real attempt) is what lets you sleep at night.

This is where Wazuh came in. It wasn’t the bouncer; it was the forensic team and the alarm system rolled into one. We integrated it after the fact to handle the fallout from Block B events.

We created a custom, high-priority rule in Wazuh that did one thing: it listened for the specific log entry generated by a Block B event. It ignored all other noise.

When that rule was triggered, it wasn’t just another alert. It was a Level 12 Critical incident fired straight to the SOC team.
The message was crystal clear:
“We have a confirmed compromised credential. An attacker with the right key just tried to pick the lock from a dark alley. This needs immediate investigation.”

It turned a simple “access denied” message into a full-blown identity compromise incident.
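In Wazuh's `local_rules.xml`, a custom rule of that shape might look like this (the rule ID, group names, and the logged `event_type` field are illustrative assumptions, not our client's actual configuration):

```xml
<!-- local_rules.xml: fire a Level 12 critical alert ONLY on Block B events -->
<group name="local,criminalip,admin_auth,">
  <rule id="100210" level="12">
    <decoded_as>json</decoded_as>
    <field name="event_type">block_b</field>
    <description>Criminal IP gate: valid admin credentials used from a high-risk/anonymized IP (confirmed credential compromise)</description>
  </rule>
</group>
```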

Criminal IP alerts

This is an example of a Criminal IP and Wazuh integration. Source: https://wazuh.com/blog/enhancing-threat-intelligence-with-wazuh-and-criminal-ip-integration/


Conclusion: The Lesson Learned

Security isn’t about building higher walls. It’s about building smarter gates.

We didn’t screw over the end-users who needed their VPNs. We let them work in peace. But for the one place that could cause a catastrophe, the admin panel, we intervened.

The success wasn’t just in blocking Tor or low-level VPN. It was in understanding that some IPs are a massive risk factor for privileged access. It’s about giving weight to intelligence.

Sometimes, the most critical security fix isn’t a complex algorithm. It’s just basic common sense, powered by the right data, applied exactly where it hurts the most.

How We Found Potential AI Leaks in 15 Minutes
https://www.mariosantella.com/how-we-found-potential-ai-leaks-in-15-minutes/
Wed, 12 Nov 2025

The article How We Found Potential AI Leaks in 15 Minutes originally appeared on Mario Santella.

Your company has an AI usage policy.

Great.

But a policy on paper doesn't stop attackers; it only passes audits.

Because the real risk isn’t that someone uses DeepSeek or other low-friction AI tools.
The real risk is that they use it with a company email, a personal Gmail, or even a "work alias" like mario.dev.work@gmail.com, paired with a password they haven't changed since 2021 and no 2FA.

And you won’t know there’s even a potential exposure until it’s too late.

Instead of waiting for evidence of abuse, we did what an attacker would do:
we went looking for our own potential data leaks.
Not by spying on employees.
But by testing our external attack surface with the same data an adversary would use.

Here’s what we found in 15 minutes of focused reconnaissance.


Phase 1: Attackers hunt fresh exposure—not history

An attacker won’t start with 2019 breaches. They want recent infostealer logs, because those credentials are most likely still valid, and tied to active sessions.

We filtered recent breach data (November 2025 onward) for any record tied to our organization's naming patterns, whether corporate (`*@company.com`) or personal (`first.last@gmail.com`, `dev.alias@proton.me`, etc.).
No archives. No noise. Just currently exposed credentials. So we used a platform built for exactly that job: DarknetSearch by Kaduu, with its Leak Center tool.
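The filtering step above can be sketched in a few lines. The naming patterns and the record shape are illustrative examples; a real DarknetSearch export would need its own parsing:

```python
import re
from datetime import date

# Naming patterns tied to the organization (illustrative examples)
PATTERNS = [
    re.compile(r"@company\.com$", re.I),               # corporate addresses
    re.compile(r"^[a-z]+\.[a-z]+@gmail\.com$", re.I),  # first.last personal pattern
    re.compile(r"^[a-z]+\.[a-z]+\.(dev|work)@", re.I), # "work alias" pattern
]
CUTOFF = date(2025, 11, 1)  # "fresh" = November 2025 onward

def fresh_org_hits(records):
    """Keep only recent records whose email matches an org naming pattern.

    `records` is an iterable of (email, leak_date) tuples, e.g. exported
    from a breach-data platform.
    """
    return [
        (email, seen)
        for email, seen in records
        if seen >= CUTOFF and any(p.search(email) for p in PATTERNS)
    ]
```

The point of the cutoff is the same one attackers apply: an address in a 2019 dump is noise, the same address in last month's stealer log is a live target.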

Phase 2: “We rotated that password years ago—what’s the issue?”

Among dozens of recent exposures, most had already been flagged by IAM.
But one stood out: a weak, personal-pattern password.

MRossi1980!

Not random. Not temporary. A classic mnemonic root: name plus birth year.
This isn’t an anomaly. It’s a systemic behavior.

And here’s the catch: this password wasn’t just on a personal account.
It was also found—unchanged—on a company email used to register for third-party services, including AI tools.
Why? Because no one forced a password reset on those external platforms. Your corporate rotation policy doesn’t reach DeepSeek, GitHub, or that SaaS tool signed up for in 2021.

Even worse: we found the same root tied to a “dummy” work account—`m.rossi.dev@gmail.com`—created to bypass corporate SSO and access “quick” dev tools.
These shadow identities are rarely monitored, never rotated, and often packed with sensitive context.
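Spotting this kind of root is cheap to automate. A small sketch, where the regex is just a heuristic tuned to the name-plus-birth-year pattern described above:

```python
import re

# Heuristic: a name fragment, a 4-digit birth year, optionally one trailing symbol
MNEMONIC_ROOT = re.compile(r"^[A-Za-z]{3,}(19|20)\d{2}[!@#$%.]?$")

def _strip(s: str) -> str:
    """Normalize a password to its root: drop trailing symbols, lowercase."""
    return s.rstrip("!@#$%.").lower()

def shares_root(password: str, known_root: str) -> bool:
    """True if a leaked password is the known weak root or a trivial variant of it."""
    return _strip(password) == _strip(known_root)

def looks_mnemonic(password: str) -> bool:
    """True if the password follows the name+birth-year pattern."""
    return bool(MNEMONIC_ROOT.match(password))
```

Running `shares_root` across fresh leak exports is how you catch `MRossi1980` surfacing again on an account your rotation policy never touched.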

The Lifecycle of a Potential Leak

2021: First appearance

In a historical breach: `m.rossi@company.com : MRossi1980`.
Weak, but contained within a now-retired system.

2022–2024: Surface-level compliance

Corporate accounts got stronger passwords.
But the same user kept using `MRossi1980` everywhere else—on forums, AI tools, cloud IDEs—because those services never asked them to change it.

2025: Risk lives in the shadows

Today, a fresh infostealer log (November 2025) shows:
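(The actual record stays redacted. Stealer logs typically ship as software/URL/credential triplets, so an illustrative stand-in for this case, not the real entry, would look like:)

```text
SOFT: Chrome (Default profile)
URL : https://chat.deepseek.com/sign_in
USER: m.rossi.dev@gmail.com
PASS: MRossi1980!
```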

No 2FA. No corporate visibility. No compromise—yet.

Using public OSINT (professional profiles, code commits, domain history), we confirmed this alias belongs to an active employee—without accessing private data.

I list plenty of tools for that in a dedicated section of my website: The OSINT Rack.

DeepSeek allows email/password login (not just OAuth), and syncs chat history by default.
If the user ever pasted internal code, API keys, or system logic into that chat… it’s now sitting in an exposed account.
We don’t know if they did.
But an attacker wouldn’t wait to find out.

Phase 3: We stopped there – and that’s the point

We never attempted to access the account.
We never will. It’s unnecessary, unethical, and illegal.

Because this isn’t about proving actual data loss.
It’s about identifying potential exposure before it becomes a breach.

The chain is real:

  1. A mnemonic root persists for years.
  2. Corporate policy only enforces change inside the perimeter.
  3. Outside it, that root secures personal emails, dummy aliases, and unsanctioned AI tools.
  4. Those services become silent vaults of risk—until an infostealer opens them.

From Simulation to Mitigation

This isn’t an HR issue.
It’s a security design flaw.

The response should be strategic:

  • Mandate password managers for all accounts—not just corporate ones.
  • Inventory shadow identities: `*@gmail.com`, `*@proton.me`, etc., used for work.
  • Provide approved, sandboxed AI tools—so employees don’t create risky aliases to get work done.
  • Treat external email exposure as identity risk: monitor fresh infostealer data for any address tied to your org’s naming patterns.

So, for the future…

The most dangerous password isn’t the one you use today.
It’s the one you think is dead—but lives on in a Gmail alias, a forgotten SaaS account, or an AI chat history.

Real defense isn’t about hoping your policy works.
It’s about verifying—every day—that it holds up in the real world, where attackers start with a 15-minute search…
and end with your data.
