From Pegasus to Predator – The evolution of commercial spyware on iOS [video]

https://media.ccc.de/v/38c3-from-pegasus-to-predator-the-evolution-of-commercial-spyware-on-ios

By cookiengineer

saagarjha | 7 comments | 4 days ago
This is a good overview of the public commercial spyware landscape on iOS over the years, including attributions to several of the high-profile players in this space. Unfortunately, the rest of the talk is a little depressing. You'll note that I have been using words like "public" and "high-profile". Despite these cases coming to light, the actual market is far broader than what was discussed here. Some of the exploits presented could not be conclusively tied to a specific entity or operator. Many attacks go entirely undetected.

The efforts in this space by defensive organizations are laudable, but very, very immature. There's this meme that has crossed over into the software space, of the planes that come back with a lot of holes in them, indicating the regions where extra armor plating is actually the least important. The commercial spyware industry is a lot like that. Those stories you see of people finding exploits via crash logs and iOS databases? That's the lowest hanging fruit. People who know what they are doing are not leaving traces there. And pretty soon those who don't will stop dropping things there too. It's really, really important to understand that the detection well that these people are sipping from will dry up very soon. The proposed solutions from the talk are not nearly enough to help. Some of the things they're asking for (process lists, for example) are already exposed, but we're currently in the Stone Age of iPhone forensics on the defensive side. Those on offense, who are incentivized by money but also now by necessity, will far outstrip any attempts to catch them after the fact :(

cookiengineer | 1 comment | 4 days ago
I am currently trying to combine my EDR agent, which I wrote over the last 2 years for POSIX systems in Go + eBPF, with the Hypatia project [1], which was very promising but is now inactive because the author gave up.

So far the approach still seems promising, but I would need more devs to help me, as I'm contributing in my free time and I won't accept funding for my cybersecurity-related projects, ever.

It would be nice if some other folks felt the same way as you and we could revive the Hypatia project to do better eBPF process analysis, in-memory modification detection, and network analysis via XDP.

[1] https://github.com/Divested-Mobile/Hypatia

saagarjha | 1 comment | 4 days ago
I don't have the time, sorry. Too much on my plate! But I (and I apologize in advance for this) can tell you that one of the reasons why I would not have that much time for this is that I don't think it is fundamentally interesting in the face of a sophisticated adversary. Scanning files and memory or whatever is largely irrelevant in the age of exploits that completely compromise the device, all the way to a privilege level higher than where the actual scanner operates. Signatures fall apart if it is very cheap to evade them (and it is, with trivial modifications to payloads). Typical approaches to catching malware do not apply to zero-day attacks. They may sometimes work, but my point in the comment above was that this is just luck rather than a sustainable practice. Someone who knows you are looking for them can hide and lie far harder than you can possibly imagine. And if they've broken the system, they can use those very protections that were supposed to keep them out to prevent you from going after them. Kind of like how a castle is designed to prevent people from storming it, up until they actually sneak in and all those defensive measures stop you from retaking it ;)
cookiengineer | 1 comment | 4 days ago
But the issues you're describing are literally where the eBPF kernel module chimes in and what the process analysis is about, no?

You can detect a lot of malicious behaviour this way, where programs and processes deviate from their usual behaviour; e.g. trying to access files they're not supposed to.
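
(For context, here is a minimal sketch of the kind of probe being described, in libbpf CO-RE style C. The names and map layout are illustrative, not taken from Hypatia or the commenter's agent: it hooks openat and ships the process name and path to a userspace agent, which is what would actually decide whether the access deviates from the process's baseline.)

    // Sketch: report every openat() so a userspace agent can flag processes
    // touching files they normally don't. Needs a ring-buffer-capable kernel.
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    struct open_event {
        __u32 pid;
        char comm[16];
        char filename[256];
    };

    struct {
        __uint(type, BPF_MAP_TYPE_RINGBUF);
        __uint(max_entries, 1 << 20);
    } events SEC(".maps");

    SEC("tracepoint/syscalls/sys_enter_openat")
    int trace_openat(struct trace_event_raw_sys_enter *ctx)
    {
        struct open_event *e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
        if (!e)
            return 0;

        e->pid = bpf_get_current_pid_tgid() >> 32;
        bpf_get_current_comm(e->comm, sizeof(e->comm));
        bpf_probe_read_user_str(e->filename, sizeof(e->filename),
                                (const char *)ctx->args[1]);

        bpf_ringbuf_submit(e, 0);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";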

saagarjha | 1 comment | 4 days ago
Someone with a kernel-level exploit can completely neuter your eBPF detection. They can make it never return data, or return bogus/benign data. You can try to catch them in that lie, but it's really hard, and even if you do they can just stop your process from being able to report on it. Again, a real-life analogy might help: it is really, really hard to protect yourself against a criminal who has the cops on their side. You're definitely not going to be allowed to go into the crime scene yourself, so you have to trust what they're telling you. All your complaints are going through them. If a dirty cop doesn't like you your life can get a whole lot worse!
cookiengineer | 2 comments | 4 days ago
The point of eBPF is that eBPF receives and processes data _before_ the kernel does.
saagarjha | 1 comment | 4 days ago
That's not really the case. The kernel, much like a police force, is not a monolithic entity; there are various parts of the kernel that get information before others. eBPF does let you "hook" various parts of the kernel, so that you can get an "exec" event from before the exec actually happens (you can't stop it, though, so even this is somewhat dubious). But someone in the kernel can intercept the hook itself, or uninstall it completely. In the police analogy, even if you have a friend in the force that you know is good, they are still part of the police. Even if the first thing they do when they get information is share it with you, there's no guarantee the dirty cop isn't sitting in the mail room ready to shred things before they get it.
stefan_ | 1 comment | 4 days ago
That's just not true? Your eBPF programs run in a VM in the kernel, occasionally JITed, but of course the kernel is free to feed them whatever data it wants.
cookiengineer | 2 comments | 4 days ago
Where does your hypothesis come from? Any architecture chart will show that XDP is processed before all other network modules. If XDP is not offloaded, the driver will always process the XDP hooks before the rest of the network stack is called.

In your model, how else would offloading to a NIC that does not run a kernel even work?

h4ck_th3_pl4n3t | 0 comments | 4 days ago
I think there is a misunderstanding of the two perspectives.

Grandparent's assumption is that the kernel is compromised.

Your assumption is that you can detect malicious behaviour before it happens (and before the kernel is compromised).

tptacek | 1 comment | 3 days ago
XDP receives packets before the network stack does, but not before the kernel; in almost all cases, it's just a hook to process packets off the DMA buffer. None of this matters; the kernel controls XDP; not only that, but there's nothing an XDP program can do without rendezvousing back through the kernel.
cookiengineer | 1 comment | 3 days ago
> XDP receives packets before the network stack does, but not before the kernel; in almost all cases, it's just a hook to process packets off the DMA buffer.

If the kernel really processed and parsed the data packet _before_ eBPF and XDP could, then you could exploit the kernel with a single data packet. That's still the context of the discussion: the hypothetical scenario where you have found a programming error in the kernel's network packet parsing code.

Note: Parsing is not the same as copying, and I used the word parsing specifically on purpose here.

If the kernel does not process or parse the network packet, other than handing the pointer to the previously copied buffer to an eBPF program, then that means a malicious packet can be blocked before anything else in the network stack is affected, right?

So, what do you think happens when I decide to write an eBPF/XDP program that blocks e.g. all TCP packets?

A) The network stack receives the packet

B) The network stack does not receive the packet

If your answer is A, then we have different definitions of the term "network stack".

To me, the network stack is everything that comes _after_ XDP passthrough. And that's outside the influence of my userspace/kernelspace program that tries to protect the system.

Also, XDP is the earliest point in the kernel architecture where you can detect/validate/block malicious network packets. Because let's be real: I am never going to get anything merged into the kernel driver code of my network cards.
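
(For what it's worth, a minimal sketch of that "block all TCP" experiment as an XDP program, in libbpf-style C. It is deliberately simplified: plain IPv4 over Ethernet, no VLANs or IPv6.)

    // Sketch: drop every IPv4 TCP packet at the XDP hook, before the rest of
    // the network stack sees it.
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/in.h>
    #include <linux/ip.h>
    #include <bpf/bpf_endian.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int drop_all_tcp(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;                 /* runt frame, let the stack decide */

        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;                 /* not IPv4 */

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return XDP_PASS;

        if (ip->protocol == IPPROTO_TCP)
            return XDP_DROP;                 /* answer B: the stack never sees it */

        return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";

(Loading and attaching it, e.g. with `ip link set dev eth0 xdp obj drop_tcp.o sec xdp`, still goes through the kernel, which is the part being argued about downthread.)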

tptacek | 1 comment | 3 days ago
I feel pretty comfortable with how XDP works, since I built a CDN forwarding path on it, on multiple different drivers, which was its own special fun. No, I don't think you're right that XDP gives you a fighting chance against CPL0 implants.
cookiengineer | 1 comment | 3 days ago
> No, I don't think you're right that XDP gives you a fighting chance against CPL0 implants.

Detecting a network packet and detecting a rootkit are two very different things.

tptacek | 1 comment | 2 days ago
Have you written any eBPF code? People who haven't might tend to think it's possible to do things solely in eBPF that are not in reality possible. Have you written an XDP program before? XDP is even more limited than eBPF generally (it has almost no helpers exposed). No, you're not going to use XDP to detect or defend against a CPL0 exploit.

That's before we get to the more fundamental issue with the strategy, which is "what network packets would you even be looking for". The ones that say "CPL0 exploit"?

(Fun fact: literally looking for a packet that says "CPL0 exploit"? Super annoying to do in eBPF. No loops!)
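
(To make the joke concrete, a sketch of what such a match looks like in XDP, in libbpf-style C. The verifier wants bounded, historically fully unrolled, loops plus a bounds check before every load, and this version only checks one made-up fixed offset; scanning a whole packet for the pattern is where it gets genuinely painful.)

    // Sketch: match the literal bytes "CPL0 exploit" at one fixed offset.
    // PAYLOAD_OFF is a made-up constant (Ethernet + IPv4 + TCP, no options).
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    #define PAYLOAD_OFF 54

    static const char needle[] = "CPL0 exploit";

    SEC("xdp")
    int match_cpl0(struct xdp_md *ctx)
    {
        unsigned char *data     = (void *)(long)ctx->data;
        unsigned char *data_end = (void *)(long)ctx->data_end;
        unsigned char *p = data + PAYLOAD_OFF;

        if ((void *)(p + sizeof(needle) - 1) > (void *)data_end)
            return XDP_PASS;

    #pragma unroll
        for (int i = 0; i < (int)sizeof(needle) - 1; i++) {
            if (p[i] != needle[i])
                return XDP_PASS;         /* not our "exploit", let it through */
        }
        return XDP_DROP;
    }

    char LICENSE[] SEC("license") = "GPL";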

cookiengineer | 1 comment | 2 days ago
> Have you written any eBPF code?

Yes. [1] I also understand its limitations, e.g. not being able to do DNS compression due to its linearity and the bpf verifier only allowing statically inlined helper functions etc.

I think in general there is a misconception about what I was talking about. Maybe I was too unclear, dunno. I am aware that kernel self-checks cannot be implemented in the kernel itself. That is what I wanted to point out in my previous comment.

I was always talking about whether or not it's possible to protect the kernel from receiving known malicious network packets that could cause an RCE. And I think it is possible.

[1] https://github.com/tholian-network/firewall/blob/master/ebpf...

tptacek | 0 comments | 2 days ago
It's not just "you can't do DNS compression", or "you probably can't do general-case string comparisons". It's much more fundamental: anything you "detect" in eBPF code, even in the extremely rare cases where it's offloaded into the NIC chipset, has to get plugged right back into the kernel to do anything with that data. You can't write a general-purpose eBPF program; eBPF is just a telemetry and packet processing offload.

That eBPF firewall is a perfect example of what I'm talking about. It relies not just on the kernel but on a cooperating userland process to do all the "interesting" bits.
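
(For concreteness about the "cooperating userland process": even a detection written as an eBPF probe, like the openat sketch further up the thread, only matters once an ordinary process loads it and consumes its events, and that process only ever sees what the kernel hands it. A libbpf-style sketch; the skeleton name is made up.)

    // Sketch: the userspace half that loads a probe and polls its ring buffer.
    // All the "interesting" bits (baselining, policy, alerting) live here.
    #include <stdio.h>
    #include <bpf/libbpf.h>
    #include "openat_tracer.skel.h"   /* hypothetical bpftool-generated skeleton */

    static int handle_event(void *ctx, void *data, size_t len)
    {
        printf("event received (%zu bytes)\n", len);
        return 0;
    }

    int main(void)
    {
        struct openat_tracer *skel = openat_tracer__open_and_load();
        if (!skel || openat_tracer__attach(skel))
            return 1;

        struct ring_buffer *rb = ring_buffer__new(
            bpf_map__fd(skel->maps.events), handle_event, NULL, NULL);

        while (ring_buffer__poll(rb, 100 /* ms */) >= 0)
            ;   /* loop until an error (or until something stops us reporting) */

        ring_buffer__free(rb);
        openat_tracer__destroy(skel);
        return 0;
    }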

momento | 1 comment | 4 days ago
The "meme" you refer to is simply survivorship bias: https://en.wikipedia.org/wiki/Survivorship_bias
saagarjha | 0 comments | 4 days ago
Yep. The plane image itself (in the article) is a common meme though
ignoramous | 2 comments | 4 days ago
Thanks.

> ...we're currently in the Stone Age of iPhone forensics on the defensive side.

Since I've seen your comments show a pretty good understanding of AOSP/Android, what's your take on its posture against CSVs? Especially given that Google has been pursuing both legal [0] & technical defenses (at every level of the software stack) against them quite actively.

[0] Ex: https://www.centerforcybersecuritypolicy.org/hacking-policy-...

saagarjha | 1 comment | 4 days ago
I don't know much about the legal stuff you linked but I am generally supportive of most of the things Google is doing to harden Android against CSVs. If you have specific mitigations or policies you were thinking of I can tell you what I think of those (not all of them are necessarily positive) but on the whole the OS has been getting more difficult to hack and I view this as a good thing.
ignoramous | 1 comment | 3 days ago
> specific mitigations or policies

- How much impact will moving sharedlibs (mediaserver, for example) / runtime / libcore to Rust bring? And will all libs need to be moved? Or are the likes of memory tagging, sanitizers, and hardened allocators (Scudo in bionic / Arena/dl/Ros in ART) enough of a defense?

- Now that Android devices have as much compute/RAM as servers do (and fast battery charging is almost ubiquitous), do you see VM sandboxing of apps (like ChromeOS does with Crostini) becoming a thing?

- I believe the drivers (Binder at one stage, GPUs of late) remain a source of exploits; do you see a microkernel like Zircon being folded into the Android kernel? Longer-term, will moving away from Linux (but maintaining compat via emulation, say) become a necessity to combat CSVs?

- I see a bunch of eBPF use since Android 12+. Do you foresee Google providing more APIs to aid forensics / monitoring (like Knox/EMM) without needing root (or abusing Accessibility/VPN/DeviceAdmin/ADB Shell/etc)?

Thanks.

saagarjha | 0 comments | 3 days ago
> How much impact will moving sharedlibs (mediaserver, for example) / runtime / libcore to Rust bring? And will all libs need to be moved? Or will the likes of memory tagging, sanitizers, and hardened allocators (Scudo in bionic / Arena/dl/Ros in ART) be enough of a defense?

I'm not an expert on specific attacks against allocators, but my general rule of thumb, based on what they describe themselves as, is that this helps but does not prevent heap memory corruption from being the source of exploitable vulnerabilities. So I would say that moving to Rust would still be useful.

> Now that Android devices have as much compute/RAM as servers do (and fast battery charging is almost ubiquitous), do you see VM sandboxing of apps (like ChromeOS does with Crostini) becoming a thing?

So Android has a thing called pKVM that was designed, as far as I can tell, to run secret ML models and DRM. When I left they seemed to be looking for more pleasant clients, so it seems reasonable that they will one day actually work to put security-critical services into VMs. But the overhead is quite high, so I assume there will need to be a lot of work put into this if they want it to be practical.

> Believe the drivers (Binder at one stage, GPUs, of late) remain a source of exploits; do you see a microkernel like Zircon being folded into the Android Kernel? Longer-term, will moving away from Linux (but maintaining compat via emulation, say) become a necessity to combat CSVs?

I'm not entirely sure if this is possible, to be honest. Drivers on Android have been a pain point for a while. Google has much more control over their own hardware, of course, but for random other OEMs what typically happens is their drivers are binary blobs that rarely get updated. Making improvements in this area is a major effort.

I think, in the long term, that you can't just apply "microkernel" to the problem of drivers, because some hardware is always going to have broad access for performance reasons. You can stick an IOMMU between things, but some hardware (e.g. graphics) usually bypasses that, and other hardware (e.g. storage, flash ROM) can compromise the entire device if tampered with. So I expect to see greater integration in the stack to try to secure these. Some of this may involve userspace drivers, but some might be more specialized to protect against more specific attacks.

> I see a bunch of eBPF use since Android 12+. Do you see Google actively work to provide more APIs for forensics / monitoring (like Knox/EMM) without needing root (or abusing Accessibility/VPN/DeviceAdmin/ADB Shell/etc).

So I don't think Google will give you arbitrary eBPF; eBPF itself gets exploited a lot, so letting apps upload arbitrary programs is probably too spooky for them. More generally, they are interested in this space, but it's very difficult to provide good APIs, because a lot of the people in this space are selling borderline-scam EDR, and providing the things they want just makes it easier to build spyware. I have no idea what is next, but I can say that when I was there I was creating signals that we felt were very costly to bypass. Unfortunately this is very, very difficult, and the difficulty only goes up as you attribute more capabilities to an attacker.

pxeger1 | 2 comments | 4 days ago
What does CSV mean?
saagarjha | 0 comments | 4 days ago
Commercial Spyware Vendor
technol0gic | 0 comments | 4 days ago
Comma Separated Values... they're hyperspamming it with spreadsheets
sylware | 1 comment | 4 days ago
I can see kiddies in my logs, but the real ones, they are already here and watching.

Nowadays, presuming anything else is unreasonable, unless you want to scam somebody into buying a 'security product'.

saagarjha | 1 comment | 4 days ago
I don't actually think this is true; I just think that waiting for nation-states to show up in your logs is the wrong way to go about it.
sylware | 1 comment | 3 days ago
There is no 'security product': you must engage in the permanent tracking of security flaws and intrusions: do heavily trap your kernel/applications, do man-in-the-middle traffic analysis between known traffic and unknown traffic, do not trust off-the-shelf crypto, etc.

But the basics are not even here: you should not use any compiler; all critical pieces of software should be written in assembly with a very lean SDK (aka extremely stable machine code), namely without abuse of the macro preprocessor.

Everything else is just posture.

saagarjha | 1 comment | 3 days ago
I strongly disagree with this assessment
sylware | 0 comments | 3 days ago
Well, this is for the classic ASIC CPU world.

But really critical stuff should go as deep as custom ASIC and/or FPGA.

(A decade ago, I don't recall exactly, but those "security guys" did not even try with classic CPU ASICs; they were going with custom _simple_ designs on FPGA.)

That was a decade ago... nowadays...

faramarz | 2 comments | 4 days ago
That’s an interesting comment.

I have a sidebar question for you: what phone do you use, if you are comfortable sharing?

I’m wondering if you are biased towards the walled garden of Apple with its perceived security, or Android, or some other.

saagarjha | 1 comment | 4 days ago
I use an iPhone, but that's really more because of personal preference than any particular security posture. I'm not a particularly attractive target for commercial spyware: I'm a guy who likes to post things on the internet, rather than someone with genuine value. I don't interact with and am not in the business of handling exploits. There's not really any reason why you'd want to pick through the details of my private life or silence me. It would be pretty dumb to target me with an exploit, especially considering that I would be more likely than most to find it and burn it. If you have that kind of money to waste, I can think of a lot better ways to spend it than getting my chat messages.

From your question I am guessing that this is a disappointing answer, since you probably wanted me to point to a specific phone and an explanation of why I think it is better. But any honest security professional is incapable of giving you a simple answer. I have a beat-up iPhone 13 mini because I like small phones and Apple is unlikely to make a new one soon. I have Lockdown Mode off because it would make my life more annoying than it needs to be. My threat model does not include sophisticated attackers that would be thwarted by security mitigations present in a new device or paranoid software. Should it be in yours? Well, I can try to help you answer that question. The problem with these attacks is that 99.99% of people will never be targeted by them, but it's not very easy to tell if you're part of the 0.01% (these are made-up numbers, btw). There are a lot of things you can do that make you more or less attractive: for example, if you're a journalist, or a political activist, you might be more concerned. But what if your cousin you're close to is actually a VP at Google? More difficult to say. If you connect all the dots you can build all sorts of models where you should turn this on, regardless of who you are. But the fact is that security is not free, and mitigations almost always come with some sort of tradeoff against usability or cost. You could be mowed down on the street by an assassin tomorrow, but that is generally a bad reason to never leave your house or to walk everywhere in a kevlar vest.

My general advice for people, taking into account practicality and ease of implementation, is to go with a fairly modern phone of their choice that gets regular security updates, so they're not the target of much lower-cost attacks that reuse patched vulnerabilities. I know a lot of the people who work on security at Apple and they're smart people who really care about making things that are good. Whether the walled garden accounts for that, or even if I think they always make the right choices…well, I have Opinions on that but that's for another day. They certainly make mistakes, but they also do good work. If you look at Android you'll see similar, with it pulling ahead in some areas and being behind in others. I've done a lot of research on Apple's security story and worked on Android's, but I can only really rank them on specific facets rather than as a whole. Really I would say, pick up an iPhone or Pixel, be careful about things that are far more likely to hurt you (like, say, phishing), and otherwise just keep a pulse on this area if it interests you. Otherwise I think you have more than enough in your life to worry about.

newuser2022 | 1 comment | 4 days ago
Considering security updates, do you think iOS has an advantage in speed? Apple usually rolls out security updates to all supported iPhones (often for five or six years) nearly instantly, including critical zero-day fixes, which can be deployed overnight. In comparison, while Pixel devices get immediate updates (though they're only available in a handful of countries), Android devices from other manufacturers depend on their update schedules, which can be slow and inconsistent and often end after about three or four years. Even with top players like Samsung, there are week-long delays, especially for non-flagship or older models. In your view, does the pace and longevity of Apple’s security updates tip the balance in their favor, or am I just being biased?
saagarjha | 1 comment | 4 days ago
Yes, absolutely (though Apple does not actually ship anything overnight). In fact when I worked on Android one of the frustrations I ran into was the slow pace to roll out security improvements. While Pixel phones get fixes quickly enough the majority of the world is not actually on Pixel devices, so if you want to ship changes you need to get OEMs on board, and then also have users on devices that are still being supported. A lot of the people we covered would simply not get any improvements until they literally bought a new device, in areas of the world with some of the longest lifecycles for those devices.
prirun | 0 comments | 3 days ago
I switched from Android to iOS because Google forced updates to my phone somehow, even though I had internet access disabled. I only used it as a phone: no email, web browsing, etc. My phone (Blu R2) was a few years old, and after the update, all kinds of stuff was broken. For example, zooming a picture would cause the messaging app to crash. So once that update was installed, I had to enable updates continuously to try to get back to a working phone. But instead, things just kept getting worse. I gave up and bought an iPhone XR on eBay for half retail price.

Most HN folks think diversity is a good thing, and I'm not saying it isn't, but it does have its disadvantages. In my case, I could probably buy new Android phones at least 3x more often than iPhones based on cost, but a lot of people (me) don't want to be fiddling with new phones every year or 2. It was apparent to me that Android updates are not tested thoroughly on older phones. I understand that would be hard because there is a huge variety of hardware, but it's a significant downside of Android IMO.

jimmySixDOF | 2 comments | 4 days ago
the suggestion is that whatever you do use, it should involve a presumption of compromise as the default posture
saagarjha | 2 comments | 4 days ago
I don't think this is a useful model to have, because it's too simple and not actionable. Who is compromising you? What is their cost to doing so? What level of compromise can they achieve? If you just go "you are always hacked" what is your suggestion? That I never touch a computer ever again?
dmbche | 1 comment | 4 days ago
That you treat the computer as compromised.

You need to calculate something? Great, do that.

You need to encrypt files, keep them on your device which is connected to the internet, and want to trust that you are the only person who can access them? Think twice. It can be considered trivial for many attackers to have full access to your device; assume ring 0 access. They could realistically record all keypresses and your screen, no need to decrypt anything.

Need to hide things from state actors? Never touch a computer again and go live in a cave somewhere until they find you.

SirHumphrey | 1 comment | 4 days ago
> Need to hide things from state actors? Never touch a computer again and go live in a cave somewhere until they find you.

I always found this kind of thinking to be a bit unhelpful. Because what is the alternative? Paper? Hope you don't live in the jurisdiction of the country, because a search warrant is not a difficult thing to get, and even an illegal search is not that hard (even outside of the country).

As with everything, people in IT and IT security vastly underestimate the security of IT infrastructure while overestimating the security of non-IT infrastructure. IMO the use of computers makes you much more vulnerable to broad "we monitor members of the public for signs of terrorism" kinds of spying, rather than specific targeted state actor attacks. As was shown recently with the whole pager fiasco, there are many other non-IT vulnerabilities around.

dmbche | 0 comments | 4 days ago
It might not have been that clear, but the "until they find you" in my original comment is worded that way because it's a question of time rather than probability: they're gonna get you. You can try to make it harder (going in a cave, not touching computers) but, realistically, you're getting caught; if not through IT, then through things like the pager attack.

Most people are not worried about state actors having an interest in them, my comment was aiming to clarify that as well.

impossiblefork | 0 comments | 4 days ago
It is actionable. It means you don't use the phone for anything important-- effectively, that you accept that it is useless and that you should use other means of communication.
meisel | 1 comment | 3 days ago
Do you expect that Apple’s bigger security initiatives, like pointer authentication and writing the OS in a memory safe language, will improve the situation?
tptacek | 1 comment | 3 days ago
All these things increase attacker costs. In the current landscape, increasing attacker costs has the effect of shaking out some of the lower-rent players in the market, which may put some targets out of reach of lower-caliber threat actors.

The problem you have over the medium term is that CNE is incredibly cost-effective, so much so that you need something like multiple-order-of-magnitude cost increases to materially change how often it's applied. The alternative to CNE is human intelligence; it competes with literal truck rolls. You can make exploits cost 10x as much and you're not even scraping the costs just in employee benefits for an alternate intelligence program.

What that means is, unless you can foreclose on exploitation altogether, it's unlikely that you're going to disrupt the CNE supply chain for high-caliber state-level threat actors. Today, SOTA CNE stacks are probably available to the top IC/security agencies† of all of the top 100 GNP countries. It probably makes sense to think about countermeasures in terms of changing that to, like, the top 75 or 50 or something.

I think we tend to overestimate how expensive it is for adversarial vendors to keep up with countermeasures. It's difficult at first, but everything is difficult at first; I vividly remember 20-30 extraordinarily smart people struggling back in 1995 to get a proof-of-concept x86 stack overflow working, and when I first saw a sneak preview of ROP exploitation I didn't really even believe it was plausible. As a general rule of thumb I think that by the time you've heard about an exploitation technique, it's broadly integrated into the toolchains of most CNE vendors.

Further, remember that the exploit development techniques and people you've heard about are just the tip of the iceberg; you're mostly just hearing about work done by people who speak fluent English.

† Reminder that customers for CNE vendors usually include many different agencies, invoiced separately, in the same governments.

Jesus_piece | 1 comment | 3 days ago
Reminds me of the book “This Is How They Tell Me the World Ends”, a history of cyber weapons. It’s written from the perspective of a journalist without a comp sci background but delves deeply into the topic of how cyber weapons are procured, priced, and sold to multiple agencies in the same government. It’s unfortunate these are used for exploits and not for reporting bugs or vulnerabilities. Stockpiling exploits only makes everyone less safe.
tptacek | 0 comments | 3 days ago
I mean, I agree, but also it doesn't matter that I agree, because wanting everyone to be altruistic --- no, to share the same notions of "altruism" that we do, since quite a few hypercapable exploit developers don't agree that their home states shouldn't have access to whatever signals intelligence they want --- won't make it so.
Hilift | 2 comments | 4 days ago
What you describe is the artifact of an ecosystem where the consumer is a second-class citizen. These exploits don't work on a desktop or notebook precisely because that ecosystem is obtuse and pretty much the opposite of an extensible platform.
saagarjha | 1 comment | 4 days ago
These exploits work perfectly well on those platforms. The reason you hear less about people targeting them is that they are easier to target with less sophisticated attacks and also not as valuable to attackers. In the cases where they are, your Android "root" exploit becomes a Linux LPE super fast.
jacooper | 1 comment | 3 days ago
I think the difference is that desktop systems are way more transparent about what's going on in the OS compared to mobile OSes, which behave closer to a black box.
saagarjha | 0 comments | 3 days ago
Sure, but my point above is that even with more transparency you won't catch people who are good at hiding.
dagmx | 0 comments | 3 days ago
Are you seriously saying there aren’t exploits on non-mobile platforms?

The platforms that have famously had many significant exploits over the years, and are the cause of many major data exfiltration operations?

Are you pretending that viruses and worms don’t exist? Why do we have things like Windows Defender or antiviruses then?

mu53 | 2 comments | 4 days ago
People really underestimate the damage these tools are doing to society.

Who can afford these tools? What lengths have people gone to earn/keep large sums of money? What problems are society going through right now?

It's just stealing your data, which doesn't seem bad. But now, someone who probably doesn't like you has your location, habits, friends, future events. There are so many things that these people can do to interrupt the lives of journalists, activists, and just regular people with stalkers, and all of those things are covert because "How is your ex-girlfriend's friend supposed to know you made a Bumble profile 2 days ago, find it, and match with you?"

alecco | 0 comments | 4 days ago
> People really underestimate the damage these tools are doing to society.

Even when heads of state are being extorted. Morocco used it against France and Spain. It fizzled out of the news cycle and nothing happened. And those countries later announced multi-billion euro investments in Morocco. If anything, this is a signal that hiring Pegasus is very profitable and they can do whatever they want.

https://en.wikipedia.org/wiki/Pegasus_(spyware)#By_country

tptacek | 0 comments | 3 days ago
Who can afford these tools? The IC and security agencies of every country in the world with a GNI greater than, say, Bahrain†. So: probably like 300-500 different global threat actors (countries like the US have dozens of capable agencies; I assume Bahrain has 1-2).

† I picked Bahrain because they're the smallest country we know for a fact has been a customer of multiple CNE vendors, but that probably means Bahrain plus the next 20-30 countries down the list.

hssuser | 1 comment | 4 days ago
Just finished reading Pegasus today. Fascinating how this will keep evolving and keep getting worse; we might as well act as if we are being surveilled. Too much incentive for there not to be adversarial actors. Link for the lazy - not an affiliate link - https://www.amazon.com/Pegasus-Threatens-Privacy-Dignity-Dem...
tptacek | 2 comments | 4 days ago
The fact that you've heard about it --- heck, the fact that there's a book about it --- should tip you off to the idea that Pegasus is not the SOTA implant. There's a whole marketplace of companies providing these services, both exploit chains and implant stacks, and most of them are firms you've never heard of before.
cylemons | 4 comments | 4 days ago
Dumb question, what is SOTA implant?
tptacek | 0 comments | 4 days ago
An "implant" is like a rootkit; it's all the things you do with a compromise once your exploit chain pays off, and threat actors generally have standardized implant stacks.

"SOTA" is just an abbreviation for "state of the art".

reaperman | 2 comments | 4 days ago
SOTA = “State of the art”

“Implant” would be like any remotely installable persistent exploit that grants access to an attacker over a period of time.

Also, I’m pretty luddite when it comes to highly-hyped AI stuff (in spite of my income being heavily tied to developing AI models), but I have found ChatGPT to be shockingly good at explaining super niche terminology and even jokes. So I do recommend people feel comfortable turning to that if they ever feel uncomfortable asking “dumb” questions publicly.

throwup238 | 0 comments | 4 days ago
@simonw made a custom GPT called the dejargonizer just for that purpose: https://chatgpt.com/g/g-3V1JcLD92-dejargonizer
pockmarked19 | 3 comments | 4 days ago
Or you could just Google it. [0]

That's right. People can just Google things.

[0] https://i.imgur.com/1Yx0m1U.png

cylemons | 0 comments | 4 days ago
I googled "SOTA implant" and got something totally different.
maeil | 1 comment | 4 days ago
Bit of a tangent, but..

Google has been going downhill for many years, but since the December update a few weeks ago it has genuinely become atrocious.

In their quest to combat AI slop (a good idea), they've gone and made domain authority so much more important than the content that now, when you search for A B C, you get 20 pages from very "authoritative" sites that are about A, are slightly about B, and don't even mention C. This is despite plenty of great pages about A B C existing and serving the content we're looking for; we just never get to see them because the places they're hosted on aren't "authoritative" enough. Before, you'd get 5 pages, 1 of which likely had what you were looking for, and maybe 1-2 were AI slop. Now zero of them are what you're looking for, but at least we no longer have the (generally very obvious) slop? Brilliant improvement for the users..

The reason behind this is pretty obvious: most AI slop that had been ranking well likely had 0 ad spend, while the "authoritative" sites tend to have high ad spend. Ads was seeing numbers go down and unhappy customers, and they run the company.

layer8 | 1 comment | 3 days ago
Using verbatim search generally improves the results.
maeil | 1 comment | 3 days ago
When possible, sure, but this is often not viable. Just to give an example: looking for information on a local performance or exhibition. I can go and double-quote the name of it, but that still gives me 20 "authoritative" websites with vague info on last year's edition, not the few smaller local blogs that have info on this year's edition. Even if I add e.g. "2024". This got far worse since the December update, and many times there's no reasonable way to craft an arcane search query that fixes it.
layer8 | 1 comment | 3 days ago
I see. There’s also “after:2023”, but that only works if the pages with last year’s info don’t appear newer to Google. Personally I haven’t run into the issue you describe yet to a degree that I would have noticed, but we also may have different use cases for googling. Conversely, I instead have the issue for certain search terms that Google shows me a page of shopping results before getting to the “authoritative” websites.
maeil | 0 comments | 23 hours ago
I'm sure locale matters. If you're in NYC, there are bound to be authoritative websites with the content you're looking for about almost anything you could possibly want. But the further away you get from the US, the less this is the case.

Though even in the US it largely holds for niche things. It's been a topic on HN for years, how Google has just stopped surfacing small websites with high quality information on a niche topic that can't be found elsewhere, but it's been greatly accelerated since last month.

Are the shopping results you're seeing ranked higher not from authoritative websites (Amazon, Walmart et al)?

gambiting | 0 comments | 4 days ago
Yeah except Google is just so often wrong or pushing crappy SEO results that I honestly think it's worthless nowadays.
js2 | 0 comments | 4 days ago
I don't know either but perhaps "state of the art."
hammock | 2 comments | 4 days ago
You make this comment everywhere Pegasus comes up. Half a dozen times on one submission.[1] Can you name some of the other firms we've never heard of?

[1] https://news.ycombinator.com/item?id=42476828

tptacek | 2 comments | 4 days ago
No.
saagarjha | 0 comments | 4 days ago
Alright then, keep your secrets.
daneel_w | 2 comments | 4 days ago
So it's either because you, too, have never heard of them, or because you're obliged not to. Which one is it? Are you making an educated guess about their presence?
saagarjha | 1 comment | 3 days ago
No, he knows what they are, he's just being annoying
daneel_w | 1 comment | 3 days ago
My questions were rhetorical ;) I've commented on his "patterns" previously, in particular whenever Signal's lack of anonymity is the topic. Apparently it's an offense so one must watch the way they phrase their responses to such statements.
tptacek | 1 comment | 3 days ago
I have no idea how to parse this, but if you thought I was going to give you a list of all the CNE vendors I'm aware of on an HN thread, obviously, no. Why would you care anyways? I know enough to know that I'm speaking factually about the state of the market, but I don't work in it or interact with it in any meaningful way, so you could just as easily say "if you know about that vendor, that means they're not a SOTA CNE vendor either". You might be right!

On this leg of the thread, we're considering basically one issue: is NSO Group one of the {only,most} {important,impactful,sophisticated,whatever} CNE vendors. Is someone seriously arguing that's the case? I'd assume the idea that there are lots of vendors more impactful would be pretty banal, but maybe there really are people on this thread whose understanding of CNE comes entirely from that book linked upthread?

hammock | 1 comment | 3 days ago
What is the purpose of your original comment? Do you disagree with one of the parent's assertions that (this will keep evolving) (keep getting worse) (we might as well act as if we are getting surveilled) (Too much incentive to not have adversarial actors)? It doesn't seem as if you do, yet some would interpret your tone as argumentative, or unsubstantiated alarmism
tptacek | 1 comment | 3 days ago
I would sum my original comment up as "NSO doesn't matter". It's an interesting CCC talk. It's worth digging into what NSO implants do. There's not much bigger-picture stuff to pull out of it.

By all means, sue them, sanction them, proscribe them, whatever it is you want to do to make NSO less profitable, I'm fine with it. But don't pretend that's solving the broad social problem of CNE operations. Everybody does it, and most people don't need NSO to do it; they have other, better vendors to work with.

hammock | 1 comment | 3 days ago
OK that's helpful thanks. I'm curious, is CNE illegal?
tptacek | 0 comments | 3 days ago
It depends on where you are, but generally being a CNE vendor isn't, so long as you're not selling to criminal organizations. If you're doing enough KYC to be reasonably sure you're selling exclusively to agencies of governments your home state doesn't have export controls for, you're probably fine.

Actually conducting operations, totally different story.

vincnetas | 0 comments | 4 days ago
how do you know that? Or are you using any tools for this insight?
VagabundoP | 2 comments | 4 days ago
The only way this is going to change would be state/megastate level action.

Make selling/using these attacks against governments or other users a terrorist-level event. Go after the heads of NSO and their like.

I'd say at that point the companies would be absorbed into the national intelligence infrastructure of the host country and cease to be independent entities who can be bought by the highest bidder. And I know NSO is basically like that now, but...

I'd love to see some criminal sanctions for things that their software has been used for stick.

max_ | 2 comments | 4 days ago
These goons that deal in the spyware market are actually under the auspices of the state.

The state is rotten to the core.

I don't even blame them. The real problem is the lack of philosophy and ethical standards in the tech industry.

Computer Technology is so shallow. Apple for example talks about being a proponent of privacy and at the same time the M1 Computers have built-in terrible spyware that cannot be removed (Apple made sure of this).

Every time I talk about this I am labelled as paranoid or sometimes "stupid". A lot of people simply rationalize this built-in spying as "good".

The bitter truth is that we made our bed. Now we have to sleep in it.

TheJoeMan | 1 comment | 4 days ago
Perhaps it's time to establish an actual Professional Engineer board for "software engineers". This could start with the most safety critical systems, embedded life support code, etc. You then get the other engineering codes/standards to require board-certified programmers for these "critical" systems, and that drives the wedge of larger companies being "forced" to hire engineers who are bound to an ethical discipline. They then would have grounds to stand on for pushing back on shady systems.
tptacek | 0 comments | 3 days ago
And this is going to do exactly what to suppress CNE vendors? You don't even know who they are, and many of them operate entirely in jurisdictions that won't care even a tiny bit about professional licensure.
Infernal | 1 comment | 4 days ago
“M1 Computers have built-in terrible spyware that cannot be removed (Apple made sure of this).”

Can you say more about this?

max_ | 1 comment | 4 days ago
pxmpxm | 2 comments | 3 days ago
Is there a non-schizophrenic version of this article? Nearly impossible to read.
talldayo | 1 comment | 3 days ago
It's hardly schizophrenic, unless you're suffering from the cognitive dissonance of assuming Apple cares about privacy.

But sure, here's a version written by a well-known Apple toady explaining in detail why this is bad and criticism is warranted: https://eclecticlight.co/2021/08/12/is-apple-keeping-its-pro...

mcculley | 2 comments | 3 days ago
What makes Howard Oakley a “toady”?
talldayo | 1 comment | 3 days ago
What makes Sneak a "schizophrenic"?
mcculley | 0 comments | 3 days ago
I did not claim anything about sneak. I think pxmpxm was trying to say something about the typography or layout of sneak's article, not something about sneak.

Does Oakley writing about Apple products make him an "Apple toady" in your opinion? Or is there something he has written that is apologetic of Apple's behavior? I am asking a genuine question here. If you have no serious answer, that is understandable. I may have misinterpreted your words to be serious.

tptacek | 1 comment | 3 days ago
Virtually every state in the world is a customer of a firm that sells exploit chains and implant stacks, so, no, this isn't going to happen.
VagabundoP | 0 comments | 3 days ago
Yeah, we're not going to stop the state-level intelligence services from using these. I'm more concerned about locking out and criminalising the non-state actors and holding the companies liable for their actions.

I think there could be some movement here, but there is certainly a level of protection that national governments are providing for these companies because they want their services.

sneak | 4 comments | 4 days ago
If you are a presenter, please please please please stop it with the “put up slide, read bullet points off the slide, repeat” format. It’s excruciating.
seanhunter | 0 comments | 4 days ago
There’s an excellent essay about how PowerPoint encourages this style and how bad it is for everyone, by Edward Tufte, called “The Cognitive Style of PowerPoint”: https://www.inf.ed.ac.uk/teaching/courses/pi/2016_2017/phil/...
lnsru | 0 comments | 4 days ago
That’s probably how 80% of the presentations I attended in the last couple of years were presented. Open slide, read sentences written there, go to next slide. Not nice.
layer8 | 0 comments | 3 days ago
You generally need to practice a talk a lot to be able to free yourself from the slides, especially if you don’t often do talks. For a one-off talk it’s not always realistic.
darknavi | 5 comments | 4 days ago
Fascinating video with terrible audio for some reason. It made it hard to watch. It fixes itself a few minutes in, at least.
IYasha | 0 comments | 4 days ago
Fortunately, they are aware of the problem and made an announcement:

>> We are aware of audio issues, especially during talks of day 1 (2024-12-27). Some talks have been released in a preview-version, but are still being worked on behind the scenes.

cbg0 | 0 comments | 4 days ago
Here's a somewhat cleaned-up version of the first 25 minutes (used Adobe Podcasts):

https://pub-e2fd917248b04c518e963d141d588b4c.r2.dev/outputfi...

can16358p | 1 comment | 4 days ago
Yup, the audio was so bad that it started to hurt my ears and had to stop. Would love to watch/listen to a version with fixed audio though!
Syonyk | 1 comment | 4 days ago
Sounds fine to me... ? The presenter isn't a native English speaker, but the audio seems entirely standard for conference audio.
darknavi | 1 comment | 4 days ago
For me (English audio) the first ~13 minutes clips pretty hard.
nyclounge | 1 comment | 4 days ago
Likewise here. I even tried the audio-only link; it is also choppy. Can the organizers upload a better version? It is quite difficult to hear what he is saying sometimes.
IYasha | 0 comments | 4 days ago
Yeah, it's not the first time they have sound problems. Really frustrating, especially for non-native speakers.

I was going to blame wireless mics, but they seem to be fixed?..

r9295 | 1 comment | 4 days ago
An idea that I considered implementing was to instrument parser libraries (png/pdf etc.) with AddressSanitizer (for iMessage/Chrome/WebKit) and run the instrumented version for 5% of all parsing operations. If we get enough people to use this, exploits may be easier to discover?
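
(For concreteness, a minimal sketch of the sampling dispatch in C. The function names are hypothetical; it just routes a fraction of calls to an ASan-instrumented build, and the reply below points at existing work in this direction.)

    // Sketch: keep a normal build and an ASan-instrumented build of the same
    // parser, and send ~5% of real-world parses through the instrumented one
    // so memory-corruption bugs crash loudly in the field.
    #include <stdlib.h>
    #include <stddef.h>

    int parse_png(const unsigned char *buf, size_t len);       /* regular build     */
    int parse_png_asan(const unsigned char *buf, size_t len);  /* ASan-instrumented */

    int parse_png_sampled(const unsigned char *buf, size_t len)
    {
        /* roughly 5% of parses take the slower, instrumented path */
        if (rand() % 100 < 5)
            return parse_png_asan(buf, len);
        return parse_png(buf, len);
    }
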
saagarjha | 0 comments | 4 days ago
Google and Apple already do this to some extent: https://arxiv.org/html/2311.09394v2/#S5
motohagiography | 0 comments | 3 days ago
i worked on solutions to this problem from the beginning and at a higher level there's a basic economic reason mercenary mobile malware will always be with us, and these spyware companies will almost always be a viable investment.

first hand: it's an artifact of "small coalition" governments, typically funded by resource wealth, and therefore without sophisticated public services that can support a spy agency who would develop their own inline national surveillance and intelligence infrastructure. it means they will always have to go to the commercial or grey market (like these vendors) to get this spying capability in malware, and eventually there will be diplomatic consequences to cutting some of them out with vulnerability patches.

there's another game at play where as iphones become more expensive and high risk to exploit, spyware providers switch away to things like vehicle entertainment systems, home and office AV and automation, and other personal tech. the market is small, but long term persistent. on the defender side, we just have to find a way to manage.

1oooqooq | 1 comment | 4 days ago
fun fact: on Android, limiting 2G is a premium feature.

buy top of the line android like pixel pro: there's a huge toggle switch "allow 2G".

buy a middle or lower end device, no matter from Samsung, Motorola, etc... and for some inexplicable (heh) reason all companies decided that paying an engineer to apply a patch to remove that toggle from stock android was a solid investment :ponderingfaceemoji

you can still disable it with the very user friendly

   *#*#4636#*#*
and then picking any radio preference list that excludes gsm. (edit: hn swallows asterisks)
ugjka | 0 comments | 3 days ago
nothing happens on my Samsung with that code
Syonyk | 8 comments | 4 days ago
If you use iOS: Turn on Lockdown mode. All your devices. Don't look back. Grant exemptions for individual, known/trusted websites/apps if needed to regain functionality that's critical. Even if you have to whitelist a few websites or apps, it's better than having all the interfaces exposed to all the things.

You eliminate a ton of "complicated, probably exploitable things" in spaces known to be commonly exploited. Oddball image formats, the JavaScript JIT engine, "complex" messaging (FaceTime, Memojis, that... entire ecosystem of weird-not-text-not-image stuff that Apple does), WebGL, WebRTC, link preview processing (I expect a common 0-click exploit chain is through that system), and probably some other stuff.

The phone/tablet is entirely usable without this stuff. Some websites don't render images properly, "that one guy's website" doesn't do the animations, but you can individually bypass Lockdown mode for sites, apps, etc - and you still get the protections for everything else.

And if you're a web developer or app developer, please. Test your website on an iOS device with Lockdown mode enabled. Pick image formats that render properly, it's not hard. And if your app requires something that isn't supported in Lockdown mode, that's fine - but please show some sort of useful error message that indicates that, perhaps, this crash/glitch/whatever is the result of Lockdown mode, and you can disable it by following these steps. Then, also, don't sell to some random purchaser of apps.

But Lockdown mode really, really helps reduce the attack surface. Try it. You'll like it! And it might just help prevent getting you popped by this sort of crap.

... then install QubesOS on your full computers and don't look back. ;)

jeroenhd | 1 comment | 4 days ago
I don't use iOS often but I find lockdown mode interferes very little with apps when I've tried it. Seems like a "don't get hacked" toggle that companies and people doing any kind of public research should just turn on for their phones.

However, I don't have access to Safari on a dev machine and until Apple fixes that, I'm not testing websites on iOS. Sorry not sorry, but even Microsoft Edge is cross platform these days, if Apple wants independent websites to support their browser (especially their own restricted browser profiles) they need to stop making it exclusive to their hardware.

Seems like a good idea to test against if you're already doing Safari testing but I'm not sure if automated tooling supports the toggle.

saagarjha | 2 comments | 4 days ago
You can run WebKit on Linux if you want
jeroenhd | 0 comments | 3 days ago
I do occasionally, but it didn't take me long to find differences in behaviour and support between Linux and iOS. Entire APIs are left unimplemented on the Linux side and things that work on Linux break on mobile for some reason. Codecs (for image, video, and audio) seem to vary wildly between platforms too.

I'm sure Apple could take Gnome Web and turn it into a cross-platform Safari browser if they wanted to, but so far they haven't (and probably don't want to).

realusername | 1 comment | 4 days ago
Safari mobile has different bugs than WebKit. And even different bugs than desktop Safari itself.

As a web developer, I'm also not bothering to test anything on iOS, it's just so much pain that it's not worth it. You need to buy a dedicated device with a specific iOS version and never update it (since you can't even change the browser version on iOS) and as for the debugging tools, they suck so much that I had to resort to Firebug.js a few times in the past.

Yeah no thanks, I just test on Android and hope it's good enough on iOS.

szundi | 1 comment | 4 days ago
How would your clients react after reading this?
realusername | 0 comments | 4 days ago
Not sure, most end users aren't really aware of how it works on their mobile, and it's not like Apple will advertise it either.

Personally I can't really do much about the sad state of the web on iOS myself anyways, I'm not a regulator. The problem goes beyond just the tech side.

aberoham | 2 comments | 4 days ago
AVIF images being automatically disabled by default in Lockdown Mode is painful. That and various automatic family sharing things (such as shared photos or children's app install requests) no longer working has made Lockdown a deal-breaker in some cases where the user doesn’t appreciate the threat.
kdmtctl | 0 comments | 3 days ago
One shouldn't use a locked down device to auto share pictures and approve children app install requests. If there is no need for a separate device for sensitive data then one possibly is not a person of interest and doesn't need a lockdown mode. It is not possible to have comfort and security at the same time.

And a sensitive device should not be easily discoverable, to gatekeep who can actually send anything to it. This also renders it unusable for day-to-day family tasks.

daneel_w | 0 comments | 4 days ago
Do you happen to have a full list of what media formats are still working in Messages when in lockdown mode? Does HEIC/HEIF work? (Pardon the question but I just don't have a second iOS device available for testing this myself.)
captn3m0 | 2 comments | 4 days ago
Even large mainstream app developers are not testing against Lockdown mode. Amazon’s app doesn’t load Customer support chat with it enabled for example.

Also, is JIT disabled for alternative browser engines in EU?

jeroenhd | 0 comments | 4 days ago
Nobody has released an alternative browser engine yet, because of the way the app store works (you'd need specific apps you can only install in the EU next to the worldwide version for instance). I'm sure it'll happen eventually, but it doesn't seem to be a priority for browser makers just yet.
saagarjha | 0 comments | 4 days ago
I don't actually think there is official API to check if the device is in Lockdown Mode. But to be clear this is an academic curiosity for now as nobody is actually shipping an alternative browser engine in the EU that is being targeted by a sophisticated attack.
szundi | 1 comment | 4 days ago
Generally Apple introduces features they think people want to use. So enabling anything that takes away networked features will hurt the user experience in practice. So... people won't do that.

I would rather be interested in ways to detect these software phoning home on my home wifi with my firewall - for now. I might change this stance any moment in the future heh.

nwellinghoff | 2 comments | 4 days ago
Why are more people not saying this? At the end of the day, malware is only useful if it can send information out. So it's, by nature, totally detectable.
dagmx | 1 comment | 3 days ago
How would you inspect mobile data when not on your own wifi?

How would you inspect it if it was piggybacking off a trusted but compromised endpoint? What if the data exfiltration doesn't use a networking protocol you can monitor at all, like Bluetooth beacon transmissions?

The answer to almost any “why are people not saying this” is because it’s usually not that simple.

nwellinghoff | 1 comment | 3 days ago
1) Software-defined radio. You basically hook up an IMSI catcher backed by an internet connection.

2) That is a good example. Much harder to execute. I would argue in that case that everything is totally compromised. But if the hardware vendors provided a low-level interface where one could read and write firmware etc. directly, one could do simple binary comparison analysis.

The point still stands. Figuring out what malware is doing is hard. Detecting that there is something in your system that wasn't there before shouldn't be hard. If the hardware vendors wanted to provide low-level mechanisms to make the process easier, it's totally within the realm of the possible.

E.g. the main responder to this thread makes it seem like an impossible task even for dedicated security defense groups. But with just two mechanisms - 1) network analysis and 2) a low-level ability to read and write firmware/persistent storage - it's totally possible and straightforward.
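
To make (2) concrete, the comparison step itself is almost trivial - here's a rough sketch, assuming a vendor actually let you dump firmware/persistent storage to a file. The file names are made up, and a real dump would need per-device data (serial numbers, logs, counters) normalized out before diffing.

    # Sketch: chunk-hash a firmware dump and diff it against a known-good baseline.
    import hashlib

    CHUNK = 4096

    def chunk_hashes(path):
        # One SHA-256 digest per 4 KiB chunk of the dump.
        hashes = []
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK):
                hashes.append(hashlib.sha256(chunk).hexdigest())
        return hashes

    baseline = chunk_hashes("firmware_known_good.bin")  # hypothetical dumps
    current = chunk_hashes("firmware_today.bin")

    for i, (a, b) in enumerate(zip(baseline, current)):
        if a != b:
            print(f"chunk {i} (offset {i * CHUNK:#x}) differs from baseline")
    if len(baseline) != len(current):
        print("dump sizes differ - layout changed or a dump is truncated")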

dagmx | 0 comments | 3 days ago
And you're suggesting that these are things a normal person can set up themselves and regularly use?
fragmede | 0 comments | 4 days ago
Ransomware, a type of malware, just needs to encrypt your files so you can't access them, no network access required. Totally detectable after the fact, but by that time it's too late.
pxmpxm | 1 comment | 3 days ago
> might just help prevent getting you popped by this sort of crap

The ratio of people that actually need this mode to people publicly advocating for it approaches zero very quickly. I'm quite sure no state actor will spend seven-figure 0-days to get my cat photos.

Syonyk | 1 comment | 3 days ago
My concern isn't so much the high cost super-secret 0-days, as the "about to be useless" 0-days (1-days?) that have just been patched, but the patches are still rolling out to people.

Also, for most people, it's not the cat photos on their phone that are of value. It's the banking credentials, business login 2FA keys, crypto 2FA, email (which allows, for almost all accounts, a password reset), etc.

nxobject | 0 comments | 3 days ago
I agree with that: sadly, the most pressing security risk to the average consumer isn't on their devices, but online services being breached (or disclosing) private information, including passwords! Over the last few years I've gotten data breach notifications from Equifax, AT&T, Ticketmaster, and United Healthcare (via Change). I think the average informed tech user will benefit more from training (and reminders!) to keep their online information private than, say, telling them to avoid previewing complex file formats.
nxobject | 2 comments | 4 days ago
Thanks for the reminder! However, I'm a little pessimistic about whether Apple will keep Lockdown Mode maintained and updated - I only remember it popping up after Pegasus, alongside Apple sending out waves of notifications to exploited users, and both seemed to be just a one-time effort.
saagarjha | 1 comment | 4 days ago
Apple continues to send out exploit notifications and Lockdown Mode continues to grow to include more attack surface. It seems to be actively maintained, as opposed to a lot of other things that Apple has tried.
nxobject | 0 comments | 3 days ago
I'm actually glad to hear that! I guess my underlying concern is that not knowing the full breadth of modern iOS's attack surface might make me complacent when evaluating whether there are any risks that Lockdown doesn't cover, and that being constantly notified of updates might somewhat alleviate that.
DrWhax | 0 comments | 4 days ago
Apple has maintained lockdown mode and sent out regular notifications. It's just not announced publicly.
joejoesvk | 1 comment | 4 days ago
why would a regular user opt in to such a downgrade?
Retr0id | 0 comments | 4 days ago
They won't, and they're not expected nor advised to.
amatecha | 0 comments | 3 days ago
The last time I was looking at the documentation page for Lockdown Mode, all I could think was "this is how the phones should be by default".
omegacharlie | 2 comments | 4 days ago
Considering iOS devices are locked down to hell and back and achieving reboot persistence is extremely difficult, how hard is it to extract a sample of a malware payload in memory for the purposes of forensics?
bflesch | 1 comment | 4 days ago
AFAIK it's extremely difficult. Even white-hat iOS forensics revolves around (ab)using old exploits on unpatched iPhones in order to access data.
saagarjha | 0 comments | 3 days ago
I don't think this accurately describes the state of iPhone forensics today.
saagarjha | 0 comments | 4 days ago
Quite difficult on production devices
amelius | 2 comments | 4 days ago
Question: my colleague has a Mac with Time Machine and thinks he is safe from ransomware. Is that, in a practical sense, true?
monai | 1 comment | 4 days ago
Absolutely not. Time Machine is just an SMB share with a nice UI on the client side. If the backup directory gets encrypted, all the versions of your files will also be encrypted.
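
A crude way to see the exposure, as a sketch (the mount point is made up, and how much this actually matters depends on the setup - network sparse bundle vs. local APFS snapshots, SIP protections, etc.): if the logged-in user can write to the mounted backup target, then so can ransomware running as that user.

    # Sketch: is the backup target mounted and writable by the current user?
    import os

    BACKUP_MOUNT = "/Volumes/TimeMachineBackup"  # hypothetical mount point

    if os.path.ismount(BACKUP_MOUNT) and os.access(BACKUP_MOUNT, os.W_OK):
        print("backup volume is mounted and writable - anything running as "
              "this user could tamper with it")
    else:
        print("backup volume not mounted or not writable from this account")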
amelius | 1 comment | 4 days ago
There is a different opinion here:

https://discussions.apple.com/thread/8282686

Not sure what to make of it.

Is it possible to reach the server side of Time Machine from the Mac itself? Has such a breach been demonstrated?

kstrauser | 2 comments | 3 days ago
My Time Machine server doesn’t run an Apple OS. Someone would have to compromise my laptop and then pivot to separately attack my NAS. A state level actor could probably do that. The people running spray-and-pray ransomware ops almost surely couldn’t, or at least wouldn’t bother.
daghamm | 1 comment | 3 days ago
According to Darknet Diaries, there are gangs that focus on the backup servers first, because with backups in place ransomware is not as effective. There are examples of backup software companies being compromised to get to their clients.

This is for attacks against big companies. But maybe it's just a matter of time before "ordinary" ransomware is updated with a destroy-backups function.

amelius | 0 comments | 3 days ago
But to come back to the original question, is there any evidence against Apple Time Machine being secure?
amelius | 0 comments | 3 days ago
AFAIK, my colleague has a setup with regular Apple hardware and software.
jaktet | 0 comments | 4 days ago
I don't know about Time Machine, but I have some anecdotal experience with Dropbox and ransomware. Essentially, one person's computer was infected, which encrypted all the files for everyone in Dropbox. Because Dropbox had versioning on the files, I was able to restore all the files back to the point before they were encrypted, after removing and wiping the infected machines.

So if Time Machine has versioning then you probably have some options, but I'm not sure I'd call this being "safe" from ransomware.
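
For the curious, the rough shape of that restore with the Dropbox Python SDK would be something like the sketch below - not what I actually ran at the time; the token, path, and cutoff time are placeholders, and in practice you'd loop it over every affected file.

    # Sketch: roll one file back to its last revision before the infection.
    from datetime import datetime
    import dropbox

    dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder token
    INFECTION_TIME = datetime(2024, 1, 15)      # example cutoff (UTC, like Dropbox timestamps)

    def restore_pre_infection(path):
        revs = dbx.files_list_revisions(path, limit=100).entries
        clean = [r for r in revs if r.server_modified < INFECTION_TIME]
        if clean:
            best = max(clean, key=lambda r: r.server_modified)
            dbx.files_restore(path, best.rev)
            print(f"restored {path} to revision from {best.server_modified}")
        else:
            print(f"no pre-infection revision found for {path}")

    restore_pre_infection("/reports/q3.xlsx")   # example path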

Hilift | 0 comments | 4 days ago
"...but as you can see, there is not a single mitigation that Apple implemented to detect commercial spyware samples on the device"