Working in information security means building controls, developing technologies that ensure that sensitive material can only be accessed by people you trust. It also means categorising people into "trustworthy" and "untrustworthy", and trying to come up with a reasonable way to apply that such that people can do their jobs without all your secrets being available to just anyone in the company who wants to sell them to a competitor. It means ensuring that accounts you consider to be threats can't do any damage, because if someone compromises an internal account you need to be able to shut them down quickly.
And like pretty much any security control, this can be used for both good and bad. The technologies you develop to monitor users to identify compromised accounts can also be used to compromise legitimate users who management don't like. The infrastructure you build to push updates to users can also be used to push browser extensions that interfere with labour organisation efforts. In many cases there's no technical barrier between something you've developed to flag compromised accounts and the same technology being used to flag users who are unhappy with certain aspects of management.
If you're asked to build technology that lets you make this sort of decision, think about whether that's what you want to be doing. Think about who can compel you to use it in ways other than how it was intended. Consider whether that's something you want on your conscience. And then think about whether you can meet those requirements in a different way. If management can simply compel one junior engineer to alter configuration, that's very different to an implementation that requires sign-offs from multiple senior developers. Make sure that all such policy changes have to be clearly documented, including not just who signed off on them but who asked for them. Build infrastructure that creates a record of who decided to fuck over your coworkers, rather than just blaming whoever committed the config update. The blame trail should never terminate in the person who was told to do something or get fired - the blame trail should clearly indicate who ordered them to do it.
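The sign-off-and-audit pattern above could be sketched roughly like this. Everything here is a hypothetical illustration, not anything from an actual system: the `PolicyChange` structure, the two-approver threshold, and all the names are made up. The point is only that the record carries the requester alongside the approvers, so the blame trail terminates in whoever ordered the change.

```python
# Minimal sketch: a policy change only takes effect once it carries enough
# senior sign-offs, and the audit record always includes who *requested*
# the change, not just who approved or committed it.
# All names and the threshold are hypothetical.

from dataclasses import dataclass, field

REQUIRED_SIGNOFFS = 2  # hypothetical: two senior engineers must approve


@dataclass
class PolicyChange:
    description: str
    requested_by: str  # the person who ordered the change
    signoffs: list[str] = field(default_factory=list)

    def sign_off(self, senior_engineer: str) -> None:
        # One sign-off per person; signing twice doesn't count double.
        if senior_engineer not in self.signoffs:
            self.signoffs.append(senior_engineer)

    def approved(self) -> bool:
        return len(self.signoffs) >= REQUIRED_SIGNOFFS

    def audit_record(self) -> dict:
        # The blame trail terminates at the requester, not the committer.
        return {
            "change": self.description,
            "requested_by": self.requested_by,
            "signed_off_by": list(self.signoffs),
            "approved": self.approved(),
        }


change = PolicyChange("flag accounts in #union-organising", requested_by="VP, Legal")
change.sign_off("senior-eng-a")
print(change.approved())  # False: one sign-off is not enough
change.sign_off("senior-eng-b")
print(change.approved())  # True: threshold met, and the record names the requester
```

The design choice doing the work here is that `requested_by` is a required constructor argument: there is no way to create an approvable change without naming who asked for it.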
But most importantly: build security features as if they'll be used against you.
no subject
Date: 2023-01-23 06:27 pm (UTC)
We've seen laws used that way these past few decades.
Re: Secure boot?
Date: 2023-03-31 01:15 pm (UTC)
Of course, other architectures now have secure boot as well, especially ARM. But the primary motivation there is quite clear: locking in the user. Microsoft pushing the UEFI abomination beyond x86 was probably a later, additional motivation.
So UEFI has three nearly watertight boot phases, which leads to a lot of duplicated common code (on top of the BIOS vendors' code, the Intel-led EDK2 reference implementation, and the Intel/AMD reference code, each of which may bring its own compiler toolchain): a heavy, hard-to-maintain patchwork.
For ARM, there are also three stages of "lifting the rocket" before you reach your own boot loader (usually u-boot; thank god UEFI remains uncommon there). So, three times the common boot hardware support code as well, with many restrictions (boot SPI compatibility is a nightmare, and every secure-zone access must go through SMC calls), and, let's say, a code quality that would not be tolerated if it didn't come from a third party (the same goes for UEFI).
For what benefit? If you want to sell anything that can go into network infrastructure in the US, for instance, you have to provide all your sources and build infrastructure on a server in the USA that the NSA can access, so that they can rebuild all your firmware exactly (the only permitted differences being, say, build timestamps in some binaries)!
If these guys managed to rebuild and tap Cisco routers' firmware while it was in delivery to targeted customers, it's not (only) because they are true geniuses: the laws are written to ease their work.
On my side, I'm still waiting to see any boot loader that is highly customised for the hardware it runs on get hacked, unless it was designed to expose (back)doors to a Microsoft OS, since without everything needed to rebuild it an attacker can't remain unnoticed and stable.
no subject
Date: 2023-01-24 01:37 pm (UTC)
Thank you Matthew for the SecureBoot support in Linux.
no subject
Date: 2023-01-24 11:25 pm (UTC)
How is that not an example of the owner of the hardware running what they want to run?
You don't own the hardware. If you want to own some hardware, put down a few bucks and buy some.
no subject
Date: 2023-01-25 07:37 am (UTC)
Gatekeeping 'undesirables' out of the banking system; keeping patients with chronic 'timewasting' conditions from accessing medical care; excluding developers with dyslexia or sensory deficits from online-testing shortlists for interviews...
All of these things are available for use against the dangerous fools who implemented them.
And don't get me started on the dangers of flagging a company or personal bank account with 'suspected money laundering': the same mechanism, and the same opportunity for malice without review, redress, or accountability, is available to anyone in the know who wishes to denounce a neighbour for housing, employing, or being an undocumented immigrant.
I doubt that the systems that do this are competently secured against malicious misuse. Or indeed, intentional 'misuse': the purpose of a system is whatever the system actually does.