
I work from home in the EU as a freelancer for a US startup.
A few days ago, an email came out of the blue, demanding that I install an "agent" from a company named "Drata"* on my laptop. The motivation is that my client badly wants a SOC 2 certification.
I have worked as a developer for more than 30 years. Tiny shops. Startups. Major league. I have never even heard of anyone putting agents on developers' laptops.
I'm pretty pissed off. So are the teams I work with.
Is this the new normal now?
Just for the record: I don't have credentials to production systems, and I don't work with production data. I just figure out how to transform dreams into code, I write parts of that code, and then I fix it as needed.
* Drata (https://drata.com/about) is on a "Mission to Help Build Trust Across the Internet". Their business model (in my case) seems to be to take money from companies to spy on their employees/contractors, and then they sell the employees'/contractors' private information to "targeted advertising". When I confronted them about this, they replied: "Feel free to reach out to your Drata administrator internally with concerns. Do note, that when your company contracted with Drata, any edits or redlines they provided will prevail for all employees of your company." - basically telling me to just bend over and smile.
If you are a freelancer then your contract should allow you to do work for others. In which case, your response to this client has to be "Sorry, but my business laptop potentially has data from other clients on it. I can't let you install this monitoring agent without violating my contractual confidentiality agreements with those other clients. I always maintain client confidentiality and will do the same for you. If you want to ship me a dedicated laptop for your engagement, I would be happy to install whatever you want on it."
I think this is the correct advice, but keep in mind that procuring the laptop might be a difficult thing for them to do bureaucratically. On the other hand, you renting a laptop and charging them for it would be pretty simple; presumably your contract covers expenses, and all you'd need is an OK from the manager.
I worked for a big company that had various spyware thingies installed on all the company laptops, but they let you use your personal mobile devices for work (including iPad which was pretty nice) -- and wanted you to install their preferred spyware on it. I didn't do that but I expected they would eventually tell me to do it or stop using my personal iPad.
It seems like now that it's technically feasible, big corporate IT managers want their spyware of choice running everywhere. Someday you will have an arm full of Apple Watches, one for each client. You should embrace this future and price it in.
Arguably, the new laptop being difficult to procure is a feature, not a bug. It serves as a deterrent to installing that agent, if it's easier to just make an exception.
It also creates friction for the client, making it less likely that you get paid.
Wouldn't the source of the friction be them asking to install spyware on your personal machine?
There are multiple points of friction in this scenario. Their asking you to install spyware is friction for you, your asking them to buy a laptop is friction for them.
Charge them for laptop rental. As another poster said. If you have other clients you are betraying their trust by installing the spyware.
Or they reimburse the consultant when they procure their own dedicated laptop for the client's work.
And hey, when the gig is over you got a new laptop that just needs to be formatted and it's all yours!
At my agency we have a client that always ships us locked-down laptops (healthcare space, so understandable). Thing is, this client, while very good at getting the laptops out to you, is horrible at actually getting them back and pretty much lets you keep them... I have 4 MacBook Pros sitting on a shelf behind me, all from this client.
Are they all still locked to an MDM profile? If so, while they're yours, at any point you could lose access to them, and that sucks. There's a ton of laptops that have been up on eBay that ended up having MDM. We bought a few by accident for our non-profit. Fortunately we were able to find the original owners, and they were gracious enough to remove them. We were also lucky they hadn't been disabled. Thing is, most people don't realize that when you restore/set up that laptop, it will pull that profile regardless of how you wipe or clear it.
TL;DR: Make sure MDM profiles are gone and the laptops are wiped before doing anything personal :D
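On a used Mac you can check for a leftover enrollment yourself; a minimal sketch (the `profiles` tool ships with macOS, and on other systems this just reports that the check doesn't apply):

```shell
#!/bin/sh
# Sketch: check a second-hand Mac for a lingering MDM/DEP enrollment before
# setting it up. Purely illustrative; a clean result here doesn't guarantee
# the device-enrollment record on Apple's side is gone.
if command -v profiles >/dev/null 2>&1; then
    # Prints whether the machine is (or is eligible to be) MDM-enrolled.
    profiles status -type enrollment
else
    echo "MDM check skipped: 'profiles' not found (not macOS)"
fi
```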
Whew... Linux and Windows don't have that, at least not Windows yet. You never know, though... they might put UEFI behind a subscription and bring the MDM thing over to the Windows/Linux side.
Used to work in healthcare; we'd use Computrace, which operated at the motherboard firmware level. It could remotely brick a device and also automatically install additional payloads in Windows once an internet connection was established.
Doesn't this put you in a position of potential liability, if, say, someone breaks into your home/agency, steals those 4 laptops, and leaks personal healthcare data off them?
Plus the 25% management fee on top of the price :-)
> You should embrace this future and price it in.
Should we, though? What you and the parent outline is the most sensible way of accommodating it while minimising the invasion of privacy; however, I question the underlying reasoning, and therefore whether or not we should encourage it. What perceived gains are to be had beyond mere box-checking for accreditations? And in those cases, why is it part of those accreditations; what is the intended effect? I can think of a few, but they are all flawed or attempting to enforce something impossible:
1. Preventing leaking code/IP (But if you can't trust them they could just as easily take a picture of the screen, capture the HDMI, copy the drive, even log their own keyboard... there are always side channels unless you physically control the environment).
2. Preventing them from doing something malicious... But if they are writing code for you and they are untrustworthy, isn't it already game over?
3. Bean counting: monitoring time spent at the keyboard, etc., which we all know is not an accurate metric of productivity for cognitive work.
4. Similarly to #1 and #2, unintentional breaches or security issues, i.e. you trust the person but not their device or their ability to secure their own device. In which case spyware seems wholly inadequate to cope with the situation; if you are serious about this, you should be controlling the hardware and OS (which lots of orgs with highly sensitive info do).
In all these cases spyware is futile. Am I missing something?
Enforcing/auditing sane security settings on the device.
Very much required for compliance, zero trust, protection of IP, and foundational to a reasonable security plan.
I think I added #4 after your comment. Which is essentially my response. It seems like a very weak measure, at the cost of privacy considering it's the worker's personal device... If our solution is to require separate devices anyway, then spyware seems like a waste of time, they should be providing secured hardware/OS.
On second thought, this _is_ the answer... they are making a compromise on security, it's an economic decision. Maybe it makes sense from a business perspective: check some boxes, get a bit of security (not much) for almost nothing - but as you can probably tell, I think it's both pretentious and disrespectful.
The ship has totally sailed on whether it's a best practice to instrument machines employees use to conduct work, in the name of compliance and security. That's an utterly standard control, and unless you have a remarkably potent new argument against doing so, arguing that companies shouldn't do this sort of thing is kind of uninteresting. If anything, the prevailing sentiment (for better or worse, mostly worse) is that companies should be doing more of this, not less.
Yes, it's definitely an economic decision. They're going to run this type of software on their own fleet and want it on everything connecting to the network. If you're willing to run it on your own device, that saves them the hardware cost.
That said, a lot of users _want_ to use their own devices (maybe they have better equipment, maybe it's less locked down, maybe they don't want duplicates). It's not sane for the business to allow a device that is more likely to be compromised and/or have poor security hygiene on the network.
I'm a fan of privacy but... At least on my team, we're definitely not spying on you, we're making sure you have a password, encryption, antivirus, and updates installed before you can connect to resources. It's shocking how many people don't have authentication enabled and run as root, if they have a choice, on their home system. That said - we could flip switches and do a lot more spying if it was mandated :/
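For what it's worth, the report-only version of those checks can be tiny. A minimal sketch, assuming macOS/Linux hosts (the probed commands like `fdesetup` and `lsblk` are platform-specific, and this is illustrative, not any vendor's actual agent):

```shell
#!/bin/sh
# Illustrative posture-check sketch: verify settings and report, nothing more.
# Each check degrades to a warning when it can't be verified on this system.

check() {
    label=$1; shift
    if "$@" >/dev/null 2>&1; then
        echo "PASS  $label"
    else
        echo "WARN  $label (could not verify)"
    fi
}

# Disk encryption: FileVault "On" (macOS) or a dm-crypt/LUKS volume (Linux).
check "disk encryption" sh -c \
    'fdesetup status 2>/dev/null | grep -q On || lsblk -o TYPE 2>/dev/null | grep -q crypt'

# Day-to-day work not running as root.
check "non-root user" test "$(id -u)" -ne 0
```

The key design point is that the script only emits pass/warn lines; it never captures keystrokes or file contents, which is the distinction being drawn above.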
Why don’t you write an open-source “agent” then, with no remote code execution capability? I doubt people would mind running some open-source bash script that hardens their devices.
Anything but this, and it’s clear you’re just evil.
> procuring the laptop might be a difficult thing for them to do bureaucratically
Not if they are at the point where they need SOC2 certification, and where they install agents on their employees' computers (and want to extend that to their suppliers).
As others have said this area of SOC2 is often about "are you actually following the policies you've set" rather than "you have to do things this way" – most of our staff have company owned devices with MDM and an agent which mainly tracks installed applications versions, that Disk Encryption is turned on, and the OS patch level.
But there are a couple of our contractors that rejected this for exactly this reason (they had other clients). For one of them, we just bought him a laptop that he does all our work on (it cost less than 1 day of his time, so it was a no-brainer), and for the other, we realized we didn't have to as long as he did periodic (documented) reporting of screenshots showing his OS version being up to date, Disk Encryption enabled, and appropriate screen saver settings. And they legally attest that they make a good-faith effort to delete any sensitive data off their laptops (if they ever download it).
We've talked to a couple of auditors and that seems to be sufficient and pragmatic as it accomplishes the same goal.
This.
Every company I’ve ever worked at, and that includes very large ones, will have legal, HR, and finance tell you at some point that “you must do X”. Sometimes X is no big deal and you do it. Sometimes it’s hard, and you ask the business to fund it or remove the requirement. Sometimes it’s nonsensical in your context and at that point the job becomes understanding why X is a requirement and how you can satisfy that requirement in some more pragmatic way.
At the end of the day, these functions are there to support the business.
I had an employer that, once or twice a year, would send out mandatory agreements we were "required" to sign -- under threat of dismissal. (I don't think this was legal at all.)
One day they sent out a particularly onerous "agreement" that said that we agreed not to use a phone while driving a car and doing so would be cause for termination etc.
I went down to HR and asked them if they were really trying to regulate what I was doing in my personal vehicle with my personal phone, and they replied "No, it's only meant for when you're in a company vehicle or using a company phone."
But the agreement itself clearly stated any phone any car.
The workaround I came up with was this-- a friend of mine and I swapped forms, and signed each others names. HR had their illegal, unenforceable agreement, and life moved on.
I got my "revenge" 6 months or so later. HR was frantically calling me for some reason-- I was stuck on the freeway as is our custom in Orange County. I ignored them for something like two hours, and explained that "I was stuck in traffic and as they were no doubt aware, we are prohibited by company policy from using our phones while operating a vehicle."
The HR gal was visibly pissed off, but to be fair, I could have been fired for answering that phone call.
You're in the US --- California, to boot. I'm not sure what you accomplished by making the agreement "unenforceable", as your employer does not need to secure your agreement to terminate you for virtually any reason. Discovering that you text and drive in your spare time, off hours, is something they'll likely have no trouble firing you for, unless you have an employment contract that somehow gives you tenure except for for-cause firing (almost nobody has one of those).
Perhaps I didn't explain myself well.
I am not a lawyer, obviously, but what I meant was, threatening someone to sign a legal document can't be legal, even if it's your employer.
Sure it can? All sorts of jobs are contingent on signing contracts (NDAs, acceptable use policies, background check authorizations). Why would you think it wouldn't be legal? The "threat" is simply to stop employing you, which your employer (in the US) has an almost absolute right to do anyway.
Where's the line then? What could they "force" me to sign and what couldn't they, in your opinion?
My company is going through our SOC2 audit. We do not have such software and everyone is remote. I call BS as to the justification. This smells like a desire for corporate monitoring.
SOC2 isn’t prescriptive. SOC2 is just a certification that you are following your own internal policies.
If the company made the mistake of creating a policy that they use this software as one of their controls, then the auditor will ding them if they don’t use it.
It’s an absurd system.
Yes and no. SOC2 doesn't say you need to install an agent, and may not be explicitly prescriptive about whether computers that have access to production data or systems need encrypted drives, screenlocks, etc. But a non-hack SOC2 auditor is going to expect you to have some reasonable policy and controls in that area. So yeah, the main thrust of SOC2 is "are you following your own internal policies", but the auditors are also expected to hold you to some minimal standards on your policies (or ask you to provide a good explanation why they shouldn't apply in your case). You definitely wouldn't want to tie yourself to a particular agent in your policy, but the auditors will want to see some kind of policy and then require evidence for it, either from something like an agent or from screenshots/etc.
More importantly: once you start using agents as a control, your auditor is very much going to expect you to be consistent about it; consistent enforcement of a documented policy is essentially the core thing SOC2 measures. The whole point is not making random exceptions.
Reminds me when I was doing PCI compliance.
A PCI question asks if all outbound traffic is explicitly authorized. I took that to mean getting a list of all the IPs for the APIs of services we hit, and even constructed that entire list except for one, the payment processor itself.
The payment processor did not have any stable IPs, and could not give me a list. Their official solution was to have our policy be that we explicitly allow _all_ outbound traffic.
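For contrast, the control as intended would look like a default-drop egress policy with an explicit allowlist; a hedged sketch as an nftables ruleset (the address is a TEST-NET documentation placeholder, not a real service IP):

```
# /etc/nftables.conf sketch: all outbound traffic dropped unless explicitly
# authorized. 203.0.113.10 stands in for a known API endpoint.
table inet egress {
    chain out {
        type filter hook output priority 0; policy drop;
        oif lo accept                                  # loopback
        udp dport 53 accept                            # DNS lookups
        ip daddr 203.0.113.10 tcp dport 443 accept     # known API endpoint
    }
}
```

A processor with no stable IPs simply can't be expressed as a rule like the last one, which is how you end up with "allow all" as the official policy.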
If such an option is allowed by PCI, what is even the point of making it a requirement?
As Vendan said, the point is to get explicit acknowledgement that you're aware of something and have either mitigated it or accepted it. Which sounds kind of dumb, like ISO9000 certification, where the joke is "it doesn't matter how bad our processes are as long as we write them down!"
I made that joke to a VP once, and he brightened up and said "Yes! Exactly! Because until you're actually following explicit processes, you don't even know what you're doing wrong, in order to fix it!"
So I'm a lot less cynical about auditing certifications like this now.
The point, with a TON of these certifications/auditing/whatever, is usually "Are you aware of risk X/Y/Z and are you either mitigating it or accepting it?" In this case, you are now aware that all outbound traffic is allowed, and you are accepting that risk as a risk of doing business with that payment processor.
> If such an option is allowed by PCI, what is even the point of making it a requirement?
The point of all those certifications (I took companies through the processes required for PCI, SOC2, and ISO27001) is security theater, a pat on the back for the execs, the ability to say "I'm not to blame, I have this cert" in case of some shit happening, and the ability for sales to throw TLAs at prospects to show how Seriously(tm) the company takes security. Oh, and to check boxes to be able to transact with some large corporations.
There are plenty of stories of highly certified companies that were deeply penetrated and exposed, and all their security theater did not help.
It's an extremely low bar for cluefulness. There is space between the bar and the ground, but most serious going concerns clear it easily unless they screw up the compliance process and make things hard for themselves.
The problem isn't these low bars, but rather the market for services to "help" people clear them, and the widespread perception that the bars are higher than they actually are.
The same thing happened in the early days of ISO 9000
ISO9000: We make a piece of shit product, but it's a very well documented piece of shit product.
A lot of it comes down to the agency that's doing your audit. It's supposed to be a fair process, but just like PCI compliance, there's a huge amount of variability. Most auditing houses are going to be pushing 'solutions' to problems they find so they can milk the company out of more money. It's all snake oil.
Definitely.
My point is that if/when you get to need a SOC2 certification, you put the resources towards this, and you definitely have the financial/org means to procure hardware to suppliers if required.
You're mostly wrong here. First: many, many companies install agents as one of their SOC2-stated controls. More importantly: depending on where they are in their SOC2 process --- ie, if they've already had their Type 1 --- they may essentially be required to keep instrumenting machines: once you state a control, it's a mess to get rid of it. You can't just decide to make a random exception for a noisy contractor.
My company didn't need endpoint monitoring for SOC2, but does need it for ISO27001
Also as a freelancer you may end up with a farm of a dozen single purpose laptops....
All rented out? Seems like an easy win on the PaaS market (and you could make enough profit on them).
Just remember to put customized stickers on them telling which laptop is rented to which customer.
Sure, but I think most corporate spyware prevents things like mining software, because, get real, that's all it will be rented for.
> I think this is the correct advice, but keep in mind that procuring the laptop might be a difficult thing for them to do bureaucratically
That's not OP's problem.
It is if OP loses work because of it.
> Someday you will have an arm full of Apple Watches, one for each client.
Noooo. Armfuls of watches are just for hilarious movies and such. It's not supposed to have a corporate element. The cyberpunk vibes are too much with this.
> If you want to ship me a dedicated laptop for your engagement, I would be happy to install whatever you want on it.
I wouldn't offer this. You're still going to need to login to Github/email/wherever with your personal password, manage private keys, and stuff like that. Just say no.
It's a widespread practice that companies provide laptops to contractors to compartmentalize the way they interact with the company's IT. But I'm really quite opposed to it.
At one point I had 3 sets of machines: two different 14" laptops from two different clients and my own machines. At some point you simply run out of space on your desk and end up constantly either working on screens that are too small (14" really isn't enough to be productive), or plugging laptops into and out of screens as you're context-switching. Carrying three laptops with you when you're travelling, if you anticipate having to work for both clients during that timeframe, is also not exactly my definition of great fun. And you end up duplicating a lot of effort around managing that IT, like tweaking settings the way you like them, etc.
The argument "we own this laptop, so we can do with it whatever we want, including spying on you" is just not valid. They're either doing things that I'm okay with, in which case I'm okay doing it on my own hardware. Or they're doing things I'm opposed to, in which case I'm opposed to it no matter who owns the hardware.
Also: In many European countries, authorities are clamping down hard on practices whereby companies pass people off as contractors who really are employees. They usually work off of lists of criteria of what makes an employee, and if you fit too many of those criteria while, on paper, passing yourself off as a contractor, then you and your client can be in for a world of pain. One of the criteria that makes you look more like a contractor and less like an employee to the government is providing your own facilities like the computer you work with.
And, last but not least, it's just not a good way of dealing with the planet's resources.
I think there are absolutely a list of things that I don't want the company doing on my hardware, but I'm okay with on their hardware.
Off the top of my head, remote wipes/resets make sense. Frankly, I prefer the company has that option, just in case I lose my work laptop. Encryption should cover it, but I'll take the backup.
Compliance agents also have a legitimate reason to exist, but I don't want them on my personal PC. Some places maintain lists of allowed software (I think in part so they can track/inventory them for compliance stuff). I respect that they have the right to restrict what I install on my work laptop, but I reserve the right to install whatever I please on my own computer.
It would also not be insane for a company to do automated backups of company laptops to company servers. You want a way for Joe in marketing to get his data back when his cat pees on his laptop. I do not want all my personal documents on company servers.
This is really the thing people miss. It's a company laptop first and foremost and the right to privacy goes away.
The amount of compromising content we've seen and/or found in investigations is mind-blowing. No one needs that on a work computer. Keep your private life private from your employer.
The OP was about a contractor though. The way I think about somebody who is truly a contractor is that they are their own IT department, and their capabilities in the IT space should be at least on par with whatever the client's IT department enforces for in-house employees.
The above two comments however seem to be arguing from the viewpoint "this is just an individual person and any individual person surely needs babysitting by a big mighty corporate IT department because otherwise they can be expected to do stupid things like losing storage media with important data and not having backups, never doing updates, having their computers full of spyware, intermingling private stuff and work stuff from different clients in such a way that there's data leakage, etc. etc."
If you want to truly treat a contractor as a contractor, you should think about it as your IT needing to interface with their IT in such a way that it makes sense for both parties. And "here, use this laptop" is just frequently a bad solution from the point of view of the contractor's IT.
I also heavily object to the notion that any expectation of privacy goes away on a company laptop.
You can disagree with the expectation of privacy, but it’s been held up in court multiple times that personal actions on a corporate resource are not protected.
Ideologies and realities are different. If you care about personal data, don’t put it on the company's equipment. The company, however, has a huge liability with your personal data. I’ve mentioned elsewhere that I have dealt with issues of personal data becoming an issue for the company via blackmail, and in a couple of cases, the company was legally required to report child pornography. So yeah, if you don’t want the company to know, don’t put it on their equipment. If you buy dedicated equipment for work, use it for work and work only. If you want to use your machine for everything, that’s fine, but understand the risks and the lack of an expectation of privacy.
We're agreed that separation of work and private spheres is good practice.
But I'm not sure what country and what legal concept it is that you are referring to when you say "it's been held up in court multiple times that..." I'm based in Germany and have recently undergone GDPR-related training with a lawyer specializing in privacy law. In the training, the lawyer explained court cases that involved regrettable intermingling of work and private data in a company's IT. The result was that the law then started looking at that company's IT as being more akin to a telecommunication provider, with similar legal provisions coming into effect regarding telecommunication privacy.
Also: Anyone who lets their mind jump straight from "privacy" to "porn" is missing a big part of the picture of what privacy is all about. The way I think about it, it's a basic psychological need. Your psyche can be in a "public mode" where it assumes that any and all information flows emanating from you are out there for everyone to see and do with as they please. The result is that you have to put up huge amounts of self control which is psychologically exhausting. Therefore, the psyche seeks private spaces, where you don't need to control yourself as much because you know that nobody is watching.
The fight for privacy in the digital sphere is about ensuring that, just because our psyches are nowadays constantly linked to digital devices, this doesn't result in our psyches having to operate in "public mode" all the time.
It's about establishing clear delineations of who gets to receive what information flows relating to you and how they can potentially use that information against you.
For example: A company does time tracking through Excel sheets, but they also have IT security logs that keep track of people logging into and out of work machines. One day the company decides to run a project: they put the two data sources side by side and identify employees likely to be cheating on their time sheets. They fire the employees. This sets in motion a psychological effect in the remaining employees: they realize that they have a very poor understanding of what information the company's IT is collecting, and they don't know how that information might one day be used against them. So all they can do is assume the worst. That means putting their psyches in "public mode" all the time, assuming the machine knows and sees everything, and that the employer will use that information against employees at whatever time and in whatever manner suits them. The psychological damage done by this is precisely what we need to avoid!
And the GDPR will usually actually prohibit such things: the company's register of data processing activities will tie the security logs to the purpose of providing IT security, and it will tie the Excel timesheets to the purpose of time tracking. If you start using the security logs for time tracking purposes, you are using the data cross-purpose, are in violation of the GDPR, and risk a hefty fine. This is a model use case of what the GDPR is actually good for, and it clearly relates to protecting individuals' reasonable expectations of privacy in relation to their company's IT.
Very informative. Thanks.
I still have two lying around. One of them was a 15” Dell brick.
I had informed the client that I will be disposing of them when I’m back if they don’t handle it, and that any and all third-party liability will fall on the direct supervisor if he can’t organize the transfer.
Needless to say, even my connecting them directly with the courier was not enough.
My guess is that the OP depends on the money, otherwise he wouldn’t be asking for help. So either buy a cheap laptop and then control it with Barrier[1] from your daily driver, and don’t ask (because whatever you ask, they will probably say no). Or let them ship theirs to you, but I’m willing to bet that it will be worse than whatever second machine you get.
In the meantime I would suggest you look for a new client because judging from experience there is a lot more pain to come. I didn’t do it in time and ended up paying dearly for my lack of initiative on that front.
I have a dedicated laptop for a client that is in a room of my basement. I remote into it from my personal machine whenever I do work for them. Works very well!
How comfortable would you be if you learned that your cloud provider allowed a contractor in a random overseas country to connect to your production servers from a laptop on which he also read his personal email?
Would you like them to have some controls in place to prevent that?
Would you like that to be enforced consistently and audited?
Would you like them to provide you with a certification that their procedures to ensure that doesn’t happen meet some minimal standard?
Congratulations, you have invented ‘demanding SOC2 compliance from vendors’.
And the upshot of it is that some contractors have to put up with jumping through some hoops.
"either working on screens that are too small (14" really isn't enough to be productive)"
I work primarily from a 13" XPS. Given the high-res display, plus being able to switch desktops easily via i3, it's really a non-issue for what I do.
You can also use a dock. For my work laptop, I use the Caldigit TS3+ thunderbolt and it's great.
If you can afford to spend a bit of money on the problem, it's possible to use something like PiKVM or KVM-over-IP to just leave a stack of client laptops or mini-PCs out of the way somewhere and connect to them remotely in a reliable way, so you can reset the machine if the remote desktop software fails.
No sane person would use their personal logins or private keys for customer work. Create ones for that project! Yes, it is a pain but having a bunch of expensive lawyers breathing down your neck is an even bigger one!
You want to separate your customer's work from your personal or other clients' data, even if they don't install any spyware on your computer. How are you supposed to ensure that you don't accidentally breach any NDAs (that you, no doubt, had to sign) if you are commingling the stuff?
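One low-tech way to keep per-client credentials separated is a dedicated SSH key per engagement; a sketch (the host alias and key file name are illustrative placeholders):

```
# ~/.ssh/config sketch -- one dedicated key per client, so client
# credentials never mix with personal ones.
Host github-clientco
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_clientco
    IdentitiesOnly yes
```

A clone via `git clone git@github-clientco:org/repo.git` is then pinned to that client-specific key, and nothing personal is ever presented.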
I’d like to say no, but I’m not sure it’s an unreasonable request.
I’ve recently been contracting and the only private account I used was GitHub, and that was a conscious decision to maintain a single public developer identity.
Otherwise, I expected them to provide all hardware and software required to perform my role.
And likewise, for security purposes, that’s exactly what they wanted as well.
Tho I would note, they wanted to perform a background check prior to starting. And while I didn’t have a problem with the employer having those details (for the period of the contract), they performed this function through a 3rd party, who has stated they won’t delete my data (I will be following up post-contract). I was not happy for a 3rd party to have this data.
I would also not login to or install any apps, or certificates on my phone. Again, if that’s required, send me a phone.
The company I was working for had SOC2 / ISO27001 / whatever, and I think this is exactly why they wanted all this. But it suited me to separate things as well.
Why not give your developers a virtual desktop instead of forcing them to install crapware?
I have worked in such an environment. It was likely running on an overprovisioned server, that had to be accessed via an Internet Explorer ActiveX plugin. I'd rather be using a green-phosphor VT100 at that stage.
> Internet Explorer ActiveX plugin
Virtualized desktops have been solved. All the major players offer them. FFS, you can run Xbox One games in the cloud and play in your browser now.
That sounds like it was a looong time ago. Today's VDs are extremely responsive; just have a look at Shadow, for example.
~6 years ago. I'd be surprised if such things weren't still deployed in places.
That's a solution that used to be common for well-run large shops (sometimes using remotely accessed hardware, prior to modern virtualization), but since it creates work and expenditure for the client, many smaller outfits kind of want it both ways and expect the contractor to fund hardware/software completely identical to what their IT is deploying for employees.
If you can pay $7,500/yr for spyware, you can surely pay for a VDI product.
The problem is rarely the monetary cost alone; it's more about the need to handle a one-off situation that requires special policies and becomes a special item on the budget.
For large companies, supplying VDI to consultants tends to be a standardized package that gets billed back to whatever project is hiring the consultants, but for a mid-sized organisation VDI is a big scary word that's going to require special handling.
Most desktop support teams are completely dependent on standardization, to the point where they tend to turn into complete control freaks who panic at the thought of anything that is not "by the book", so they often just apply the book to external consultants without any additional budget.
I am not interested in politics or humans when the problem is a technical one. I understand you, but that's my stand ;)
why not spin up a VM dedicated to that client and confine all crapware to that VM?
Absolutely!
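One way to sketch that for anyone curious: a per-client Vagrantfile keeps every piece of client-mandated software inside a disposable VM. The box, client name, and resource numbers below are illustrative assumptions, not anyone's actual setup.

```ruby
# Vagrantfile - one disposable VM per client; all client-mandated
# software (agents, VPNs, etc.) lives inside it, never on the host.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.hostname = "client-acme-dev"   # hypothetical client name

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 8192
    vb.cpus = 4
  end

  # Share only this client's project directory - nothing else from the host.
  config.vm.synced_folder "./acme-project", "/home/vagrant/project"
end
```

`vagrant up` to start, `vagrant destroy` when the engagement ends; the agent never sees the host filesystem or other clients' data.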
One of the legal guidelines for whether or not someone is a contractor (vs an employee) is whether they provide their own tools.
It’s not a hard rule. It’s just one of a number of tests. But contractors are generally expected to provide their own tools.
But in that case I’d agree with the OP. I’m not installing what ever you want on my hardware.
I probably was more of a “temporary employee” than a contractor. But what’s the difference at that point? I was paid more than the value of entitlements as cash. It suited both parties, and was mutually agreed.
In hindsight, having them provide the hardware, and then handing it back at the end of the engagement would be my preference. It reduced any risks for them and me.
Tho I can easily imagine on/off or short infrequent contracting scenarios that this would not work for.
> I probably was more of a “temporary employee” than a contractor. But what’s the difference at that point?
If you’re a temporary employee, the employer is responsible for payroll taxes and has additional obligations to you (depending on the state). Your obligations, both to your employer and to the IRS, are different as well.
In addition to unlawfully skirting regulations, misclassifying an employee as a contractor is essentially stealing from the employee: it reduces the company’s tax burden and increases the employee’s.
You could reasonably invoice an upfront tooling cost ("isolated development workstation to meet customer IT policy requirements. Apple Macbook Foo, $BIGNUM")
If you want to be really cheeky, could get some value-added margin on it too, and as a bonus, AIUI it would be yours to keep after the engagement, rather than having to return hardware they've assigned to you.
Might be tough to get past finance though, unless they really want that certification :)
We've got a few machines we've inherited this way. For one client they needed confirmation that we had a sanitized infrastructure for our pentest and that we captured and logged all traffic to and fro. We billed them for the equipment and network link etc. Upon completion of the project we provided a certified copy of data destruction in accordance with their policies / guidelines and were left with a bunch of stuff. Scored two GPU cracking rigs, couple laptops, and some Cisco gear. I didn't complain. Granted now those rigs are a bit dated, but at the time 1080's were hard to get during the last mining rush so they definitely weren't cheap.
If they are sending a dedicated laptop, I'd recommend using dedicated accounts for that client. I don't use personal accounts on my work system even as a full-time employee.
I would strongly advise to not use personal accounts for customer.
Yeah like sending invoices arranging contracts I use my main mail account. For doing customer things I rather setup new email account or get one from the customer.
Log in to GitHub/whatever on your other laptop. Then e-mail the bits of code you need from one to the other. When asked why it's taking longer vs. other clients, point to this as the culprit. And also charge this client more.
Between a) using my own machine, b) using a company machine and c) firing the client, option B is pretty sensible. Option C is for when the client won't compromise at all.
You can always make separate github accounts, SSH keys, and so on, specifically for the job.
Just compartmentalize your passwords and private keys. That said, private keys are generated on the dedicated hardware and always stay there - that's all you need to manage them.
+ 2fa + sensible rotation of keys and passwords (especially if you use say github for personal / work / multiple clients) - they can have the password at that point and it is of limited use.
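For the GitHub-per-client case above, the mechanics are just an extra key plus a host alias. A minimal sketch (the client name "acmecorp" and the file names are made up):

```shell
# Generate a dedicated SSH key for this client only
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -C "dev@acmecorp-contract" \
    -f ~/.ssh/id_ed25519_acmecorp -N ""

# Host alias: anything cloned via "github-acmecorp" uses only that key
cat >> ~/.ssh/config <<'EOF'
Host github-acmecorp
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_acmecorp
    IdentitiesOnly yes
EOF
```

After registering the public key under the client-specific GitHub account, `git clone git@github-acmecorp:acmecorp/repo.git` works without ever touching your personal identity.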
I store all my keys in a yubikey (including ssh keys). The client can’t have the keys if I don’t have the keys.
I don't use my private accounts for anything related to this client.
> personal
Uh, hell no. You should have business accounts.
Personal and business never mix.
if only you could make a new set of those...
a lot of developers have separate everything
this can be bad when encountering recruiters and hiring managers who still treat GitHub activity as clout, but these days you still have to pass the technical interview no matter how much clout you have, so I wouldn't worry about it
> If you want to ship me a dedicated laptop for your engagement, I would be happy to install whatever you want on it
And they will install a trojan which will eavesdrop on your conversations, scan your home network, and analyze its traffic.
Capture traffic and sue the f outta everything they do that‘s not covered by a contract.
Someone has to start stopping this madness and protect less-informed people. We are all heading into a dark future. And I lose hope when I see all these smart programmers complaining but not stepping up.
> Capture traffic and sue the f outta everything they do that‘s not covered by a contract.
Whatever the app they ask you to install does, it will probably be allowed by its EULA (and I bet the EULA also prohibits you from analyzing the app and whatever it does/communicates), and chances are you won't read it. Even if you do, you'll most probably agree anyway, because you know all EULAs are brutal and there is never a button to object to a specific part and continue.
What we need is legislation that recognizes all the data and metadata about your PC, all its software, and your home networks/devices and their usage as a kind of personal data, and applies the same rules the GDPR applies to tracking cookies: giving you the right to continue without agreeing to be spied on.
EULA has nothing to do with this. Question is whether software installed by company you’re working for spies on you in ways you haven’t agreed to. If so, sue them, not the co that wrote the spyware.
EULAs are irrelevant if they go against the law. You don’t renounce all your rights because some law intern wrote in the EULA that you sold yourself into chattel slavery. And for those things you can give up (such as some of your data) you have to give explicit consent, clicking “I agree” under a 500 page unreadable legal document is not enough.
The EULA is between the user and the vendor of the software; it isn't an agreement between the user and the user's employer. Capturing traffic like that runs afoul of hacking laws.
The employer can just give you an executable and say you must install it. The executable would show you an EULA and require you to accept it.
That's a stretch.
Most companies don't want to spy on their employees' free time. They want to 1) make sure they are compliant with the law and that their confidential stuff is secure, and 2) make sure that you are actually working for them when you say you are.
Installing something to listen to non-work related stuff serves neither of these goals, and would open them up to lawsuits and PR nightmares.
The people who decide the spyware must be installed are likely so far removed from the actual spyware and the people who admin it as to have no knowledge or understanding of the distinction.
Yep, exactly. I used to put client devices on a segregated network and tunnel their traffic out to pfSense running on a cheap cloud box somewhere. Worked well.
(I should say that intentional monitoring of my private comms was never a concern for me when I freelanced, but I was somewhat worried about infections in my clients' devices moving laterally to my home network.)
As someone who isn't well versed in networking could you describe your setup in overview? Like, what software/hardware, etc.? Thank you
Mini-rant: people on hacker news frequently undervalue their knowledge and don't consider the things they know to be of much worth. A classic example of this is the "I can't see why dropbox is a thing, simply build a cloud file sync service - easy" post. This poster is running their own semi-professional router (pfsense) on whitebox hardware. I would not want to do that unless I already knew what I was doing or wanted to spend some time learning.
The type of network segmentation being discussed is not rocket science, but it's not trivial either. VLAN segmentation can have tricky edge cases that cause things to break in non-obvious ways; nothing that can't be worked around, but for someone who "isn't well versed in networking" it would probably be more than you're up for. Also keep in mind that you can't do this with most consumer networking gear, because it's too complicated to set up and support without some experience and knowledge.
I'd not recommend VLAN segmentation unless you want to become someone who is more versed, which I don't oppose, but it's not a switch you can flip in 5 seconds and never think about it again.
I very much agree with this. I don't think that this is the solution for most home users/consultants working from home.
The more obvious solution would be to get a separate WiFi router and internet connection strictly for work purposes.
At that point you could also consider it a 100% home office expense and it may be tax deductible (talk to your accountant).
Yes.
Sure. I didn't actually use a VLAN: I had a spare TP-Link router lying around, so I installed OpenWRT[1] on that and gave it a static IP on the home network side, then plugged it into my broadband provider's box. On the cloud side, I basically followed a guide, maybe [2] but I don't remember exactly. Once I had pfSense installed, I first set it up as an OpenVPN server.
I then went back and configured the OpenWRT box to create a WiFi hotspot and serve DHCP on a different subnet to that used by the home network. I configured an OpenVPN client tunnel from the router to pfSense, then set up a NAT ("masquerade") from the segregated network into the tunnel. I think I actually left a couple of ports open on the OpenWRT from the segregated network, but properly I should have firewalled them off so that the router was only accessible from the home network, since I doubt OpenWRT has been seriously pen tested by anyone. I'd probably also use Wireguard if I did it again.
The above config worked, but the CPU on the TP-Link was too underpowered to get more than a few Mbit/sec throughput. Since I didn't particularly care about having a VPN (I was going to throw this traffic on the internet anyway), I messed around and managed to change the tunnel type to L2TP. L2TP pretty much just takes the packet you give it and adds a UDP header for routing, so that approach gave me full bandwidth. I think I had to mess around a bit more getting MTUs set correctly to account for the L2TP header, and maybe had some trouble with auto-restarting the tunnel on failure.
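For reference, the OpenWrt side of a setup like that boils down to a separate interface plus a firewall zone that may only forward into the tunnel. A rough UCI sketch (interface names, subnet, and zone names are assumptions, not the original config):

```
# /etc/config/network - segregated client subnet on its own interface
config interface 'clientnet'
        option proto 'static'
        option ipaddr '192.168.50.1'
        option netmask '255.255.255.0'

# /etc/config/firewall - clientnet may only forward into the VPN zone,
# never into the home LAN; NAT happens at the tunnel end
config zone
        option name 'clientzone'
        option network 'clientnet'
        option input 'REJECT'
        option output 'ACCEPT'
        option forward 'REJECT'

config forwarding
        option src 'clientzone'
        option dest 'vpn'
```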
One of the (flagged) responses to my original comment was "Who the fuck has the time to do that?" I actually think that is a fair comment. This all took a day or two to set up and debug, it isn't something that the casual user is going to do and, to be honest, I probably wouldn't have done it either except that I wanted to play with pfSense.
I'd do it again, though -- it was fun.
[2] https://silasthomas.medium.com/how-to-import-a-pfsense-firew...
Thank you! Though I agree with the sibling comments that it's probably not something to dabble in unless you're pretty comfortable with this... May I ask, is this somewhat in your area of expertise? What kind of development do you do (supposing you're a developer)? Sorry if too inquisitive, just curious :).
I think you should dabble in it! This stuff isn't magic, it's just a bit esoteric in places. That makes it a great (and valuable) skill to learn.
I do cyber/data stuff, often on the network-y end.
Who the fuck has time to do that?
This is what I do for a variety of different things:
- $CORP devices are on a VLAN + Wifi that has access to the internet, but no other internal networks
- Internal network for file servers/printers and the like
- Personal device network, think laptops/phones/tablets that are mine, can reach internal network
- IoT network - Think sensors/robot vacuums/"smart devices"
- Guest network, for well visitors to my home
- AirPlay network, has all my Apple devices on it to allow for music to be airplay'ed to TV's/HomePods, can be reached from internal/personal/guest network
Now, I also understand that I am an outlier: I am running a fully segmented, firewalled, traffic-inspected/logged home network with small-business or even large-business network gear, with FreeBSD as the router/firewall and a managed switch/WAP platform from TP-Link. This is not something the average home user or consultant is going to set up/configure/manage, and I don't expect it either.
The worry that the $CORP device will be abused to "validate the security of the network its connected to" is very much a possibility. Most corporations have no desire to do so, and endpoint protection is their primary goal, and they don't need to scan your home network to do so, it is all local to the device. It's about protecting the integrity of the device, not the rest of the network around it.
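On a FreeBSD router like that, the "$CORP gets internet but nothing internal" rule is a couple of lines of pf. A hedged sketch (interface and subnet names are invented for illustration):

```
# /etc/pf.conf fragment: corp VLAN gets internet, nothing internal
corp_if  = "vlan30"
home_net = "10.0.10.0/24"

block in quick on $corp_if from any to $home_net
pass  in       on $corp_if from any to any keep state
```

The `quick` rule short-circuits evaluation, so corp-to-home traffic is dropped before the general pass rule applies.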
Which TP-Link switch do you use?
I use the Omada SDN business line-up.
> And they will install a trojan
They WILL
> which would eavesdrop your talks
That absolutely WILL - use the microphone and listen to everything being said (why not the camera too, and watch everything?)
> scan your home network and analyze its traffic.
That absolutely WILL - do all this stuff
I mean, if this is definitely going to happen, then the company can go ahead and cut an 8-figure cheque straight away. Which company, and where do I sign up?
> why not the camera too and watch everthing
Because who doesn't have a lid/sticker over their webcam yet?
I have also kill-switched the built-in mic in the BIOS set-up but I'm not sure how secure this is. I would prefer there to be no built-in microphones in any hardware (except phones) at all. Sadly every modern laptop is equipped with a mic.
Most mics have mediocre sensitivity. Stick a blob of blu-tack or similar cohesive putty on it, take it off when you want to use the mic. Test by making recordings -- some machines have two or three mics.
That's one plus point for desktops and nettops with no-frills motherboards. No mic, no speakers, no bluetooth, no wifi.
If this is really the kind of things this company would do… why are you working for them?
Because people need money to pay bills. And it's a sad fact that companies who do shady shit tend to have more money. Simply because they outcompete companies who don't, all else being equal.
Tugging against this evolutionary pressure is really hard, not only for individuals but also on the society level.
"Just quit LOL!" is a commendable act of grassroots activism, but not everyone is able (or willing) to afford such luxury.
> If this is really the kind of things this company would do…
Because, until last week they didn't.
Now I have to figure out if I'm just an unreasonable, stubborn old guy, or if this requirement is out of band.
Tell them to provide you with a virtual desktop or a dedicated laptop. Never install spyware on your own machine.
I like others idea of getting them to spring for a dedicated computer. You just have to make it palatable to their accounting department. Maybe lease a laptop and expense the payments or something. If you have an accountant, I'd consider asking them for suggestions on "good ways" on how to make your client pay for your laptop.
In my case, I just outright said "hey, you guys I really want you to be my client but I'm gonna need a new laptop". So we bought a new laptop as part of the contract.
Presumably the client will terminate your contract if you don't comply. So you don't actually have to install the agent until the day they terminate the contract, which is also the day you no longer have to install the agent.
It is out of band. Refuse or this becomes normal.
That's why you tape off cameras and stick a needle in condenser mics on any new laptop.
Don't they use MEMS microphones nowadays? (https://en.wikipedia.org/wiki/Microelectromechanical_systems) Either way, solid advice!
Learn a thing every day, I guess. Nonetheless, it's inconsequential to the task at hand if you apply the "Rosemary Kennedy" method to the laptobotomy: activate the mic, jam that needle in the hole and wiggle it around until it can no longer sing, no activity is detected on the VU meters, and nothing is played back in the recording.
You have audio/video input sensors on all sorts of programmable devices nowadays; even some TVs can be turned into two-way communication devices. That's a nice 1984-ish vibe to ruin your late-night matrimonial TV browsing.
sticking a needle in a MEMS microphone is just as effective, you just have to [find the damn thing, and] push the needle all the way through!
If brute force isn't working, you're just not using enough of it yet.
Where are these IT depts that have all this unlimited time and resources to spy on their employees?
Sure you might get a bad actor voyeur, but as a matter of policy, companies just don't care what you're doing at home as long as their security interests are protected.
Don't put a work laptop on your internal home network!
I have a separate network for work machines at home, which goes straight to Internet and can't route in or out of my actual home network which is behind its own firewall.
Go to your router's settings and create a separate wifi for just these work situations. Connect client hardware to that wifi only. This keeps them out of your LAN. If your router can't do this, get a better router.
You keep your work laptop in your home network? Tut tut.
If I couldn't trust my company on my home network, why would I work for them?
> If I couldn't trust my company on my home network, why would I work for them?
Your employer isn't your friend.
It might be an awesome company to work for perhaps, but it's still a company (unless you work for like a 2 person startup). A company subject to audits and regulations and all kinds of other pressures (some of them actually valid, though many are theater) to monitor and control data and flows on their hardware.
You don't want those monitors etc on your personal data and network which has nothing to do with work.
So, keep them separate is the best possible advice.
Yes my company monitors the network at work and on my work computer, as they must do so.
But they are not allowed to scan my home network and other devices, and they have no reason to break the law. I trust them to not do that more than half of the devices running on my home network.
Many of these technologies have been built in the assumption of a world where they run in the company LAN.
No reason to trust, better to isolate.
My employer's laptop is on a separate VLAN. What makes you so sure that no one other than the employer has access to it? This laptop has Windows 10 installed, for example, and a shit-ton of McAfee crap. I would trust my employer, but not the many companies who have a foothold on the machine as well, because my employer is too cheap to install decent stuff on it.
I work from home ... pandemic and all that. What else am I supposed to do?
Not really. VLANs provide segmentation, but they don't provide any mechanism to limit access to the other VLANs in your network, which are most likely routed by your router. You need to add some L3 filtering (ACLs/iptables/whatever) to isolate the segments.
That's only true if the use of VLAN tags is controlled by hosts; if you use a smart switch to assign VLANs to ports, it's pretty much as if you have multiple, physically separated networks.
You'd still need the router to tag/untag those VLANs and allow traffic to flow. So if the router does VLAN tagging but just routes freely between the different network segments, you haven't fixed anything.
You'd also need a firewall, and to configure it correctly.
All nice, but now you need managed switches and stuff plus some amount of unbillable time to configure it all and fix it when it breaks. Might be worth it if your bill rate accommodates it though.
Segmenting your broadcast domains doesn't help much if traffic is routed freely between them.
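To make the point concrete: on a Linux router the missing piece is an L3 filter in the forward path, e.g. an nftables fragment like this (the subnets are placeholders):

```
# /etc/nftables.conf - VLANs give L2 separation; the router must
# additionally refuse to route between the segments at L3
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy accept;

        # work segment and home LAN may not reach each other
        ip saddr 192.168.30.0/24 ip daddr 192.168.1.0/24 drop
        ip saddr 192.168.1.0/24 ip daddr 192.168.30.0/24 drop
    }
}
```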
Sounds like a great approach. Any recommendations for such a switch for WFH?
Doing independent technical consulting for over a decade, I had only a small number of high-value clients, so I ended up dedicating a ThinkPad to each client.
I also had an email account specific to each client, so that the mail program on that ThinkPad accessed only the respective account.
There are many reasons to do one laptop per client (especially if you're WFH, not traveling), including not exposing personal and other-client stuff to whatever weird stuff is in the build environment of one client.
Another reason, though this never came up for me, is that there can be legal orders to permit inspection of the computer, online accounts, etc., including by computer forensics. If that happened for one client, that could be in conflict with your obligations to another client (as well as in conflict with your SO's private vacation photos, if they were on the same device). Being able to reassure that everything for a client was compartmentalized to certain devices and accounts might come in handy.
At one point, I even had color-coded labelmaker tape to help keep track of what was compartmentalized to what. And photos of the devices with the physical labeling on them, in case I ever needed to convey that I took it seriously.
(Related: One time, I had a hard drive fail such that (despite encryption) I couldn't do an approved wipe of it before disposal or warranty return. That client's compliance policies required that I physically destroy the drive platters, and ship the remnants to them via Registered Mail. It was a slightly fun/cool exercise, especially since the platters shattered nicely. And the neighborhood of $100 lost was well-invested in professionalism goodwill with a client who paid a few orders of magnitude of that amount over time.)
(In some ways, I'm now happy to no longer be running a consulting business, mainly because a predictable, consistent amount of money just appears in my bank account every couple weeks. :)
> If you want to ship me a dedicated laptop for your engagement, I would be happy to install whatever you want on it."
That's what Statnett in Norway did when I did some work for them a year ago (Lenovo X1 Carbon). The difference being that installing anything on it was pretty much impossible. All traffic went through the Statnett VPN. It's the most security conscious company I have had any experience of.
But I was also able to use my own laptop by installing the Citrix client and that was much better. I had never used Citrix before and was pleasantly surprised at how fast it was.
Keeping their software segregated is sound advice, but as they are your client there are a couple of other ways I'd offer to handle it:
1) Keep all software related to work for their company segregated inside a VM. Then you can install whatever they require without interfering with your main system or potentially exposing data for other clients.
2) If they want a separate physical system, tell them you would be happy to provide it for a fee: an upfront fee for the cost of the system and an ongoing fee for maintenance of it. Be sure to mark everything up as you don't work for free.
Since you're not an employee, you really shouldn't be asking them to provide hardware as (in the U.S., at least) this could create tax problems.
We provide hardware to contractors all the time. They don't own the hardware so the loaner does not trigger a taxable event for them. When the contract is over, they return the hardware. The hard part about this in the current day and age is getting the units through customs.
In the U.S. the issue isn't a taxable event, as in a gift. The issue is 'independent' contractors working on equipment supplied by their 'client'. This is one of several tests the IRS can use to reclassify them as an employee for tax purposes.
This tends to be more of an issue for solo contractors rather than contract houses as the IRS tends to look the other way on most of the larger contracting outfits.
Can you run all that in a cloud vm, and RDP into it for work?
This is the better option. I used to have a separate laptop just for one client and it was a pain in the ass. What happens when it gets damaged? They could charge you for the repair. Not worth the hassle of keeping track of it, lugging it around everywhere, keeping it charged & updated. Definitely a pain in the ass. Just do a cloud VM that the client owns with VPN access to the client's network. These are common now with Amazon Workspaces.
Seconded. We use Amazon Workspaces with access to our VPC for an offshore contracting firm we are working with.
^^^ - This!
You don't 'require' anything on equipment you don't provide. full stop. period.
> If you are a freelancer then your contract should allow you to do work for others
Not only that, in some EU countries it's even illegal for a freelancer to work for just one customer.
> Not only that, in some EU countries it's even illegal for a freelancer to work for just one customer.
How does that work? Is it more "exclusively for one customer", similar to how the IRS rules work?
Simply - if you are working only for a single customer within a certain time period/receiving the majority of your income from a single source, you are considered to be an undeclared employee and not really a freelancer (= business) - and that exposes both you and the company to big fines. It is not the only criteria for this but a pretty large one.
Both France and Germany have such laws but other countries do too.
The point is that many companies would otherwise exploit workers by forcing them into becoming freelancers, because then they don't have to provide the legally prescribed healthcare benefits, paid vacation, pension contributions, etc. that employees get.
And at the same time those workers aren't really in a position to meaningfully negotiate their contracts to e.g. include extra pay for the missing vacation or healthcare/pension insurance. I.e. we are not talking about IT consultants but delivery drivers, cleaners, etc. - low-paying jobs.
I.e. Uber's business model - and that's exactly why it was banned/had big problems in many EU countries (in addition to completely flouting the existing taxi service regulations).
In France, in practice you can work 2-3 years for the same client without too much trouble, from what I could see. They just make sure to change the mission once in a while, so that the contract doesn't look like it's lasting too long.
Well, that it is poorly enforced doesn't make it any less illegal. If you get an audit from the social security or tax office there, good luck.
The UK has this law too. If you freelance for just one company you'll be classed as an employee of that company instead of self-employed.
The reason is companies were abusing self employment laws by only recruiting freelancers even for full time roles so they didn’t have to provide sick pay/parental leave/holidays/pensions/etc.
This is almost exactly what I said when I was asked to install end-to-end protection software into my laptop as a freelancer. I stood my ground and they understood. They initially said they would send me another laptop to work from, but eventually relented on the requirements altogether and just limited my access to customer data.
I would just charge them for a new laptop that is used exclusively for that project.
This sounds good on the surface but then you are still giving in to unnecessary surveillance during your workday.
I would want a big compensation increase to deal with this. Like on the order of 2x my rate.
The concept of "working from home" forced by the pandemic is harming the "remote working" community by bringing extensive surveillance and harmful office behaviors into private space.
I talked to an Intel HR person (an informal chat; I never applied there nor planned to) 2-3 months ago. When I said that, after a decade of remote work, I see the pandemic driving the introduction of harmful practices like spying on previously trusted contractors by control-freak managers who have no idea how to prove themselves in the new reality, I was given a look usually reserved for a psychotic person who believes they're being watched 24/7. Quite a unique experience, contrasting with how HR folks are trained to do sect-like "love bombing".
You want me to work for you and deliver results? My pleasure - that's what I do.
You want me to hang a company logo in my place and sit in front of camera multiple times a day, log every minute of my time and creep on me in other ways? I never worked in "Office Space"-like environment and I'm not planning to. Go fuck yourself, I'm out.
This reminds me of how adblocking software worked great when only a handful of nerds used it. Then when adblocking became more mainstream, sites bothered to develop workarounds to make ads show up anyway. Adblockers now try to develop workarounds to the workarounds, but it's a constant battle.
Was it great when I was the only one with adblocking software? Yeah, for me. But it was worse for society as a whole, most of whose members had no adblocking whatsoever.
And now we have the majority-share browser vendor Google moving to cripple ad blockers with ManifestV3.
So I guess we're winning?
We as in tech nerds, or we as in whole society?
We just need some proper laws to remove the possibility of interfering with adblocking software
> sect-like "love bombing"
So poetic.
There's a load of nonsense in the comments here today.
* Drata is a vendor that helps a company navigate the SOC 2 compliance process by organizing all the controls and helping you gather evidence that you have implemented them. For instance, they'll connect to GitHub and make sure everyone with access to your repos is a company employee. If you don't use Drata you have to gather this evidence yourself, repeatedly over months, and it's a pain.
* The Drata agent is a pretty innocuous thing. It checks that you have done things like turn on disk encryption, enable updates, and set the screen to lock when you walk away. It does NOT monitor employees' activities. These kinds of security checks are incredibly common and are required for certifications like ISO27001 and SOC2. SOC2 is not really optional for a large enough b2b SaaS.
* The poster says "Their business model (in my case) seems to be to take money from companies to spy on their employees/contractors, and then they sell the employees/contractors private information to "targeted advertising".
Do you have any evidence for this?? I've just been involved in selecting Drata as a vendor for SOC2 compliance planning for our company. If this is true it's a huge deal and totally against my understanding of their business model. It honestly sounds like bullshit to me! But if you have evidence that they do this, please let us know.
* As a freelancer, whether you are required to install security monitoring software is definitely an open question. If you're delivering work separately and not connected to company systems, then OK. If you're basically acting like any other employee and are connected to the company's systems, then you will probably have to do this, because otherwise they would fail SOC2, and managing your legal status as "freelancer" vs. "employee" (for tax reasons?) is not worth failing certification.
> The poster says "Their business model (in my case) seems to be to take money from companies to spy on their employees/contractors, and then they sell the employees/contractors private information to "targeted advertising". Do you have any evidence for this??
I read their Privacy Policy. They are quite explicit about what they plan to do to you. I raised the issue with them in an email (among 5 other issues). Their reply is in the header. Another issue I raised is that they expect me to accept undisclosed terms and conditions.
That said: I have worked with computer security at an advanced level, including consulting, training, penetration testing, design/implementation of x-platform server agents for monitoring and alerting, and design/implementation of firewalls. Once I designed and implemented a system to handle NATO secrets (not very sensitive secrets, but still secrets) for a military subcontractor in the EU. My computer is relatively secure. I follow best practices - and more. A hostile agent would decrease the security of my network. That was my first thought when I got that awful email.
Also, I don't get why they need such a thing on my PC. I don't have credentials to production systems or production data. I am a software developer. I work with code and documentation. That's it. Anything I produce is reviewed by other developers and then tested independently by QA.
I am careful to be a freelancer, and not an employee, for several reasons. It means that legally I'm my own boss. That feels good (I have a great boss!). It also makes it unproblematic to work on open source projects, without getting into discussions about who owns the intellectual property rights to that work.
> I read their Privacy Policy. They are quite explicit about what they plan to do to you.
OK, well, I've skimmed it and I can't see anything that suggests they are going to spy on our employees and sell the data to advertisers. I hate to drop it back on you but which passages make you think they do that?
These things do often sound terrifying because things like "I'm going to use Google Analytics to see which parts of the product people aren't using so we can email them reminders" get turned into passages like "We will upload all your activity to a third-party advertising company for marketing purposes".
> I have worked with computer security at an advanced level ... A hostile agent would decrease the security on my network. That was my first thought when I got that awful email.
I believe you! 100%!
But you are unusual, and without verification a control such as "All laptops should have screens that lock after 5 minutes" won't be followed by everyone. NOT EVEN CLOSE to everyone.
> Also, I don't get it why that need such a thing on my PC. I don't have credentials to production systems or production data. I am a software developer. I work with code and documentation. That's it.
Sure. Another commenter in the thread has said that because of that, this isn't strictly required for SOC2. I'm sure they're right... but I'm not sure I want anyone working on our codebase at all who doesn't have basic security settings enabled on their laptop. (Again, I know YOU do :) )
Back to the using your own computer thing again - this is why I think lots of companies say "You use our hardware for all company work but IF you really really want to do BYOD then you have to accept some of these agents". Not sure if that's the attitude at your firm, but that seems reasonable.
> OK, well, I've skimmed it and I can't see anything that suggests they are going to spy on our employees and sell the data to advertisers
"We, our service providers and our third-party advertising partners may collect and use your personal information for marketing and advertising purposes: ... Interest-based advertising. ... We may also share information about our users with these companies to facilitate interest-based advertising ... We may create anonymous, aggregated or de-identified data from your personal information and other individuals whose personal information we collect ... and share it with third parties for our lawful business purposes"
Such "de-identified data" is often trivial to re-identify. There are research papers about that. It's well known in the security and privacy community.
Also, they use dark patterns even for opting out of their own use of your personal data for advertising: "You may opt out of marketing-related emails by following the opt-out or unsubscribe instructions at the bottom of the email, or by contacting us at ..."
If Drata intended to be a nice, trustworthy security partner, any use of personal data for targeted marketing, or any sale of personal information, would be opt-in, not "opt out if you can figure out how ...".
I have not read their terms and conditions or even their glossy information about the agent. I never got that far, as I declined to accept the terms and conditions for using their website. Already at that point, I saw red flags the size of Australia.
I don't believe for one second that Drata has any intention of showing decency, or that they act in good faith towards their customers or anyone else. If they did, they would have developed reasonable terms and conditions. What they have doesn't even distinguish clearly between the roles of a customer and an employee or contractor of their customers. Hell, they don't even define the term "Customer".
I think the OP is talking about the "How We Use Your Personal Information" on https://drata.com/privacy
That would seem to only pertain to their website. Yes, they're going to want to market it to you, so that makes sense.
The actual privacy policy for the product the OP is using is likely found in the contract Drata signed with the client company.
> You are 100% correct.
> Source: I am the Drata CISO
May be you should go over your user agreement documents and:
1) Make sure that all relevant information is available, so a user can make an informed decision.
2) Distinguish between the user roles, and have different agreements for the different roles. One role is your customer. A second role is the employee of your customer. A third role is the contractor for your customer. A potential fourth role is the person(s) working for the customer that is responsible for dealing with personal and confidential information related to you, employees and contractors.
As of today, your user agreement is a mess, appearing to be something you have copied and pasted together without much thought, except for how to cover your own asses. Including the much-ridiculed clause Microsoft is infamous for, warning that your software is unfit and unusable for any purpose.
it sounds like you’re trolling tbh
FWIW: I never do personal stuff on company hardware. I always assume that anything I do on company hardware can be tracked, even if no one is deliberately trying to track me.
I think you have 4 options:
- If you use a company-provided computer, install it on their computer. It's their computer, not yours.
- If you use your own computer, set up a VM for this client and install the agent in the VM. Then do your day-to-day work inside of the VM.
- If you use your own computer, buy / expense a dedicated computer for the job
- Politely refuse to install it and accept the consequences. (IE, you might be out of a job.)
Remember that you are paid to do a job. If you don't like the conditions of the job, you can always walk away. As an earlier poster mentioned, this tool appears rather benign.
Regarding company hardware and tracking: many companies set up things like automatic backups, snapshotting, etc. These aren't meant to track you, but if you're doing personal stuff on your computer, it's very easy to accidentally leak things into the company backup that you might not want there.
Unfortunately, "I know a lot about security, trust me bro" doesn't satisfy the compliance box-tickers, for better or worse.
Maybe run it inside a VM?
>> The poster says "Their business model (in my case) seems to be to take money from companies to spy on their employees/contractors, and then they sell the employees/contractors private information to "targeted advertising". Do you have any evidence for this??
> I read their Privacy Policy. They are quite explicit about what they plan to do to you.
That's not evidence, only your interpretation.
> We do not share your personal information with third parties without your consent, except in the following circumstances or as described in this Privacy Policy:
> Affiliates. We may share your personal information with our corporate parent, subsidiaries, and affiliates, for purposes consistent with this Privacy Policy.
> Service providers. We may share your personal information with third party companies and individuals that provide services on our behalf or help us operate the Service (such as customer support, hosting, analytics, email delivery, marketing, and database management services). These third parties may use your personal information only as directed or authorized by us and in a manner consistent with this Privacy Policy, and are prohibited from using or disclosing your information for any other purpose.
> Partners. We may sometimes share your personal information with partners or enable partners to collect information directly via our Service.
> Professional advisors. We may disclose your personal information to professional advisors, such as lawyers, bankers, auditors and insurers, where necessary in the course of the professional services that they render to us.
> For compliance, fraud prevention and safety. We may share your personal information for the compliance, fraud prevention and safety purposes described above.
> Business transfers. We may sell, transfer or otherwise share some or all of our business or assets, including your personal information, in connection with a business transaction (or potential business transaction) such as a corporate divestiture, merger, consolidation, acquisition, reorganization or sale of assets, or in the event of bankruptcy or dissolution.
That is in their privacy policy (https://drata.com/privacy). They go into even more detail about the advertising they serve to you based on the information they collect. They suggest a drawn-out method to opt out of their advertising tracking. Honestly, this alone would rule out Drata for any similar project I was considering. How in the world is it acceptable for a security and compliance tool to gather and store personal data for the purposes of marketing?? With all due respect, how did you miss this???
The first paragraph says that it is the privacy policy with respect to the website. Why do you think it covers the data collected by the agent?
> Why do you think it covers the data collected by the agent?
The agent is not the only concern. Before you even get to install the agent, you have to provide personal information to their website (I believe - I don't know, because I rejected the TOS and don't have access to the non-public part of their website).
The thing is - Drata collects mandatory information from their customers' employees and contractors through their website. The TOS for the website is explicit about how they plan to use that information.
This is correct. The privacy policy listed here is for the website.
Source: I am the Drata CISO
> This is correct. The privacy policy listed here is for the website.
Is that the 100% honest answer?
From my understanding, your website is where you collect my mandatory personal information, if I agree with your TOS. It's not just a glossy brochure for your product - it is your product.
I cannot choose what I share with you. But you can choose with whom you share what information I provide. And from your website's TOS, you seem very eager to share it.
> Drata is a vendor that helps a company navigate your SOC2 compliance process, by organizing all the controls and helping you gather evidence that you have done so. For instance, they'll connect with Github and make sure everyone with access to your repos is a company employee.
In other words, Drata is a CYAaaS.
> The Drata agent is a pretty innocuous thing. It checks you have done things like turn on disk encryption, have updates enabled, and that the screen locks if you walk away.
Guessing it is privileged and self-updating.
> Guessing it is privileged and self-updating.
And closed source, of course. Must be totally free of bugs too, like all software.
At least it's using some OSS, like osquery; cf. <https://cdn.drata.com/agent/osquery/queries.json>, so you can easily see quite a bit of what it's going to gather.
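Since the published query list is just JSON, it's easy to inspect directly. A minimal Python sketch of how you might list what an osquery-based agent is scheduled to collect (the inline sample below is modeled on osquery's "query pack" format; the actual structure of Drata's queries.json may differ):

```python
import json

# Hypothetical sample in osquery "query pack" style -- NOT Drata's actual file,
# just an illustration of what such a schedule typically looks like.
sample = json.loads("""
{
  "queries": {
    "disk_encryption": {
      "query": "SELECT * FROM disk_encryption;",
      "interval": 3600,
      "description": "Check whether full-disk encryption is enabled"
    },
    "screen_lock": {
      "query": "SELECT * FROM screenlock;",
      "interval": 3600,
      "description": "Check screen lock settings"
    }
  }
}
""")

# Print each scheduled query and the SQL it runs against the host,
# so you can see exactly what data would be gathered and how often.
for name, spec in sample["queries"].items():
    print(f"{name}: {spec['query']} (every {spec['interval']}s)")
```

Pointing the same loop at the real URL (fetched with your HTTP client of choice) would show the actual queries, assuming the file follows a similar layout.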
It is based on osquery and we are happy to share any information including our third party security validation of the agent with prospects/customers.
Source: I am the Drata CISO
You're completely right re Drata as a company (we use a different compliance vendor, but very similar setup re the agent).
You're a bit off on whether this would fail a SOC2 audit, thankfully. As the OP said, they don't have access to production systems, which basically means you can treat that person however you want from a SOC2 (and ISO, and most other control framework) perspective. The company the OP is working for can state "We do not require these controls on contractors without production access" and that is totally fine for SOC2. Pushing back on the agent requirement is totally reasonable!
That depends on how they wrote their policies. If they were careful, they left themselves room in their policies to be flexible about people who don't have access to prod. If they weren't --- and lots of teams aren't --- then it's tricky to go back and say "oops I got that part of the policy wrong, the new policy says we can do whatever we want in this case". Again: the real thing SOC2 is assessing is consistent enforcement and monitoring. It's not a "security audit".
Do you think? I wasn't sure because although he doesn't have access to production systems a lot of controls are around access to the code, e.g. Github.
But quite possibly you are right.
I've been through SOC2 (sat in with auditors and walked them through pretty much all of our stuff around source code and testing and building things). SOC2 is very much a "do you have policies for x, y and z" and "are you actually implementing those policies", with a VERY HEAVY emphasis on "are you doing what you say you'll do". There's nothing that says "You must monitor any place your source code could exist", but there's plenty that says "You must have a policy for change management" and stuff like that. And you'll get dinged hard if you have a policy that says "We monitor every device that has our source code on it" and then turn around and have contractors you don't monitor.
That said, it's also completely trivial (on the auditor side) for them to say "Oh, we're changing this policy to 'We monitor devices with production access'". Good luck pushing for that to happen as a contractor, though...
My understanding is that it's not completely trivial to make these kinds of policy changes once you get past your Type 1. This would be a nitpick except that it implies something important about how you should handle SOC2: don't be ambitious or expansive in your Type 1 audit, and leave yourself room to see what's going to work long term. This is something I've seen a lot of people mess up.
You in fact have no idea how "innocuous" the Drata agent is. You know only what Drata tells you.
It would be grossly irresponsible for a contractor to rely on one client's assurances about what a third party told them its software did, and expose other clients' data to errors or abuse by that third party. Or their own personal data, for that matter.
Next client! This is potentially an indicator of a bad customer or management.
My hardware = my software (not yours), end of discussion. Don't like it? Have fun finding someone better who will put up with your nonsense.
Now if they provide a laptop with corp network access etc that is different.
I'm a professional similar to yourself. 15 years as a consultant and freelancer.
I agree: a company asking developers to install security monitoring agents like this should also offer company laptops. Same with mobile phones, actually, for remote wipe profiles and location tracking and things.
If they don't offer company hardware I don't think they can rightfully demand agents be installed. But if they do, and if you decline to use it then you have to accept agents on your own machine.
Not sure what OPs situation is - but I'd think it very reasonable to go back and say "if you want to install this you have to provide me a laptop"
Maybe I'm very naive here but does SOC2 explicitly require monitoring? Can't a contractor simply sign a form that says all relevant rules are followed on their end and thus if that's not the case, the company is off the hook?
If active monitoring is really required, then the only solution I see is one device per customer, and thus as a freelancer I'd have to request said device be provided by the customer.
SOC2 auditors require evidence for controls being followed - repeated evidence over a long period of time.
What's evidence? I can't remember the exact details but imagine something like ... a screenshot of the security page of the Settings app on macOS, taken by every employee, on every laptop, once a week.
Or install the Drata Agent on all company laptops.
Perhaps a form would be enough evidence that you wouldn't have to repeatedly collect it... but I doubt it. Because otherwise couldn't ALL employees just sign a form and that would be that? Auditors know that people don't actually follow the rules carefully even if they claim they are going to.
In principle, a company’s security policy could be to enforce security settings on employee computers with an agent like Drata, and contractually guarantee those same settings for contractors. The evidence during the audit would be the Drata report for employee computers and the signed contracts for the contractors.
The underlying problem is that they are asking you to trust Drata implicitly, sight unseen, and give them unfettered access to your laptop. There is nothing preventing Drata from changing their minds and altering how their agent works or what it collects.
Then there's the unintentional aspect. There is, of course, no guarantee their agent is bug-free. Data leaks and compromises happen all the time, to every kind of company (large, small, respected, hated, etc.).
This is really a huge risk IMO. If anything, it's being downplayed and far from "nonsense".
Grown-up companies doing SOC2 usually provide developers with hardware, and in that case the company is installing an agent from a vendor they have selected, onto their own computers.
True, the grey area is slightly odd situations like this where the protagonist is a freelancer rather than an employee and presumably also the company isn't willing to provide hardware to them in that case?? Because they're a freelancer?
Sounds like a case that isn't going to survive too long in any company that's getting serious about compliance and risks.
> Do you have any evidence for this?? I've just been involved in selecting Drata as a vendor for SOC2 compliance planning for our company. If this is true it's a huge deal and totally against my understanding of their business model. It honestly sounds like bullshit to me! But if you have evidence that they do this, please let us know.
Unless their agent is Free Software, the reasonable end-user assumption is that they are doing malicious things. As we've seen time and time again, it's inevitable - either now, or in the future, they will come up with some bright idea for more features that involve being directly user-hostile. Your company may rely on their legal contract assurances, but I as an individual cannot.
My minimum policy for running sketchy binary blobs is in a VM on another machine. If you're adopting this type of software, it's incumbent upon you to make your corporate policy one of supplying computing equipment for everyone who is expected to run it. And also accept that employees will physically damage microphones and webcams to stop your supplied machines from acting as surveillance bugs of their personal dwellings.
This is just a way of saying that every mid-to-large-sized company in the world is doing malicious things, because all of them depend on closed-source agent software of one kind or another. And you might be right about that! Certainly, the industry has not taken the threat of agent-based management tools seriously enough.
But what the hell is your point? This is about as practical an argument as "the only reasonable software for your company to run is free software". Even if it were true, it's so far outside of industry norms that you might as well be asking them to ship all their products on BeOS.
My comment was not about what is reasonable for companies to run. In fact, I implied that it is reasonable for companies to run proprietary agents, because they have contracts and legal remedy through the courts if the products turn out to be malicious spyware.
My judgement is confined to what is reasonable for individuals to be asked to run in their own operating environments. Individuals lack recourse through the courts, barring some watershed law like a US GDPR with a provision for adequate liquidated damages.
I also pointed out a straightforward path if a company still wants their contractors/employees to run unaccountable proprietary crapware on the devices used to do work - purchase dedicated devices for contractors/employees to use for work, rather than expecting to get exclusive software installed on shared equipment. Coupled with an appropriate home network set up, this provides complete compartmentalization, assuming any sensors can be disabled.
(deliberate employee-hostile software, eg employee activity trackers, is out of the scope of this comment)
> The Drata agent is a pretty innocuous thing.
Can anyone even verify that? Is it open source?
Last time I allowed one of these "agents" into my computer, I discovered it was running in kernel mode and intercepting every single network connection.
> Because otherwise they would fail SOC2 and managing your legal status as "Freelancer" vs "Employee" (for tax reasons??) is not worth not being certified.
It depends on the size of the company this person is working with. If it is large enough to be procuring Drata, the people asking this dude to install the software are probably in a vastly different part of the company org structure than the teams our friend is working with. In addition, the "Drata team" might not have considered their policy for contractors yet - in fact the "Drata team" probably wouldn't even be the ones to draft a policy for that. It could be their legal team.
The solution really depends on how much of a fuss the poster wants to make of it. Personally I would be working with my clients inside that org to figure out a way to make their IT & legal department happy. If the poster has a good open relationship with their client, they'll find some solution. Perhaps the client loans the poster a dev workstation or something (which I'm sure can be structured in a way that doesn't fail the "contractor vs employee" tests).
I'm going to piggyback on your comment because it's one of the more reasonable and informed takes here.
I'm currently in the middle of our company's first evaluation window for SOC2 Type 2.
I'm not familiar with Drata, but at a surface-level, it sounds pretty similar to Vanta, who we use.
OP says "The motivation is that my client badly want a SOC 2 certification", which sounds about right. If anyone isn't familiar with SOC2, it's generally not something that you start out wanting or caring about, but eventually you get some big potential customers and they tell your sales people that they can't sign the contract unless you can show them your SOC2 attestation. Then you scramble to figure out what it is and what you have to do to get it. Depending on how your business works, it's the kind of thing that can very quickly go from "we don't know what that is" to "the future of our company depends on this". If you're a small team without experience in the area, it's a total mess and hard to understand exactly what's required. Signing up with someone like Vanta or Drata walks you through the process and provides a bunch of tools to tick boxes and get you through an audit with a minimal amount of manual work and ambiguity (though there will still be a lot of that).
SOC2 tends to be very vague on actual technical controls and is more focused on the documentation of whatever controls you have set for yourself and gotten your auditor to agree are reasonable. You're going to have a very hard time getting most auditors to agree to something less strict than "devices that are used to access production systems or could otherwise touch sensitive data must have encrypted drives, be password protected, and be kept up to date with security patches". If you have a traditional IT setup and provide hardware to all of your employees, you can probably generate some documentation showing how you enforce that policy. It's trickier in remote setups, BYOD environments, or with contractors/freelancers. Your two options are basically to have people install an agent like Drata's or Vanta's, or to require them to upload screenshots of all the relevant settings on their devices somewhere on a regular basis (typically monthly), and then have an admin check them and sign off. That second approach isn't hard, but it tends to be very labour intensive and annoying, as well as very easy for someone to forget, producing gaps/exceptions that then have to be explained to your auditor.
In our case, we're a fully remote company and fully BYOD (you get a hardware stipend but we don't really have an IT department, so we're not in a good position to manage people's devices for them; we do strongly encourage people to use separate devices for work and personal). We completely understand the reticence towards installing a 3rd party agent on their own machines, so we give our employees the option of using Vanta's agent or doing the monthly screenshot thing. Boy is the screenshot thing a pain in my butt and don't I wish everyone would just use the agent. In the future, it might push us to change our BYOD policy and instead supply managed devices (but I'm not crazy about that approach either, for other reasons).
One piece of feedback we've given Vanta at every opportunity (and I assume would apply to Drata as well) is that we'd have a much easier time getting adoption of the agent among our developers if they'd make it open source, so anyone with privacy concerns could audit it themselves. So far we haven't gotten any indications that they're moving in that direction. FWIW, reverse engineering and spying on the Vanta agent with eBPF and other tools to try to catch it doing something it shouldn't has become a bit of a side hobby of mine (it's mostly a wrapper around osquery and I've been able to log all the queries that it makes and not yet found anything nefarious, but absence of proof isn't proof of absence).
IMO, it's completely reasonable for the OP, as a freelance contractor, to refuse to install an agent on their personal machine and instead provide screenshots/etc as evidence. They say "Just for the record: I don't have credentials to production systems, and I don't work with production data.". If that's really true, then that should be fine. We have freelancers who do certain things for us (eg, market research), and if they don't have access to production systems/data, it's very straightforward for us to classify them that way and exclude them from the various secure development controls. Though they may not fully understand the scope of "production systems", which could include things like Github repos which contain code that gets deployed to production (SOC2 auditors want to see that the whole development lifecycle is secure so a compromised developer laptop couldn't be used to push out a backdoor without leaving a very obvious trail).
This is a great comment. But I'm going to push back on your last paragraph, because it is not completely reasonable for a contractor to say "I'll supply screenshots instead of running this agent". Screenshots work for your team because you set up and documented a process for managing them, and then taught your auditors about it. This contractor's client might not --- probably didn't! --- do that work. It may be logistically tricky for them to do so after the fact if they're already doing consistency audits; also, regardless of where they're at, it might not be worth building and documenting and teaching a whole new screenshot collection policy just to placate a contractor (it will doubtlessly cost more for them to do that than to simply supply the contractor with a company laptop for the duration of their project).
For what it's worth: a nit I like to pick with Vanta is that it sets a very ambitious bar for what a company should be doing with respect to IT security, where SOC2 does no such thing. I worry that things like Vanta lead teams into doing all sorts of stuff that might not be a fit, and certainly isn't required to pass a Big 4 SOC2 audit. What was your experience there?
(I ask because SOC2 is sort of looming over us, though obviously it's not something we're jumping to do preemptively).
> Screenshots work for your team because you set up and documented a process for managing them, and then taught your auditors about it.
Not really. We had no process before talking to the auditors. We told them that we didn't want to require employees to install the Vanta agent (for reasons mentioned) and asked them what they recommended. They said that screenshots would be fine. On a lot of these SOC2 things, I think people should just talk to their auditors early in the process and get a sense of what they are looking for and care about. There are some standards, but all of them are going to have a slightly different focus and the ones we've worked with have all been pretty reasonable about understanding the particulars of our company and what exceptions make sense for us.
> For what it's worth: a nit I like to pick with Vanta is that it sets a very ambitious bar for what a company should be doing with respect to IT security, where SOC2 does no such thing. I worry that things like Vanta lead teams into doing all sorts of stuff that might not be a fit, and certainly isn't required to pass a Big 4 SOC2 audit. What was your experience there?
I can't say it's really been a problem for us. Vanta and our auditors have both been pretty clear that it's not 100% necessary to have all tests passing in Vanta in order to get our SOC2 (again, it's helpful to just talk to your auditors). We run entirely in the cloud (no physical offices or data centers), and honestly, some minimal GCP/AWS best practices and modern deployment approaches (protected branches, code reviews, standardized CI/CD) mean you're already passing about 90% of Vanta's tests on those things. We had to do a few silly things, like changing our resource labelling conventions to match Vanta's, but otherwise nothing felt terribly burdensome or like security overkill.
Terrific follow up, thanks.
I agree with your final paragraph 100%.
And good luck with your evaluation!
Right, I set up SOC2 compliance processes for a small startup, and we didn't have the money to buy all those fancy automation tools. We managed with recurring JIRA tickets and screenshots taken by personnel.
I think the only service we had to pay for was security awareness training: a site that provided security awareness videos.
This is what a lot of companies do for SOC2. There's a cottage industry of consultants and product vendors selling companies on the idea that SOC2 is difficult and needs bespoke automation, but plenty of companies get by with just Jira. For that matter: you probably didn't need to spend money on security awareness videos.
Yeah, I bought KnowBe4 because we also needed it for PCI compliance, and it made things easier for the IT/Sec team.
But you are right. All those certification companies are a mafia.
> The poster says "Their business model (in my case) seems to be to take money from companies to spy on their employees/contractors, and then they sell the employees/contractors private information to 'targeted advertising'".

> Do you have any evidence for this?? I've just been involved in selecting Drata as a vendor for SOC2 compliance planning for our company. If this is true it's a huge deal and totally against my understanding of their business model. It honestly sounds like bullshit to me! But if you have evidence that they do this, please let us know.
Do you have any evidence that they don't? If so, can you describe a bit of your vetting process and how you reached that conclusion? It would go a long way toward alleviating people's fears and clearing up the nonsense.
That’s not how burden of proof works.
OP made a claim that Drata collects private employee info and resells it. That’s a large claim.
It’s on OP to justify that position, not on the parent to rebut the opposite.
> OP made a claim that Drata collects private employee info and resells it. That’s a large claim.
What I know is:
1) They collect mandatory private information. They already know my name and my email, and they used that information to ask me to complete some tasks on their website. I don't know what those tasks are, because I first have to accept their TOS, which contains clauses that are referred to but not disclosed. I declined.
2) Their "webpage" is part of their service. This is where I would supply personal information (presumably more than they already have). So unless they present another TOS after I accept the publicly available one, those are the rules: I have to give them personal data, because my customer (not my employer; Drata has no agreement with my employer) signed a contract with them.
3) They give themselves the right, in the website's TOS, to sell my data.
So: 1 + 2 + 3 = Drata collects private employee info and resells it.
IIUC, "Burden of proof" (as a concept) is meaningful in fora where there are agreed-upon standards of evidence. E.g., a courtroom or a formal debate.
My impression is that HN conversations are less regulated. Someone can make a claim, and each reader can decide how much merit it deserves, and if/how to discuss it further.
It is a large claim. But I'd say there's a bit of burden on both sides, since the parent called it out as nonsense: they're implying they know for a fact that Drata doesn't breach privacy/integrity.
Burden of proof doesn't really matter. If this company wants people to trust its software "agents", they should be making every effort to convince them. Free software goes a long way.