I wanted to add more rules, like the ones mentioned in the article, but never had the time to do it. But not only does it have no tight coupling with GitHub, it also doesn't require you to use git: you can use whatever version control you want, and at worst you don't get support for "detect dirty repository". Similarly, git tags are fundamentally unreliable, as you can always "move" one to point at any arbitrary commit. So IMHO the problem here is relying on code you didn't get from GitHub (and which might not even use git) to match an arbitrary tag on something on GitHub which might not even be from the same author.

But uploads to crates.io are a different matter.

PragmaticPulp 4 days ago parent next [—]

Most cargo deps aren't specified as Git deps, so there's no repository or commit hash to check. You can publish from a local folder. Checking the hash of the crate still verifies that the code hasn't changed; I just want to clarify that there isn't some mechanism by which non-git deps are automatically verified back to the VCS repo commit hash they came from.
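To make "checking the hash" concrete: the registry index records a SHA-256 checksum (`cksum`) for each published .crate tarball, and anyone can recompute it locally. A minimal sketch, assuming the `sha2` crate; the file name and expected value here are placeholders you would take from your own download and from the index entry:

```rust
use sha2::{Digest, Sha256};
use std::fs;

fn main() -> std::io::Result<()> {
    // Placeholder path: a .crate tarball as downloaded by cargo.
    let bytes = fs::read("serde-1.0.0.crate")?;

    // Recompute the SHA-256 digest of the exact bytes cargo would unpack.
    let digest = Sha256::digest(&bytes);
    let actual: String = digest.iter().map(|b| format!("{b:02x}")).collect();

    // Placeholder value: the `cksum` field from the registry index entry.
    let expected = "…";

    // A match proves the tarball is byte-identical to what was published;
    // it says nothing about which git commit (if any) it came from.
    println!("checksum ok: {}", actual == expected);
    Ok(())
}
```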

It may be possible to publish from a local folder and offer the user no history or context. However, I think it makes sense to privilege packages which can be cryptographically verified as deriving from a specific commit within an open source commit history, in a public repository at a specific URL.

IMHO the idea of anchoring trust in source code hosted anywhere outside of crates.io is flawed. Even if you make it technically work, there are endless ways this could be abused in a social-engineering context.

Even if you check, when uploading, that the code is the same, how do you check that it stays the same? How do you check it's the same for every way the site can be accessed? How do you make sure there is no other social engineering in place (misleading tags, etc.)? You can't, reliably, in a good way. Also, don't forget that the client which uploads things can't be trusted. So you would need to push a full (maybe sparse) git branch to the server instead of just uploading a subset of the current checkout, and that's pretty much a no-go for various reasons.

Though it's not that there are no ways to reach something similar. How these "labels" would then interact with the UI, and whether they are even stored on crates.io at all, are open questions. I'd almost say that publishing from a non-public repository, or from no repository at all, is a misfeature.

At least as long as there is no good way to rapidly audit the source of crates on crates.io. Most people will go looking for a public git repo, but there's often no quick way to be certain that it's the same code that was published on crates.io. It's less about a non-public repository and more about a non-git repository.

I hope you don't mean "using the code preview of the hoster" when you wrote "rapid audit". My implication with "rapid audit" was that crates.io itself would let you browse the exact source that was published. Of course the best thing to do would be to look locally at the source of the crate that was actually downloaded by cargo. That's what I do. But you have to recognise that most people are going to weigh convenience quite highly, and are going to be lazy when they should be thorough, and so on. So if you want to improve the security of, and trust in, the ecosystem as a whole, you should try to make the safe path as low-friction as possible.
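For reference, "look locally" is cheap because cargo keeps the unpacked sources of every crate it has downloaded in its registry cache. A small sketch that lists them; the paths are the conventional defaults, not guaranteed on every setup:

```rust
use std::{env, fs, path::PathBuf};

fn main() -> std::io::Result<()> {
    // CARGO_HOME defaults to ~/.cargo; the sources rustc actually compiles
    // are unpacked under registry/src/<registry-mirror>/<crate>-<version>/.
    let cargo_home = env::var("CARGO_HOME").unwrap_or_else(|_| {
        format!("{}/.cargo", env::var("HOME").expect("HOME not set"))
    });

    let src_root = PathBuf::from(cargo_home).join("registry").join("src");
    for mirror in fs::read_dir(src_root)? {
        for krate in fs::read_dir(mirror?.path())? {
            // Each entry is a full, auditable source tree, e.g. serde-1.0.188.
            println!("{}", krate?.path().display());
        }
    }
    Ok(())
}
```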

The docs.rs source view helps with that. Right now, though, I think there's no way to see the source of build.rs there. Why isn't that the default?

Diggsey 4 days ago root parent next [—]

Say the latest release of foo is 1.0.0. I invoke `cargo build`, and foo 1.0.0 is recorded in the lockfile. Now foo releases 1.0.1. I invoke `cargo build`. Cargo uses the lockfile, and nothing changes. I still build with 1.0.0. Now, let's say I go in and change my Cargo.toml. This is where the behavior differs: `cargo build` will say "oh, you've changed your Cargo.toml" and re-resolve the dependencies affected by that change.

It doesn't imply that Cargo ignores the lockfile by default, only that it will update your Cargo.lock when you change your Cargo.toml. Sure, though in this example, with the versions I listed as released, there is nothing later, so it's the same in this case.

Treating PGP-signed commits as privileged, and only pointing at them as opposed to mutable tags, seems like it would help. I'm not sure it's actually possible, for varying reasons. Even if we ignore that cargo can be used with other version control systems, I see some problems. For one, validation must be done server-side, as everything the client does can be manipulated.

And checked by whom: the server, or the reviewer manually? There are things you maybe could do, and you could also include some version id in what gets published. But really, the most important thing is to review the code you actually use.

In the abstract, what I'm wishing for is definitely possible, at least for some subset of packages.

Diggsey 4 days ago prev next [—]

There's going to be a risk to running someone else's code.

There are two factors here: (1) do I trust the code I think I'm running, and (2) am I actually running the code I think I'm running? With (1) there's not really any way around it: someone or something has to review the code in some way. Even the suggestion to have a larger standard library doesn't really address it: with a larger standard library, the Rust project needs more maintainers, and it might just get easier to get vulnerabilities into the standard library.

Someone could build a tool that automatically scans crates uploaded to crates.io. It could look for suspicious code patterns, or could simply figure out what side-effects a crate might have, based on what standard library functions it calls, and then provide that information to you. For example, if I'm looking for a SHA crate and I notice that the crate uses the filesystem, then I might be suspicious. With (2) there are some easier options, such as making it easier to download or browse the contents of a crate directly from crates.io.
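A toy version of that side-effect scan is easy to sketch. This one assumes the `syn` crate (with its `full` and `visit` features enabled), walks a single source file, and flags paths into the obviously effectful corners of the standard library; a real tool would have to resolve `use` renames, scan every file, handle macros, and much more:

```rust
use syn::visit::Visit;

// Stdlib modules whose use would be surprising in, say, a pure hashing crate.
const SUSPICIOUS: &[&str] = &["fs", "net", "process", "env"];

struct Scanner {
    findings: Vec<String>,
}

impl<'ast> Visit<'ast> for Scanner {
    fn visit_path(&mut self, path: &'ast syn::Path) {
        let segs: Vec<String> =
            path.segments.iter().map(|s| s.ident.to_string()).collect();
        // Matches paths like std::fs::read or std::process::Command.
        if segs.len() >= 2 && segs[0] == "std" && SUSPICIOUS.contains(&segs[1].as_str()) {
            self.findings.push(segs.join("::"));
        }
        // Keep walking nested paths (e.g. inside generic arguments).
        syn::visit::visit_path(self, path);
    }
}

fn main() {
    let src = std::fs::read_to_string("src/lib.rs").expect("read source");
    let file = syn::parse_file(&src).expect("parse source");
    let mut scanner = Scanner { findings: Vec::new() };
    scanner.visit_file(&file);
    for finding in scanner.findings {
        println!("uses {finding}");
    }
}
```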

For initially installing the crate, the number of downloads is a pretty good indicator of "is this really the crate I meant?". As an example, the proof-of-concept proc macro attack from the article could be addressed by running proc macros in a wasm sandbox. Similarly, all of the attacks that execute code at compile time are mostly addressed today by building code in a Docker container. Sure, you can, and we absolutely should, sandbox the build process itself.
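To see why the sandbox matters: a proc macro is just an ordinary Rust function that the compiler loads and runs at expansion time, with the full privileges of the build. A deliberately harmless illustration (the crate name and the environment-variable snooping are made up for the example):

```rust
// In a crate with `proc-macro = true` in its [lib] section.
use proc_macro::TokenStream;

#[proc_macro]
pub fn innocent_helper(_input: TokenStream) -> TokenStream {
    // This body runs on the machine of whoever *compiles* a dependent crate,
    // not whoever runs the final binary. Nothing stops it from reading files,
    // talking to the network, or (as the article shows) stealing tokens.
    let user = std::env::var("USER").unwrap_or_default();
    eprintln!("expanded while building as user: {user}");

    // Emit some innocuous code so the macro looks useful.
    "fn generated_by_macro() {}".parse().unwrap()
}
```

Sandboxing proposals like the watt project address exactly this: the macro is compiled to WebAssembly and run in an interpreter that can see nothing but token streams.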

I meant "running other people's code outside of a sandbox", and should have specified that. The problem is that at some point you actually want to run the code you compiled, and then proc-macro exploits can still do whatever. I didn't say they were unsolvable, I said that solving them requires someone or something to review the code.

That's the only way you can gain trust that code does what it says it does. I even suggested some possible "meaningful half-measures" that could be implemented. That will stop the compromise at build time, but odds are you then run the code anyway, so injecting the code into the executable is almost as good.

I guess if you are building an application that is always run in a sandbox anyway (like a wasm application that never sees sensitive data), then sandboxing proc macros could be good enough, but I suspect that is a very rare case. My comment specifically called it out as a half-measure. But if `cargo login` is only run on the CI instance responsible for publishing, and developers test their code on their individual dev machines, the attack fails.

Rather, we should be concerned with stopping as many different attacks as we can, and providing users with as many building blocks as possible to let them protect themselves. There will always be vulnerabilities. The sandbox doesn't protect against injection of malicious code.

There's zero backdooring involved anywhere in this article.

His most convincing argument seems to be "if your account as a package maintainer is hijacked then bad things could happen" -- well yeah, thanks for the insight Sherlock. I'd be genuinely excited to read objective and deeper analyses of the Rust ecosystem in which I am looking to invest myself further. I want to know what exactly I am getting involved with so I'd welcome any good criticisms of it.

But not click-baity articles with almost zero substance inside. He's basically repeating old lists of risks of human error.

He describes compile-time backdoors (Rust macros) and backdoors that can be triggered by loading the code into a standard development environment (standard hooks called by code editors on load). Both of these are novel. Name squatting and account compromise are common to several ecosystems, but using a compromised account to backdoor a transitive dependency that automatically executes code when your CI system compiles, allowing it to steal the tokens you use to publish builds to your customers, while you are asleep, is somewhat novel and worrying.
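For readers who haven't seen it, the compile-time hook is trivial to create: cargo runs a crate's build.rs before compiling it. A deliberately defanged sketch of the shape of that attack; this version only checks whether the publish token file is readable, where a real attack would exfiltrate it:

```rust
// build.rs -- executed automatically by `cargo build`, before compilation.
use std::path::PathBuf;

fn main() {
    // `cargo login` stores the crates.io publish token here by default.
    let home = std::env::var("HOME").unwrap_or_default();
    for name in ["credentials", "credentials.toml"] {
        let path = PathBuf::from(&home).join(".cargo").join(name);
        if let Ok(secret) = std::fs::read_to_string(&path) {
            // A malicious build script would send `secret` to a server it
            // controls. Here we only demonstrate that the access is possible.
            eprintln!("could read {} ({} bytes)", path.display(), secret.len());
        }
    }
}
```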

And then, as a bonus, it can compromise your laptop when you start inspecting the rubble to work out what just happened.

Neither of these is novel.

The idea of running untrusted code at build time is not remotely new. The part about being triggered by loading code into a standard dev environment is identical to "my dev environment automatically compiles dependencies for me", and is something that is handled by the dev environment (e.g. VSCode's "trusted workspace" feature) rather than the language. Rust is not responsible for IDE tooling running build scripts and compile-time macros. Yes, you will be running untrusted code, but you downloaded it and all the transitive dependencies, had the opportunity to audit it (or to realize you could be using the prebuilt OS packages instead), and chose to trust those maintainers. I think Python shares the same issue as Rust, in that the standard tooling will by default pull down newer versions of packages, and building them can run arbitrary code.

I don't think Go does, as I don't think the potentially dangerous code-generation comments (`//go:generate`) are run by default? Modern systems use a lockfile to pin the versions, and they only get updated as a result of direct action: either explicitly updating dependencies, or declaring a new dependency that requires a newer version of a transitive dependency.

You can choose to audit the new dependency versions based on the lockfile change, at your discretion, just as with any other update you have to make.

Thank you for the breakdown. I was under the impression that all these techniques were rather well known.

Homebrew also allows for arbitrary script execution by the mere virtue of installing a seemingly unrelated package, but I might have been wrong.

It seems like your options are: carefully auditing all dependencies every time you update (difficult, and maybe impossible, if the dependencies are highly technical or the malicious code is sufficiently subtle or obfuscated); not updating at all (which leaves you vulnerable to all the bugs and other security issues in the version you choose to pin); or not using dependencies at all, by spending months or years totally rewriting the libraries and tools you need, and of course your own code will have bugs too.

Fixing the points addressed in this article helps by making it harder to slip these backdoors in, but will never be foolproof unless every single library has a maintainer with the skills to detect subtle bugs and security issues, who audits every line of code.

Even then, the marketplace for unreported zero-day vulnerabilities means that there are probably undiscovered vulnerabilities somewhere in your dependencies, or in the code for your IDE or OS or Spotify app or mouse driver. Sandboxing your development in a codespace like Gitpod is a big improvement for sure, but even in Gitpod a lot of people import credentials and environment variables that can be stolen.

And what dependencies is Gitpod itself running? I think we have, as an industry, long undervalued the true value proposition of "Linux distributions". They do quite a bit of boring and tedious security auditing: for example, reviewing setuid binaries to the point that they drop from root into user privileges. And they backport security patches, so security updates are binary-compatible drop-in replacements.

When a binary distribution is widely used, the benefit is shared bug fixing and hardening; the disadvantage is somewhat dated libraries. It's a model I understand. There's nothing fatalistic about this. Running a business with potential liabilities is different from having a high school programming hobby. If you're using community distributions of open source software in a security-critical context, you need to do your own due diligence.

If some rando came up to your contractor and offered them free concrete for use in your foundation, and the contractor said yes without any due diligence, you would have every right to sue that contractor out of existence.

The www isn't a wild west anymore. The era where any middle schooler can build a six figure business by serving as the middle man between open source packages and end-users should probably come to a close. And I say that as someone whose middle school software freelancing business cleared lots of revenue by the end of college.

I wonder if this could be a revenue model for OSS. Cyber insurance providers should probably start weighing in on these supply chain issues soon.

Groxx 4 days ago parent prev next [—]

Libraries are, in practice, treated as black boxes. I think that's largely reasonable - that's almost the whole point of leveraging someone else's work.

I think that's completely ridiculous. The one thing I like about the Rust crates ecosystem is that crates have "tags" along the lines of "no_std" and the like.

That's a good idea, but it wasn't taken anywhere near far enough as a policy. For example, did you know that a "Rust" crate can be mostly C under the hood? A lot of them are. Some typosquatter basically wrapped a bunch of libraries like "libpng" and whatnot. Behind the scenes, all of these "Rust crates" are invoking cc and won't even build if you're on certain Rust toolchains!
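The mechanism is mundane: the crate ships the C sources (or expects them on your system) and compiles them from build.rs. A minimal sketch, assuming the widely used `cc` crate as a build-dependency; the file and library names are placeholders:

```rust
// build.rs of a hypothetical *-sys style crate.
fn main() {
    // Compiles bundled C code with the host's C compiler and links it in.
    // Nothing in the package metadata is required to advertise this.
    cc::Build::new()
        .file("vendor/shim.c")
        .compile("shim");

    // Rebuild if the bundled C source changes.
    println!("cargo:rerun-if-changed=vendor/shim.c");
}
```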

There's no indication of this on crates.io. Obviously, the transitive dependencies would then bubble up. So if a crate that says "portable" has a dependency that changes from "portable" to "Linux-specific", then the crate should lose the "portable" tag. As it stands, every crates.io user has to check this for themselves, for every update. Tens of thousands of people doing this work over and over. I'd like to see this done properly, once, centrally, in a controlled and hard-to-spoof way.

Security is not just about memory safety.

Groxx 4 days ago root parent next [—]

And raising awareness of the use of these less-than-palatable practices tends to create constant pressure to only use them when strictly necessary, generally leading things to a broadly safer end state.

A good number of the unsafe uses I've run across boil down to "this coding pattern is easier with a dash of unsafe". Sometimes that's valid! But that's pretty strongly "guilty until proven innocent" in my book. Verifying it on all past and future architectures is far beyond my skill, and I suspect that's true of many of the people writing the libraries that use it, too.
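A typical example of that "dash of unsafe": skipping a bounds check on an index the author believes is already validated. It compiles, it's fast, and its soundness rests entirely on an invariant the compiler never sees. The function below is illustrative, not taken from any particular crate:

```rust
/// Sums every other element. The `unsafe` variant avoids a bounds check on
/// the hot path -- and is only sound because `i < data.len()` is guaranteed
/// by the loop condition. Nothing checks that claim except a human reviewer.
fn sum_every_other(data: &[u64]) -> u64 {
    let mut total = 0;
    let mut i = 0;
    while i < data.len() {
        // SAFETY: `i` is in bounds; the loop condition above ensures it.
        total += unsafe { *data.get_unchecked(i) };
        i += 2;
    }
    total
}

fn main() {
    assert_eq!(sum_every_other(&[1, 2, 3, 4, 5]), 9);
}
```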

This seems like a great idea. It doesn't even need to happen all at once. Starting with limiting what can be done by code run at compile time or editor load time would pull the teeth of the novel attacks in the article. Later, more interesting things like 'no files with unsafe code that haven't been audited' could be tackled.

But there it can be so much better, or so much worse. Worse is Node.js: long chains of transitive dependencies.

I love Rust. But I have always thought that having the compiler download dependencies is a very bad idea. It would be much better if the programmer had to deliberately install the dependencies. Then there would be an incentive to have fewer dependencies. This is currently a shit show, because it is easier to write than to read, to talk than to listen. New generations of programmers refuse to learn the lessons of their forebears and repeat all their mistakes, harder, bigger, faster.

There is also the option of having trusted third parties review code. This is by no means an easy option but it does seem more feasible than everyone auditing every line of code they ever depend on.

You do end up with spicy questions like who do we trust to audit code? Why do we trust them? How are they actually auditing this code?

The big problem is not bugs, not vulnerabilities, but malicious code inserted deliberately into packages published by attackers. One way to detect malicious code is line-by-line code review of published packages, but that's extremely laborious, even when done by third parties. What we really want to do is confirm that the package was the end product of an open source commit history, where commits were reviewed by a set of trusted authors (hey look, third parties!).
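The cryptographic raw material for that already exists: commits can be signed, and the signature covers the commit and, through it, the tree of sources. A sketch of pulling that signature out of a local clone, assuming the `git2` crate; actual verification would still mean checking the signature against a keyring you trust, e.g. via gpg:

```rust
use git2::Repository;

fn main() -> Result<(), git2::Error> {
    // Placeholder path: any local clone of the repository being checked.
    let repo = Repository::open(".")?;
    let head = repo.head()?.peel_to_commit()?.id();

    // Extract the GPG signature and the signed payload from the commit
    // object. Verifying the signature itself is out of scope here: you'd
    // feed both buffers to gpg (or a PGP library) with a trusted keyring.
    let (signature, signed_data) = repo.extract_signature(&head, None)?;
    println!("signature bytes: {}", signature.len());
    println!("signed payload bytes: {}", signed_data.len());
    Ok(())
}
```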

Doing that involves strong validation of publisher identities and cryptographic validation of the package contents, to connect a package to a commit history in a trusted public repository.

Another option is reducing the number of dependencies.

It doesn't cure the problem, but it can cut the vulnerability surface by orders of magnitude.
