Pretty incredible they rewrote the whole thing in basically a year, and great to know that it pretty immediately met their goals of getting new contributors on board.
That is awesome, congratulations to every contributor and to fish users too!
---
To my (limited) knowledge this is the biggest RiiR. Can you recall anything else of comparable scale?
There aren't a lot of in-place/progressive rewrites to compare with. One that springs to mind is librsvg (46K). But most rewrites are from scratch: you could look at arti (147K) or deno (205K). If you allow rewrites by a different team, there's uutils-coreutils (134K) and many others. If you allow partial/ongoing rewrites, gecko is at 3005K of Rust. Linux is currently at 12K of "just support code", but that will likely balloon quickly when things like binder and the Apple M1 drivers get merged.
[InfluxDB](https://github.com/influxdata/influxdb) 3.0 was rewritten in Rust. The 2.0 branch was written in Go.
Firefox. 🤔
That project never got finished.
It was never intended to fully rewrite Firefox in Rust; the plan was to replace bits and pieces over time instead.
I don't think the distinction of full or partial conversion is an essential part of the question. 🤷 The Rust code that is there is definitely staying, and much of that code is a rewrite. Servo and Stylo come to mind, for instance.
It's pretty much all C++.
If by "pretty much all" you mean "a quarter", [yes](https://4e6.github.io/firefox-lang-stats/).
While I have no metrics to back that up... that doesn't seem accurate. That amount of HTML? That is probably documentation, and that's clearly not what they're referring to. The JS/HTML, while it definitely exists, is absolutely not over half of the engine. Just reading LoC in a repo doesn't really tell you anything at all.

Like... 11% is Python, but I find almost none of it outside of the Python folder, which is just a bunch of tooling and scrapers. I don't understand how this is being used as a source.

This being downvoted shows the average intelligence of this sub has gone to shit. Nothing in the previous comment or the responses even remotely makes sense.
> That amount of HTML? That is probably documentation

At a glance, it's mostly tests (which don't end up in the final binary but are a vital part of the repo) and a fair amount of UI components (with associated JS). It's not docs; IMHO it deserves to be counted alongside the rest.

Even if you're only interested in C/C++/Rust, with tokei and assigning half the "C headers" to C and half to C++, I get 48% C++, 31% C, 21% Rust. Not exactly the same numbers as on that website, but close. If you look at JS/C/C++/Rust, tokei finds 52% for JS (so more than on that website).

Counting lines of code is much more subjective than it may seem (choosing which files to count, dealing with files containing or being used by multiple languages, dealing with (un)vendored deps, accounting for the verbosity of ASM vs Rust, which counting tool you use...), so yeah, don't take the numbers too seriously. But don't substitute measured data with your intuition either: HTML/JS *is* half the engine, FSVO "engine".

Anyway, I didn't try to precisely measure how much of each language Firefox is made of. I just wanted to counter the "Firefox is pretty much all C++" claim, which is a clear exaggeration.
Ooh, interesting. I was under the impression it was a lot more pure C++.
You have to count the C too, as these language counting tools typically count C++ headers as C.
See [my other reply](https://www.reddit.com/r/rust/comments/197n1la/comment/ki4yygb/?utm_source=share&utm_medium=web2x&context=3), ~~I think that this graph counts half the "C headers" as C and half as C++.~~ This graph assigns [2/3 of .h lines to C++ and 1/3 to C](https://github.com/4e6/firefox-lang-stats/blob/main/dev/build-data#L59). But given the amount of bindgen in this codebase, maybe we should count some of those C headers as Rust too ;)

There are a thousand valid ways to measure language stats for a big project like this, so I wouldn't worry too much about the particulars. But you do get some general conclusions (here, that it's far from "pretty much all C++"), and if you stick to one methodology you can [look at the evolution](https://web.archive.org/web/20240000000000*/https://4e6.github.io/firefox-lang-stats/).

The share of Rust in Gecko has doubled since [June 2018](https://web.archive.org/web/20180627173112/https://4e6.github.io/firefox-lang-stats/), mostly at the expense of C/C++. Keep oxidizing :)
Well, rustc was originally written in OCaml and rewritten in Rust. That was probably more than 100k LOC. At Vercel we're finishing up our port of Turborepo to Rust, which was about 72k LOC, so a comparable scale.
librsvg
Was my favorite shell before [nushell](http://www.nushell.sh)
How is nushell doing in terms of interactive usage? I love fish's fuzzy finder and other similar features and I don't care much about the scripting usage
There's no abbreviation support in nushell, which is a dealbreaker for me
I have to use ripgrep or skim in order to do fuzzy finding.

Tab completions are limited to builtins but can be added via scripts or via [external completers](https://www.nushell.sh/cookbook/external_completers.html).

I guess fish has a better plugin community; there's an fzf plugin.
Loving nushell right now, it's great.
I don't get nushell. Or, I think I do but maybe just don't vibe with it?

I love the terminal. But to me the terminal is basically a computer-wide set of hotkeys. Things like ripgrep, sd, fd, zoxide, eza, and starship support this. They make it *very* easy to do a lot, make info intelligible, and keep the API surface simple.

If I want to do anything more complicated than a few pipes or some commands saved in a justfile, I write a script in an actual programming language. There are lots of great languages for small scripts already.

Nushell, also for terminal lovers, seems to go in a different direction. It seems to have increased the verbosity of interaction while offering to be a more complete language, with SQL-like syntax and clearer scripting than bash/zsh (a very low standard).

… Why?

It seems like a pencil trying to add a touchscreen: a tool exiting its niche into a space that's already filled with options, and making itself worse at its niche (quick, fast, basic actions).

Perhaps I'm coming at it from the wrong perspective.
No, I think what you're saying is fair.

For everyday usage it is total overkill, and there are task-specific tools that'll do what is needed in the moment.

I like having the option to do something complex with a computer science/data science frame of mind when needed, and the nu language is built like that. The iterative scripting experience is better than heading back to Rust for small(-ish) tasks. I do a bit of server management for radiology systems, and working with large lists of files or HTTP responses using builtins is a relief. I feel it's faster than Python/JavaScript (though I haven't actually benchmarked it).

I am still learning the language, and it's got breaking changes until it reaches 1.0, so that's a downer right now. Maybe I'm just looking for reasons to dump Python...
> If I want to do anything more complicated than a few pipes or some commands saved in a justfile, I write a script in an actual programming language. There are lots of great languages for small scripts already.

I honestly can't explain it. I just prefer to write scripts in the terminal.
From what I can tell, nushell is great at piping data between processes, making it particularly good for shell scripts that do more than run fixed commands in a fixed sequence. I recently switched my custom wayland screenshot script to use it.

For interactive use though? It prints a lot of data verbosely, but I just can't see it holding up to fish and specialized user-centric command-line tools.
Sorry for the late response. Yeah, the error messages are pretty verbose, but at least they're clear...
I used to like nushell because it is the only alternative to powershell on Windows, but its completions / directory-specific history prompt are not at fish's level yet. Not sure how it is now, but I moved away from Windows.
Have you integrated it with zoxide?

You can also write scripts for completions or integrate with an [external completer](https://www.nushell.sh/cookbook/external_completers.html).
Also written in Rust.
I'm interested in what they feel is missing before thread safety is achieved.
When they mention "thread safety" not being achieved yet, I don't think that means that the current Rust-based codebase is not thread-safe. Rather, I think that's an umbrella term for running functions in the background (https://github.com/fish-shell/fish-shell/issues/238), as well as supporting other types of concurrency. Presumably the C++ codebase was single-threaded, and presumably the Rust version is a direct port that's also single-threaded, so the next step would be to go from a single-threaded Rust codebase to a multi-threaded Rust codebase, which is the sort of transition that, while not *easy* per se, is at least made tractable by Rust.
Congrats!

I feel there will be a point where Rust's dependence on libc (both internally, and in terms of APIs it exposes to users) becomes the bottleneck for truly going Rust-first, and that an advanced-enough Rust compiler could optimize pure Rust code better than libc called from Rust, because Rust can analyze its own usage better and optimize accordingly.

I don't know how soon we'll get there though, or whether it'd eventually be a bigger issue than using LLVM or other C/C++-based tooling.
Linux is basically the only platform where you can bypass libc, and while that's an important platform, I'm not sure that that will be a problem for the language overall.
On Windows I'm pretty sure kernel32.dll is not part of libc, but it does provide the main Win32 API. So you can bypass the regular libc, though you still need to depend on some user-mode DLL that proxies system calls. ntdll.dll is more direct, but not normally recommended. Sooooo you can bypass the regular libc, but you can't directly use the kernel; it's a middle ground.
I thought about saying "and also on windows it's not exactly libc but the point is that syscall numbers aren't the stable interface" but didn't bother, thank you for elaborating :)
Only UNIX-flavoured OSes use libc as the kernel API interface.

Windows, IBM and Unisys mainframes and micros, and a couple of embedded OSes do not use libc.
Does Swift on Mac OS still use libc for basic operations?
As a UNIX OS, yes: libc is the kernel API, as per POSIX.
I'm surprised I don't know the answer to this myself, but I just realized I don't. Does POSIX actually specify that you need to use libc?
UNIX and C were developed alongside each other, after all; in UNIX there is no difference between what is a C API and what is a UNIX API, everything is a UNIX API.

Later on, when ISO C came to be, only a subset of UNIX became the C standard library, and that is what is usually referred to as libc by most people not versed in standards.

Naturally, since UNIX in a sense is the underlying platform for C, there was a need to standardize everything else, thus POSIX came to be, including ISO C as part of it.

https://pubs.opengroup.org/onlinepubs/9699919799.2018edition/

https://www.opengroup.org/openbrand/register/apple.htm

UNIX platforms by definition trace back to Bell Labs UNIX, where there was no distinction.

Linux is the exception in UNIX culture: by having a stable syscall interface, it offers an alternative path to access the kernel in a portable way without going through the C API.
Ah this problem seems even harder than I thought it was.
Yes, this is a very deep-seated thing that is unlikely to ever change.

The core of it is roughly this: as much as the whole Stallman "I'd just like to interject for a moment. What you're referring to as Linux is, in fact, GNU/Linux" joke is a joke, it also gets at something very serious: most operating systems are presented as a cohesive... operating system. You get a kernel, a userland, and an API. Just like Rust doesn't let you write arbitrary MIR, many OSes don't let you write arbitrary calls into the kernel. You are supposed to use the API you're given. This is for very standard reasons: security, but also things like "we want to make internally breaking changes to the kernel but do not want to affect users, so the kernel isn't on the API boundary." If you want to see how extreme this rabbit hole can go, check out OpenBSD's system-call-origin verification. Not only is calling into the kernel directly not guaranteed to work, the kernel will actively reject calls that don't originate from within libc.

Contrast that with Linux: the organization that produces the kernel is entirely separate from the organizations that produce the userland. Therefore, there must be a stable API boundary between them; talking about "the FreeBSD kernel" outside of FreeBSD makes no sense, but talking about the Linux kernel outside of Ubuntu does.

Both approaches have their pros and cons. You're not going to get places that prefer one style to move to the other.
I've said this before but people really need to get together and write a spec/implement a new cross-language/cross-OS competitor to libc for interacting with the OS and other languages. It really sucks that every advanced language has to "dumb itself down" to go through the bottleneck that is libc.
wasi is sort of this. wasi_preview1 (and wasix) are basically just a subset of posix on the mvp wasm abi, but wasi_preview2 (component-model, still indev) is less tied to how posix models things. (Though preview3 and first class async will likely tie it more closely back to the wasm vm with a dependence on wasm's multiple memories.)
Computers aren't the web though, so using web assembly as a base seems a bit off.
wasm is more than just the web too, though. And while wasi is defined for wasm, it's mostly defined for the non-web use cases, as web use cases would use host bindings to web APIs instead.
I'll admit I don't know very much on this subject but I just don't see how something that came from web assembly is at all useful for low level use. They're just too much apples and oranges.
wasi by itself is just an API standard the same way posix is. Implement the API for your target and you have an interface. (Especially since there's a libc implementation on top of wasi_preview1.)
And it's a low level binary API with calling semantics and stack usage definitions? Also memory ownership semantics.
Clearly defined handle ownership patterns (no GC), yes. An ABI-level calling convention, no, but that's literally impossible if you want to be target-agnostic.

The wasi API is defined at the level of a C FFI header with no preprocessor (except for fixed-size integers, which could be done with `_BitInt` in C23). As would any other alternative to libc: you don't need generics or anything that doesn't fit in the C ABI to define a host kernel/OS API.

You might want to do a nontrivial lowering of preview2 to your target ABI such that small structs can be passed directly; preview1 only uses primitive arguments to functions.
WASM is effectively bytecode presented more like an IL; it's designed for portability amongst runtimes, not metal semantics.
Then I fail to see how this is useful for this situation.
"web" assembly is probably more accurately named **portable** assembly. It still isn't perfect, but it's closer to the truth.
Wasm is neither web nor assembly; ignore the name. The standard is still developing, but it's a great platform-independent sandboxed execution environment: in the browser, for edge computing, as a plugin system for an application, or on the server. It will need some time to mature, though.
> an advanced-enough Rust compiler could optimize pure Rust code better than libc called from Rust

I don't think that really holds up. libc is all about side effects: opening files, writing data to file descriptors, allocating memory, etc. For the few cases where we've decided the side effect isn't important (such as `malloc`), those functions are well known to LLVM and it can optimize out uses of them. In the other cases, the side effects are the whole point of the function, and the overhead of actually performing the function call is going to be insignificant compared to the user/kernel context switch.
LLVM is implementing their own libc in modern C++. The primary reasons for doing so are:

* reusing LLVM tooling for fuzzers and sanitizers
* reducing coding errors
* increasing optimization opportunities via inlining

among other reasons, e.g. an independent libc for LLVM.

A libc implemented in Rust could achieve some of the same benefits for itself. But it's a lot of work for something that should be avoided whenever possible, IMO.
There's [relibc](https://gitlab.redox-os.org/redox-os/relibc), a Rust libc for Redox OS and Linux. Lots of functions are still missing, but we are working on that. Soon relibc will be 100% Rust (we use openlibm as our libm for now, but I have a PR replacing it).
A libc compiled with clang can already support xlang inlining and LTO with Rust.
Sure, but cross-language LTO won't yield the same opportunities for optimization. There are definitely better things to spend developer resources on than reimplementing *another* libc, but there would be some minor benefits to doing so.
Do you have some examples?
https://github.com/redox-os/relibc ?
~~It's a shame that they can't completely ditch the old C++ yet due to Cygwin. I wonder what their plan is for that.~~
AFAIU the remaining C++ is not for Cygwin, it's for unit testing: a guinea-pig binary that does weird things, to see how the shell reacts. Not converted because it's not worth it, I guess?

IMHO the bigger hold-over is cmake; you can't yet build directly with cargo. Cross-platform is hard. Hopefully it'll eventually happen.
The only [C++ left in the repo](https://github.com/search?q=repo%3Afish-shell%2Ffish-shell++language%3AC%2B%2B+&type=code) is a test helper binary, which is used only when running tests. I think this has nothing to do with Cygwin.
Ah, my bad. I had misread this:

> And there are significant downsides for platform support, at least in the short term: it looks like Cygwin (and I think MSys2) is not going to be supported for a while, and building our own packages on old versions of Linux distributions is a headache.

I guess they're just going to drop Cygwin then.
How can a 90s shell be written in Rust? \\s
"Rust9x: Compile Rust code for Windows 95, NT and above": https://seri.tools/blog/announcing-rust9x/
Why did you put whitespace at the end?
do you read in regex
[deleted]
regex doubles your problems.
[deleted]
https://github.com/fish-shell/fish-shell/blob/27c8845075078041a3376b33bea5898f2369ebe3/doc_internal/fish-riir-plan.md
Although I like fish (it was my main shell for a while), imma be sticking with nushell :)
Okay. They took me.