instead of calling fib (or fib_gen) explicitly. fself is passed into the generator and it will need to be passed in again and again. That's what Y gets you. You could get tracing the way you suggest from something like:
let fib f n =
  let n' = (* compute fib for n *) in
  f n'
and supply some f that will log (and return) the computed values, but that won't work with the motivating example: memoization. This really should have been clearer in TFA.
It's worth noting the memoization in the linked gist isn't pure. memo() allocates a cache and returns a function that references it. The cache is updated by mutating in place. After all the hoops we've jumped through it's a bit of a let-down to end up here. (It's doable though.)
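The shape is roughly this (a Python sketch with illustrative names; the gist's memo() differs in detail):

```python
# Sketch of memoization over an open-recursive generator.
# memo() and fib_gen() are illustrative stand-ins, not the gist's code.
def memo(gen):
    cache = {}                         # allocated once per call to memo()
    def fixed(n):
        if n not in cache:
            cache[n] = gen(fixed, n)   # recursion is tied back through fixed
        return cache[n]                # impure: cache is mutated in place
    return fixed

def fib_gen(self, n):
    # Open recursion: self-calls go through the parameter, not the name.
    return n if n < 2 else self(n - 1) + self(n - 2)

fib = memo(fib_gen)
print(fib(40))  # 102334155, each subproblem computed once
```

Swapping memo for a tracing wrapper (one that logs before delegating to the generator) gives the logging behaviour discussed above without touching fib_gen, which is the point of keeping the recursion open.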
> new generations never heard of those languages and think C was the very first systems programming language.
That's just it. Rust looks enough like C++ to catch on. This was not an accident, though I can't say I'm jazzed about it.
Rust brings a lot to the table with its type system too. I would rather be using Rust tomorrow than having had the benefits of say Pascal derivatives for the past 45 years. I'm not saying this just for effect: after the obvious ML influence, the parts of Rust I like most are the nuances.
> Instead, many Rust users seem to sprinkle it everywhere.
Is that true? You will necessarily see it in bindings, or to implement certain essential features that can't be implemented otherwise. Beyond that there is the motivation to use unsafe code for speed; hopefully this is mostly restricted to modules.
Are you complaining that people are using it when they could reasonably avoid it, or that it is too often necessary?
> Are you complaining that people are using it when they could reasonably avoid it, or that it is too often necessary?
They are abusing it, especially as workarounds for constraints that cannot be expressed in the type system.
While in the languages I mentioned, unsafe constructs relate to any operation that might lead to memory corruption, some Rust devs are using it for anything they assume isn't logically safe.
So you then get talks like the Session Types one at ICFP, which uses unsafe to keep functions from being called outside the protocol traits being defined.
> They are abusing it, especially as workarounds for constraints that cannot be expressed in the type system.
That doesn't sound like abuse to me, as long as the exposed interface remains safe.
> some Rust devs are using it for anything they assume isn't logically safe.
That sounds like an objection to the scope of what Rust defines as safe. Unless you mean to imply there are reasonable, Rust-safe ways to rewrite the transgressions. In the former case I have to disagree. It would be nice if we could prove more. But you can't prove everything.
I haven't seen the ICFP talk but I might watch it later if you can link to a video. I've skimmed that discussion before but you will need to point out your objections—IMO it is great that Rust devs are having that discussion.
My objection is that I see unsafe as being just for memory integrity.
For everything else something like contracts should be used, from my point of view.
As for voicing my objections to the Rust community, being a language geek is one of my hobbies, but it doesn't mean I will ever use those languages at work.
So I would only contribute noise to the discussions, which is why I eventually moved away from the Go and D forum discussions as well.
My time is already quite filled following the JVM, .NET and C++ ecosystems, the ones I get paid to be an expert on.
> My objection is that I see unsafe as being just for memory integrity.
Either you think this way because of a sense of orthodoxy or because you think memory safety is important in a way the other kinds of safety aren't. I disagree, especially since "memory safety" may or may not imply protection against aliasing of mutable data. Those nuances I like about Rust probably wouldn't exist without the broader scope of safety concerns.
> As for voicing my objections to the Rust community,
About the discussion you linked? I meant to me, in the service of explaining your objections.
> My time is already quite filled following the JVM, .NET and C++ ecosystems, the ones I get paid to be an expert on.
Well, that was important information. Thanks for making the time to share!
> Either you think this way because of a sense of orthodoxy or because you think memory safety is important in a way the other kinds of safety aren't. I disagree, especially since "memory safety" may or may not imply protection against aliasing of mutable data. Those nuances I like about Rust probably wouldn't exist without the broader scope of safety concerns.
Memory safety is more fundamental than most other sorts of correctness: it is a necessary (but not sufficient) condition for them. If you can't assume memory safety, you have no assurances about any other invariants, because any piece of data can be modified/outdated/invalidated at any time in the program's execution. (This is especially bad if the compiler itself assumes that memory unsafety can never happen, and uses this undefined behaviour to optimise the program in unintended ways.)
I'm fairly sure that some definitions of memory safety that work for Rust are "there are no use-after-frees" or "the language is type-safe"; all 'other' problems with memory safety (iterator invalidation, data races, etc.) can be used to violate those and hence need to be outlawed. I'm not sure there's that much wiggle room for what memory safety means, at least in broad terms. In any case, aliasing of mutable data can definitely lead to memory unsafety, and is hence covered/controlled by Rust's guarantees. In some ways, it is really the fundamental cause of memory unsafety: remove/control either "aliasing" (e.g. Rust, many GC'd languages) or "mutable" (e.g. Haskell) and you've got memory safety.
Also, Rust definitely doesn't only try to help the programmer with memory safety, it's just that memory safety (and hence type safety) is the form that has hard guarantees. Other forms are generally handled by "lints"/warnings or by building libraries on top of the type system.
To be clear, we on the Rust team see 'unsafe' as being strictly about memory safety as well, and have actively fought against using it as a general "pay attention here" kind of annotation.
Eventually, effects would be nice to mark other kinds of "here be dragons" as well, but for now, unsafe == possible memory unsafety.
If you read the comment you'll see that I'm talking about different interpretations of memory safety. You can't just decide what it means in Rust applies to all languages, or that everyone knows what you mean.
I was trying hard to have a conversation while avoiding arguing about semantics, so thanks for that.
So your objection is to how certain users (notably not the core dev team) utilize "unsafe", not with the language itself. It seems like the easiest way to mitigate this is with better documentation; surely, those other languages' equivalents of "unsafe" could also have been abused similarly, no?
> To me, green threads are just an implementation detail. The fact that green threading is being used shouldn't leak into the interface.
I think that mentality, which I supported, may be what ruined green threads for Rust. Perhaps the perfect was the enemy of the good. On the other hand, maybe not having to worry about differences is the better benefit. I would rather be slower-but-acceptably-fast than have to think about 1:1 vs M:N threading in the case that they are not interchangeable.
On the third hand, it could be that Rust is too close to the metal for green threads to be of any significant benefit. If that's the case, Rust ought to trounce all the M:N implementations, subject to reasonable limits on spawn rate, memory pressure and inter-thread communication. Does that sound right?
Incidentally, I've never seen a cooperative threading implementation that didn't have a yield call of some kind available, and you can usually jam any of them up by writing an infinite computation-only loop, so even in the languages where the scheduler is automatically invoked, correct 1:1 code can deadlock in the M:N scenario. State machines can too, but a trivial example, translated, might read something like the following:
for (;;)
    ;                   /* computation-only loop: never yields */
exit(EXIT_SUCCESS);     /* unreachable under cooperative scheduling */
I've seen code like this written by complete newbies, but threading constructs can hide that relationship even from experts. In parallel code it isn't a sequential relationship, but it is with cooperative threads. I guess this is all fairly pedantic (such a minor detail, mostly affecting only poorly-written programs), but in the end, one of: yield(), sched(), sync(), etc. does show up in practically all of the cooperative threading interfaces.
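To make the failure mode concrete, here is a minimal generator-based cooperative scheduler sketch in Python (illustrative names, not tied to any real M:N runtime). Tasks run until they yield; a task whose body were `while True: pass` would monopolize the scheduler forever, which is the translated `for(;;);` above:

```python
# Minimal round-robin cooperative scheduler over Python generators.
from collections import deque

def scheduler(tasks):
    """Run generator tasks round-robin; each `yield` returns control."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run until the task's next yield
            ready.append(task)         # re-queue a task that yielded
        except StopIteration:
            pass                       # task finished, drop it
    return trace

def polite(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"            # explicit cooperative yield point

trace = scheduler([polite("a", 2), polite("b", 2)])
# Interleaved because both tasks yield: ['a:0', 'b:0', 'a:1', 'b:1']
```

The explicit `yield` in each task body is exactly the call that shows up, under one name or another, in every cooperative threading interface.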
Blocking at the request level is where it's at: you don't download the content, you don't execute the scripts, you don't interact with the DOM. You block requests to third party servers or resources based on lists or other policies. This saves you load time, execution time, and it keeps you from being tracked by targeted ad systems.
If people start bypassing the DOM with canvas we will have a bigger problem than cosmetic filters failing: we will no longer be served the same kind of document. Or we will, but the content will be in an opaque region of it. If that happens the web is dead. People tried this before with Flash, and that smoldered for a while in the 90s but didn't last, so I'm not feeling all doom-and-gloom about it.
Aside: standardized document formats are important. If rendering to canvas catches on as the de facto standard, it would cripple the web technologically.
Request level blocking isn't going to work when advertisers finally figure out that embedding into the actual document (like a data: URL today) is much harder to block. As the entire point of WebAssembly is to make a small download that is efficient in parse-time and size, it will be practical to embed images (less overhead than a base64'd data: URL).
As for bypassing the DOM, it's a question of when they do that, not if; for evidence, a lot of people tried that with Flash already, and lots more try things like blocking the UserAgent's context menu. Yes, these were ineffective, but they were (and still are) used by a lot of businesses.
Yes, I think we have a huge problem, because hoping that nobody decides to abuse technology for personal gain doesn't work. With new technologies like WebAssembly, we shouldn't be caring about technical benefits, but instead asking how the technology will be abused. If we give everybody tools to serve a non-document, then we shouldn't be surprised when non-documents are used. The only way to prevent this is to restrict the UserAgent to standards that can only be implemented as an open format, which is what HTML/CSS tried to do.
If you're not feeling the doom-and-gloom, you might want to watch PHK's "Operation Orchestra" talk, and consider that there are a lot of businesses that would love to serve up something closer to cable TV than the current environment where the user can control the rendering.
> Stuff like SmallTalk wouldn't be possible if programming was turned into math
> and frankly I don't understand why mathematicians want to take over a completely different domain
I would like to challenge the assumption here: that programming (in whatever mode) can't be adequately described by mathematics. I don't have time to make a thorough argument, but consider this: instead of thinking of what programming would be like as some kind of math, think of what math would need to be like to adequately describe your favourite mode of programming. (Forget arithmetic, consider algebra, geometry.)
As for FP: these days it wants all the tools, and it is picking up whatever it can formalize. Some FP tools are uncommon or don't work well outside the paradigm. To me that makes all the difference: I'd feel more restricted working without FP than by being strictly within it.
>I would like to challenge the assumption here: that programming (in whatever mode) can't be adequately described by mathematics.
Well, the entire lesson of the Halting Problem and Rice's Theorem is that it can't. Indeed, Haskell loves to just cut through the bullshit about math and be inconsistent: bottom inhabits every type, and Type inhabits Type (in Dependent Haskell).
No fixed mathematical system is capable of formally verifying all programs, or even all interesting programs. You always need more bits of axioms than of program if you want to prove theorems about your programs (Chaitin's principle, proved formally by Calude).
What we can say for FP is that Fast and Loose Reasoning is Morally Correct, and that if we use nicely categorical constructions, then when our programs happen to terminate at all, those nice constructions describe their behavior.
We don't actually need to butt into the Halting Problem or Rice's Theorem, though.
Ideally, we would have proven-to-terminate non-turing complete DSLs that can be composed together to form more powerful DSLs. The problem is that we don't really know how to compose programs well, and when you demand the ability to peer into arbitrary executing code and modify it on the fly, you don't make it easier to learn how to do this.
If you were forced to use an architecture that separated things properly, you'd be forced to deal with those composition problems, rather than 'hacking' programs using a level of power that is unnecessary for the problem you hope to solve. (And you are hoping to solve a problem, not just run a program that fails to terminate.)
(I'm not arguing that this is the 'right way' to do things. It's just a way that should be considered properly.)
> >I would like to challenge the assumption here: that programming (in whatever mode) can't be adequately described by mathematics.
> Well, the entire lesson of the Halting Problem and Rice's Theorem is that it can't.
And in mathematics, division by zero is undefined, and not all differential equations have analytical solutions. Does that engender a lesson to abandon the rest of what the field provides?
> What we can say for FP is that Fast and Loose Reasoning is Morally Correct, and that if we use nicely categorical constructions, then when our programs happen to terminate at all, those nice constructions describe their behavior.
Not sure what you are trying to convey with the first "premise" you state, but I can say with certainty that everything after "and that if we use..." can be applied to just about any approach in making a program.
Structured Programming:
If we use nicely decomposed procedures, then when our
programs happen to terminate at all, those nice
procedures describe their behaviour.
Object Oriented Programming:
If we use nicely encapsulated classes, then when our
programs happen to terminate at all, those nice objects
describe their behaviour.
Declarative Programming:
If we use nicely declared rules and capabilities, then
when our programs happen to terminate at all, those
nice declarations describe their behaviour.
And (your example quoted here):
If we use nicely categorical constructions, then
when our programs happen to terminate at all,
those nice constructions describe their behavior.
That's a little extreme. I think most computations are best¹ written in a way that is amenable to mathematical treatment, and most of the useful ones can be. Just because a counter-example exists doesn't mean we should give up hope.
I find constructing programs mathematically makes life easier in the common case and possible in nearly all others, unless you insist on things like proof-of-termination for programs which do not (I do not.)
1. 'best' because it gives us the means to reason about the code, and we know what kind of transformations we can perform on it, and do them abstractly without having to simulate it, potentially for infinite inputs, etc., etc. You know the drill.
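As a small, concrete instance of that kind of reasoning, the map-fusion law lets us rewrite two traversals into one purely algebraically. A Python sketch (names are illustrative) spot-checks the law on a sample input:

```python
# Map fusion: map(f, map(g, xs)) == map(f . g, xs).
# The rewrite can be applied abstractly, without simulating the
# program on every possible (potentially infinite) input.
def compose(f, g):
    return lambda x: f(g(x))

f = lambda x: x + 1
g = lambda x: x * 2
xs = list(range(10))

two_passes = list(map(f, map(g, xs)))    # traverse the list twice
one_pass = list(map(compose(f, g), xs))  # fused: traverse it once
assert two_passes == one_pass
```

Of course the assertion only tests one input; the point is that the law itself is provable once and for all for pure f and g, which is what "reason about the code" buys you.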
That's a very strange interpretation of Rice's theorem. Just because properties of computable functions are undecidable doesn't mean they somehow "can't be described by mathematics". I'm also not sure why incompleteness is relevant here at all, unless you think describing something mathematically requires a complete formal system. In that case, we can't "describe" arithmetic "mathematically".
It's not that strange. We can write all kinds of program analyses that are decidable, but which are necessarily conservative, and we end up deciding what sort of conservativeness (which side we want to err on) in different situations. What this does mean is that you can't really write a Grand Unified Formal Verification framework and get some wonder-of-wonders benefit just by switching to functional programming.
I've also spent enough time around Haskell coders to notice that the instant you give them a fresh new language feature, they will push it right out to the limits where the compiler's conservative analyses no longer work and the programmer has to manually assert that the code is correct.
Nothing will really save the programmer from having to think clearly about their own code, and most language features designed to ostensibly help do so just enable compiler-abuse at a higher level of abstraction.
That's correct in theory, but in practice I'd be very surprised to see a useful program that couldn't be verified by ZFC. The only natural way to get such programs is by encoding metamathematics, like trying to verify the consistency of ZFC itself.
>That's correct in theory, but in practice I'd be very surprised to see a useful program that couldn't be verified by ZFC.
I think you do too much easy theory if you think that, if you'll excuse my saying so. A lot of the firmware code I write at work is just not going to be formally verifiable in any practical way without scrapping it and rewriting in a language built around formal verification from day one.
Besides which, a Python interpreter is a very common sort of program to write, actually, and I don't expect that you can "just" do the equivalent of `Require Import ZFC` to verify its behavior properly.
Point taken. Isn't that just a limitation of current tools, though? What stops ZFC from proving sentences about Python interpreters, or Brainfuck programs, or whatever? I'm not aware of any theoretical roadblocks, apart from the difficulty of actually writing out the proof (which is admittedly huge).
>What stops ZFC from proving sentences about Python interpreters, or Brainfuck programs, or whatever?
Well, mostly a lack of writing down axioms about how these things actually work. The innate nondeterminism, imperative operating environment, and intentional behavior (in the sense that which implementation of a function you write actually matters) also give things that are, let's say, on the research frontier to reason about formally.
And then there's just the fact that ZFC isn't a very good language at all for programming with. Type theories are both proof-theoretically stronger and easier to write something at all like actually programs in.
Also, things like compilers and program analyzers are very common in "real-world" programming. They're just about the cutting edge of what we can do with formal verification today: it helps a lot that, as I understand things, when you write down an inductive type in a dependently-typed proof assistant, you are also writing down axioms (an induction principle) with which to prove theorems about it, and the dual for coinductive types. Then, between coinductive and inductive proofs that rely on either productive nontermination or eventually-terminating programs, we can write down most of what we actually want to code.
With the exception, of course, of proof assistants, where we can still only write a verified implementation of one system using a strictly stronger system. Gosh, if only someone was working on ways to tie that knot...
Bingo. But most people also like to kvetch and moan about the restrictions imposed by programming with only structural recursion and productive corecursion -- even though those are usually exactly what we want.
Especially if you get an easy way out for when you need it.
Like unsafePerformIO, but for calling not-proven-to-be-terminating functions. Or an explicit marker like the wrapping monads for side-effecting functions.
As someone who has to dip into exotics like relevance logic just to get closer to the mode my brain operates in while programming, either the math is seriously undeveloped/undecidable in the areas of my interest, or it's just not capable of much help in what I set out to achieve. And I have extensive knowledge of functional programming, including proving correctness using Smullyan's tableaux on the fly, all the way from very primitive axiomatics up to complete arithmetic in predicate logic. Often in those exotic kinds of logic (which I deem necessary) trivial things aren't provable, yet they seem to model actual human reasoning much better, and that is my main interest. With computing I am able to actually do some work in this area; with math, due to the aforementioned limitations, not so much.
Most mathematicians can't see beyond binary predicate logic :-(
I think it's a canary. How many languages do you hear this said about? I've only heard it in regards to Go which is fairly pedestrian (read: looks and acts something like C / Java) as far as these things go.
More exotic languages, which take the uninitiated longer to acclimate to, don't seem to be trying to tell people they're not allowed to think deeper about and question the design of the language. Good criticisms of Go were made on day 1 and are being made to this day, Rob Pike's ego notwithstanding.
At the end of the day, if I see a problem with a developing language I am interested in, I'm going to report it. If I'm met with: "I don't know you but I'll assume you don't know what you're talking about" that's a second problem and I'm not about to double-down on it.
"I think it's a canary. How many languages do you hear this said about?"
In general, I see "This new language I'm learning sucks because it needs to work more like the language I know best" for all kinds of languages. "Python needs to use braces instead of indentation." "$X needs to handle errors more like $Y." (lots of things can be filled in there!) Complaints about the dynamic/optional/static typing in a language. "Haskell may be pretty but isolating IO effects like that is impractical, it ought to work more like most languages."
FYI the quote above refers to the developer/community response to criticisms/ideas (e.g. see the comment I replied to), not the criticisms/ideas offered by newcomers.
I don't know what it's like now. Rob Pike in particular made a lot of noise like this early on. There was a lot of parrotting of that, so it was hard to have a serious conversation. It seems like it's still getting play but I don't know how widespread that is today.
Fwiw I avoid writing both loops and recursion as much as possible. Collection data structures usually come with their own fold/map/etc functions, so this works well in practice.
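For instance (a Python sketch; most collection libraries offer the same shapes under their own names):

```python
# map/filter/fold instead of hand-written loops or recursion.
from functools import reduce

xs = [3, 1, 4, 1, 5]

doubled = [x * 2 for x in xs]                  # map
evens = [x for x in xs if x % 2 == 0]          # filter
total = reduce(lambda acc, x: acc + x, xs, 0)  # fold
```

Each combinator fixes the traversal once, so the only thing left to write (and to get wrong) is the per-element operation.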
I'm a big fan of this work. For the PDF-shy, there's some excerpted text on LtU: http://lambda-the-ultimate.org/node/5216 but you'll miss some of the goodies, like the graph of a scenario from Romeo and Juliet, showing the actors act concurrently at different locations.
There is an emphasis on using Ceptre for developing interactive fiction¹ (text adventures/parser games.) From my perspective, a pain point in developing truly creative interactive fiction is that the available tools tend to impose a world model on your work. I'm excited for Ceptre because it lets you write causal relationships directly, which makes starting from scratch a more realistic proposition.
1. With some work I think it could be (and should be) used in other kinds of games, such as sandbox games or anything that would benefit from a living world, even if only in part, should the main story need to be nailed down and deterministic.
In case anyone is wondering which project this is:
    The concrete contribution of the research report here is the design
    and implementation of a fully compartmentalized operating system,
    MINIX 3. To properly isolate faults, we have removed all drivers
    from the kernel and run them as separate, unprivileged user-mode
    processes, protected by the MMU hardware. Since all servers also
    run in user mode in our design, only a tiny microkernel that does
    not contain any foreign, untrusted code is left in kernel mode.
    Each component has only the minimum privileges it needs in order
    to prevent failures from spreading. In our design, driver failures
    are no longer fatal and do not require rebooting the computer.