Hacker News | thoran's comments

On the other hand, when you do have categories in a language and not the smartest people in the language's community, you get real problems: you end up with conflicting definitions of homonymous methods because everyone wants their own "toJson" crap, and everyone tries to monkey-patch every other component. Ruby is plagued by this disease.

Haxe and Scala (and probably many others) have a nice solution to this problem: you can add "methods" to existing classes, but only code that explicitly opts in can see them. You never actually change the semantics of something that does not belong to you.
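Ruby itself eventually grew a scoped variant of this idea with refinements (stable since Ruby 2.1). A minimal sketch, using a made-up `to_json_ish` method so as not to collide with any real library:

```ruby
# Scoped extension via a refinement: the added method is visible
# only in scopes that explicitly opt in with `using`.
module JsonExt
  refine String do
    def to_json_ish
      %("#{self}")  # naive quoting, just for illustration
    end
  end
end

using JsonExt  # opt in; lexically scoped to the rest of this file

"hello".to_json_ish  # => "\"hello\""
```

Outside a `using` scope the method simply doesn't exist (calling it raises NoMethodError), so two libraries can each refine String without stepping on each other.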


Ruby is hilarious because requiring a module from the standard library (or requiring a library that uses it) can break division: https://bugs.ruby-lang.org/issues/2121
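The mechanism behind that bug, sketched with a hypothetical patch rather than mathn itself (which has since been removed from the stdlib): one global redefinition of Integer#/ and every caller in the process sees the new semantics.

```ruby
# Hypothetical sketch of the mathn-style hazard: a "helpful" library
# redefines Integer#/ globally, and every unrelated caller is affected.
class Integer
  alias_method :original_div, :/

  def /(other)
    fdiv(other)  # make division "exact": 1 / 2 now returns 0.5
  end
end

1 / 2  # => 0.5 for the whole process, breaking code that expected 0
```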


No, you're wrong: this is not a theoretical issue but a practical one.

Almost all mathematics (and this certainly includes Wiles' proof) could be written in Coq, in theory. It is extremely hard to do in practice.

Mathematicians write proofs for peers with smart brains, so most of the trivial (and less-than-trivial) details are omitted. Coq cannot accept this, because it is very stupid and cannot figure out the missing steps. It turns out to be particularly hard and boring to fill in the holes of a "human" proof.
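To give a flavor of how explicit a proof checker needs things (sketched in Lean rather than Coq; the point is the same): even a step no mathematician would bother writing down, like commutativity of addition, must be invoked by name.

```lean
-- Even an "obvious" step must be spelled out; the checker will not
-- guess it. Delete the rewrite and the proof is simply rejected.
example (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]
```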

Gödel comes in when you try to prove Coq's correctness within Coq (though this has been partially done, in a sense).


I want to make sure I understand correctly. :)

So I am right that rewriting Wiles' proof in a machine-checkable language is a problem in itself? But I'm wrong to suggest it is a theoretical issue, when it is actually a practical one?

I've read that "the theory behind Coq is generally admitted to be consistent with regard to Zermelo-Fraenkel set theory + inaccessible cardinals" [1]. I've also read that some statements are undecidable in Zermelo–Fraenkel set theory [2]. It follows that there must exist statements that are undecidable in Coq, correct? But I understand that this is a theoretical issue, not a practical one, because "Zermelo–Fraenkel set theory, combined with first-order logic, gives a satisfactory and generally accepted formalism for essentially all current mathematics". Is that why my comment was wrong?

On this topic, I've read an article titled "Computer verification of Wiles' proof of Fermat's Last Theorem" [4], which explains that Wiles' proof could probably be verified with a tool like Coq, but that it would be a massive challenge:

> The mathematics used in Wiles' proof of Fermat's Last Theorem is very complicated. [...] I do not know the provers Coq and Mizar good enough, but I think they are adequate to express the mathematics.

> On the other hand, I do think that the challenge is doable, and that it will be done in the coming fifty years. It is definitely not within reach. The project will have to cover large parts of current pure mathematics. Its scope is comparable with the Bourbaki project, where between 1950 and 1970 almost all of pure mathematics got a new foundation by the efforts of an originally French group of mathematicians who published some 20 books under the pseudonym N. Bourbaki.

[1] https://coq.inria.fr/faq
[2] http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_th...
[3] http://en.wikipedia.org/wiki/Hilbert%27s_program
[4] http://www.cs.rug.nl/~wim/fermat/wilesEnglish.html


It's my fault: I think I read your answer too quickly and missed a negation somewhere.

So, yes, a formal rewriting of Wiles' proof is "only" a practical challenge. But a big one: only about 15 years ago did we manage to prove the fundamental theorem of algebra in Coq (a 200-year-old theorem that is routinely taught to undergraduates).


That's okay @thoran! Thanks a lot for helping me gain a better understanding of this topic.


> It follows that there must exist statements that are undecidable in Coq, correct?

Yes, but only those that are undecidable in ordinary mathematics. If you're not using Coq, you have exactly the same problem. (And this concern doesn't apply if we already have the proof and just want to check it).


> Yes, but only those that are undecidable in ordinary mathematics. If you're not using Coq, you have exactly the same problem.

Yes, this is what I understood. Thanks for confirming this point.

> (And this concern doesn't apply if we already have the proof and just want to check it).

Why would this concern no longer apply if we already have the proof and just want to check it? The proof still needs to be expressed, directly or indirectly, in terms of Zermelo–Fraenkel set theory combined with first-order logic. If the proof uses a different and "incompatible" theory, how could we check it with Coq?

(I understand this is a very rare occurrence in practice, and that this is the reason ZFC is so widely used.)


> The proof still needs to be expressed, directly or indirectly, in terms of Zermelo–Fraenkel set theory combined with first-order logic. If the proof uses a different and "incompatible" theory, how could we check it with Coq?

If Wiles' proof wasn't in ZFC we wouldn't call it a proof (at least not without qualification). You don't get to just write mathematical proofs in any language you like - otherwise we could pick a language in which FLT is an axiom, and then the proof of it is trivial.


I don't get it. There are a lot of variants of set theory; ZFC is just one of them, albeit the most common, and some proofs are founded on an extension of ZFC. For example, the Mizar system, another proof assistant, has adopted Tarski–Grothendieck set theory, an extension of ZFC.

According to "The QED Manifesto Revisited" [1], it can be difficult to use Coq as-is for some kinds of proofs:

> Here are four mathematical statements that most mathematicians will consider to be totally non-problematic: [...] We claim that currently none of the QED-like systems can express all four statements in a good way. Of course one can easily extend the systems with axioms that allow one to write down these statements. However, that really amounts to ‘changing the system’. It would mean that both the library and the automation of the system will not be very useful anymore. Classical & extensional reasoning in Coq or abstract algebra in the HOLs by postulating the necessary types and axioms will not be pleasant without re-engineering the system.

[1] http://mizar.org/trybulec65/8.pdf


> I don't get it. There are a lot of variants of set theory. ZFC is just one of them, albeit the most common. But some proofs are founded on an extension of ZFC. For example, the Mizar system, which is another proof assistant, has adopted Tarski–Grothendieck set theory, an extension of ZFC.

Some mathematicians accept extensions of ZFC. But I would definitely expect a proof that relied on such an extension to have in big red letters "we use these additional axioms", and the mathematical community would hope for and expect a proof without it. (Indeed proofs that make use of the axiom of choice still often flag this up and we look for proofs without it where possible). I am quite surprised that Mizar would do this, and certainly hope that it provides a way to distinguish between proofs that are valid in ZFC and those that aren't.

> According to "The QED Manifesto Revisited" [1], it can be difficult to use Coq as-is for some kind of proofs:

The statement about aleph-0 and aleph-2 (the one that Coq has problems with) isn't really mathematics so much as metamathematics/proof theory, at least if we're viewing it as a statement about the consequences of axioms. Which sure is a valuable field and one we'd like to be able to reason about, but it's circumscribed and doesn't affect normal working mathematics.

If we're talking about accepting that axiom and actually using it in a proof, then it's certainly not "totally non-problematic". The paper admits that most mathematicians wouldn't even know the statement of the proper forcing axiom. They certainly wouldn't write proofs that depended on it.


Thanks a lot @lmm. Now I get it :)

About Mizar, I don't know if you can use it without the additional axioms of Tarski-Grothendieck set theory.


On macOS, you would distribute a library as a "framework" (a package containing the binary shared object, headers, documentation, resources, sub-frameworks...). In Xcode, you can quickly add a framework dependency, and if you also need to modify the framework, you can use "sub-projects" (the sub-project builds the framework that the super-project uses). With recent Xcode versions, it works reasonably well. In this world, you don't need CocoaPods at all; everything is nicely compositional. You would have expected them to extend this mechanism when they introduced the iPhone, right?

But they managed to totally screw that up. First of all, they dropped support for frameworks!! So if you want to ship a cross-platform (macOS/iOS) library, you must maintain two targets: a static lib for iOS (and good luck managing its inner resources) and a framework for macOS. You also have to be careful to keep compatible architectures between the sub-project and the parent project, a problem aggravated by the multiplication of iOS architectures (armv5, 6, 7, 8, 9, ...). Apparently, forwarding the required architectures from a parent project down to its dependent sub-projects is a very difficult problem...

Then the community felt the need to fix this mess and get a working environment where you can easily share your work: CocoaPods was born.


Why is CocoaPods better than git submodules?

