
Java nowadays is mostly used for business applications, where developer productivity is more important than runtime performance. Java being memory safe is a huge part of that, and that is really a property of the JVM platform. Java the language had the major selling point of being superficially similar to C++, which helped it take over the business market but is no longer relevant. When Java was first marketed as a business application language there were few competitors. Nowadays there are, but the major reason to choose Java today is that it is entrenched and good enough.

However, all the Java shops I hear about are also looking into Kotlin at some level of adoption. Of all the new JVM languages, Kotlin integrates best with the existing Java environment, and for displacing a language in an existing niche an easy upgrade path is the most important thing. So I think Kotlin will become an important language in the JVM ecosystem; it will just take a long time, because these types of businesses are conservative in their tech choices.


> Java nowadays is mostly used for business applications, where developer productivity is more important than runtime performance

I fail to think of any other platform that could run these monstrous CRUD enterprise apps as fast as the JVM can. Sure, C++ can be written to utilize the hardware better, but with all the classes and interfaces around and everything being virtual, a good JIT compiler can skip method lookups in ways AOT-compiled languages can't.


> Java nowadays is mostly used for business applications, where developer productivity is more important than runtime performance

While both of these things are true, they are not connected in the way you imply, since Java is a pretty low-level language by today's standards.

In the Java school of business app engineering, writing the code is rarely a big part of the effort, so it doesn't matter if the language is not very good or expressive. Java wins at having a big, commodity-like labour pool of programmers, and there's a lot of inertia and stability in the platform.

There are of course a lot of people who use more expressive and creative tools for building business apps, e.g. the many companies using Clojure, Scala, Ruby, Python etc. for them, so it's not the only way to skin the cat.


Kotlin's future is tied to Android.

On the JVM it is like trying to replace C on UNIX.


Both of my last 2 large bank gigs (kind of the last places you'd expect cutting edge tech) were going all in on Kotlin. New projects were Kotlin only, and there was active work on sunsetting/migrating Java applications towards Kotlin. None of these were Android applications.

Sure, this is anecdotal. But I'd say the same of Java's dominance in the JVM space. Java's continued dominance is not a sure thing from my vantage point.


JVM is written in a mix of Java and C++, let me know when they start rewriting it in Kotlin.

Groovy was all the adoption rage across German JUGs back in 2010, then everyone was going to rewrite the JVM in Scala, or was it Clojure?

Now a couple of places are adopting Kotlin outside Android; nice, they will eventually migrate back in about 5 years' time.

https://trends.google.com/trends/explore?q=%2Fm%2F07sbkfb,%2...


> JVM is written in a mix of Java and C++, let me know when they start rewriting it in Kotlin.

This is less relevant today. The host-blessed languages do have an advantage, but I would not say it is insurmountable. It might have been the case in the past, but the modern JVM is a platform; it is no longer a glorified Java language interpreter.

> Now a couple of places are adopting Kotlin outside Android, nice, eventually will migrate back in about 5 years time.

Maybe. Maybe not. Most developers I have talked to who have experienced the transition do not want to go back to Java.

This isn't to say Java will die. It will continue to thrive. But Java dominance (on the JVM or as a whole) isn't a sure thing anymore.


While Google's support was definitely a factor, Go also had some important language features going for it. Most importantly, it targets a relatively empty niche in the programming language landscape: that of high-performance, close-to-the-metal languages with little runtime overhead that are still easy to write. "Easy to write" for 80% comes down to being memory-safe, unlike C/C++. If you're in that niche, you have few alternatives. The other mainstream memory-safe options are interpreted scripting languages and Java-style JITted languages; Go easily beats both in resource consumption without being much harder to program in. Rust isn't really comparable, because while it is memory-safe, its memory management model still forces the programmer to think about memory and resource usage, which makes it slower to program in.


I have always used Pascal for that

But no one else does


Go is just shitty Pascal, change my mind.

On a more serious note: I decided to read the Delphi documentation recently, because I’m old enough to have heard a lot about it, but not quite old enough to have written anything in it. It had discriminated unions. It did! I can’t imagine my life without them, I write stuff exclusively in OCaml-like languages, so the only question in my mind is “how the hell did we manage to go backwards?” It’s so weird.
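
For anyone who hasn't run into them: a rough sketch of what a discriminated union gives you, here in Python with dataclasses and structural pattern matching standing in for Delphi's variant records or OCaml's variants (the Shape/Circle/Rect names are just made up for illustration):

    from dataclasses import dataclass

    # Each case of the union carries its own fields.
    @dataclass
    class Circle:
        radius: float

    @dataclass
    class Rect:
        width: float
        height: float

    Shape = Circle | Rect  # the "union" part: a Shape is either a Circle or a Rect

    def area(shape: Shape) -> float:
        # The "discriminated" part: code branches on which case the value actually holds.
        match shape:
            case Circle(radius=r):
                return 3.14159 * r * r
            case Rect(width=w, height=h):
                return w * h

    print(area(Circle(radius=2.0)))           # 12.56636
    print(area(Rect(width=3.0, height=4.0)))  # 12.0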


Go has some restrictions, but all in all it's a great little pragmatic language which solves a lot of practical problems (UTF-8 strings, concurrency, garbage collection, cross-compilation, single-binary deployment, performance, readability) and which I can easily keep in my head, as opposed to most of the other languages I've used in my career.

In that way it's like a Delphi for the modern world.


No, Free Pascal is Delphi for the modern world.


Of course it had. Turbo Pascal had, AFAICR. (Probably because, at a guess, Wirth-standard Pascal did; though I'm less certain about that.)


How is that different from e.g. D? Or the myriad other GC'd, AOT-compiled languages?


Rails was the first of that type of web framework. Django and all similar frameworks are Rails clones. At that time Rails was the killer app for Ruby. Unfortunately for Ruby the other languages were willing to put in the effort to copy it, and Rails is no longer a unique advantage.


> Django and all similar frameworks are Rails clones.

Hmm, are you sure? I’m not saying you’re wrong, because I’m not sure myself! I heard of Django before I ever heard of Rails but that doesn’t mean much.

Wikipedia suggests they were released at around the same time -- Django started slightly earlier, Rails open sourced earlier.


Zope was the leader in Python long before Django or Rails, since 1999.

ColdFusion and JSP and PHP were dominant for a long time before Rails as well.


Aha, yes, I was getting mixed up between Zope and Django!


Funny. http://7393249866/ resolves to http://255.255.255.255/ in Opera (on Linux); it appears to do saturating math for the overflow.
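
A quick sketch of what seems to be going on, assuming the usual "bare decimal integer as IPv4 host" parsing: the number doesn't fit in 32 bits, so a browser either wraps it modulo 2^32 or, as Opera apparently does here, clamps it to the maximum address.

    def int_to_ip(n: int, saturate: bool = False) -> str:
        """Turn a bare decimal host like http://7393249866/ into a dotted-quad IPv4 address."""
        if n > 0xFFFFFFFF:
            # The value doesn't fit in 32 bits: either wrap around or clamp.
            n = 0xFFFFFFFF if saturate else n % 2**32
        return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

    print(int_to_ip(7393249866))                 # 184.172.10.74   (wrapping)
    print(int_to_ip(7393249866, saturate=True))  # 255.255.255.255 (saturating, as observed)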


It works fine in my Opera 12.16 for Linux (amd64).


Same here:

    $ uname -a
    Linux GFMPC-056 3.11.2-1-ARCH #1 SMP PREEMPT Fri Sep 27 07:35:36 CEST 2013 x86_64 GNU/Linux


> Every currency created since the advent of money 2,700 years ago has fit nicely into one of two classifications: Either it was a representative money system, deriving its worth from a link to some physical store of value like gold, silver or gemstones; or it was fiat [...]

And then the author suddenly stops and ignores the next obvious question: where do gold, silver and gemstones get their value from? The simple answer is that people like to have them and their supply is more or less fixed (which makes people believe they won't suddenly lose their value). The exact same thing applies to bitcoins: bitcoin is the store of value.

The US already tried once to make a type of currency it couldn't control illegal: in 1933 it forbade anyone to own gold. That didn't make gold worthless, any more than it would make bitcoins worthless. The only thing I believe could do that to bitcoin is being replaced by another digital currency.


Having a look at the source, you get this for free in Python. If you don't use any threads, the overhead is just checking one (non-atomic) variable in the main eval loop, so it is at least as cheap as GOMAXPROCS=1 in Go. There is a small overhead from dropping/taking the GIL around I/O functions, but it isn't worth optimizing away the few nanoseconds that takes, given how much slower the I/O functions themselves are.

But if even this overhead is too much, you can compile Python without threading support.
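
A tiny way to see the "drop the GIL around I/O" part in action: two threads blocked on I/O-like calls overlap instead of running back to back, because the interpreter releases the GIL before blocking (time.sleep stands in for a real read/recv here):

    import threading
    import time

    def blocking_io():
        # Stand-in for a blocking read()/recv(); the GIL is released while waiting.
        time.sleep(1)

    start = time.time()
    threads = [threading.Thread(target=blocking_io) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("elapsed: %.2f s" % (time.time() - start))  # ~1 s rather than ~2 s: the waits overlapped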


Ok, the above is not really relevant, I misread the parent comment. Why the GIL is not completely removed or made optional is explained at http://wiki.python.org/moin/GlobalInterpreterLock


The overhead of the GIL is probably insignificant for a single-threaded Python program, in part because the GIL is very coarse-grained. Go uses (AFAIK) finer-grained synchronization, which therefore has more overhead. Also, since Python is slower anyway, the relative cost of locking is much smaller than in the faster Go.


Implemented correctly (using e.g. scrypt as the hashing component, and making sure the hashes are large enough that the chance of an attacker finding a match against a different hash than the one originally generated from the user's password is negligible), this scheme would be no less secure than the traditional way of storing one scrypt hash per user.

The only effective difference would be that the entire database becomes a single unit instead of a collection of separate hashes. Both an attacker and your webapp need to carry the extra weight of a monolithic, indivisible blob of data. It probably won't really slow down an attacker trying to brute-force it once he has the data, but it may make the data more difficult to get in the first place.

But if an attacker has access to, say 10% of the hashes, he'll still be able to brute force 10% of the user accounts with weak passwords.

A different way to get a similar result (requiring a huge amount of data before you can even start cracking) would be to treat the database like a huge Bloom filter: make it a huge bit array of (say) a petabyte, hash the user's password with a hundred different hash functions (but with scrypt-like slowness), and use those 100 hashes as 100 indexes into the array, setting the corresponding 100 bits. To verify a password, compute those same 100 hashes and check whether all 100 bits are set. Now, if an attacker only has access to part of the database, he won't be able to determine with certainty whether any of his guesses at the user's password are correct.
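
A minimal sketch of that Bloom-filter idea, assuming one slow scrypt call sliced into 100 password-dependent bit indexes (the array here is 8 MiB instead of a petabyte, and the per-user salt is my addition, not part of the description above):

    import hashlib

    NUM_BITS = 100          # bits set per password
    ARRAY_BITS = 8 * 2**23  # 8 MiB stand-in for the huge shared bit array

    def bit_indexes(password: bytes, salt: bytes) -> list[int]:
        # One slow scrypt call sliced into 100 indexes, instead of 100 separate slow hashes.
        digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=8 * NUM_BITS)
        return [int.from_bytes(digest[8 * i:8 * i + 8], "big") % ARRAY_BITS
                for i in range(NUM_BITS)]

    def register(bit_array: bytearray, password: bytes, salt: bytes) -> None:
        for idx in bit_indexes(password, salt):
            bit_array[idx // 8] |= 1 << (idx % 8)

    def verify(bit_array: bytearray, password: bytes, salt: bytes) -> bool:
        return all(bit_array[idx // 8] & (1 << (idx % 8))
                   for idx in bit_indexes(password, salt))

    bits = bytearray(ARRAY_BITS // 8)
    register(bits, b"hunter2", salt=b"per-user-salt")
    print(verify(bits, b"hunter2", salt=b"per-user-salt"))      # True
    print(verify(bits, b"wrong guess", salt=b"per-user-salt"))  # almost certainly False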

Yet a third way to accomplish the same goal: pre-generate a petabyte of random data. To hash a user's password, apply standard scrypt, then, based on the resulting hash, generate 100 pseudorandom offsets into the petabyte of data. At each of those 100 offsets, read a few (say, 16) bytes from the petabyte of random data, and finally store a hash of (scrypt_result + huge_data[offset1] + huge_data[offset2] + ... + huge_data[offset100]). You'd still have one hash per user, but to check a hash you also need access to a huge block of random data. That block of data functions, in a way, as an additional system-wide salt.
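
And a similarly hedged sketch of this third variant, with a few MiB of random data standing in for the pre-generated petabyte; deriving the 100 offsets by hashing the scrypt result together with a counter is just one possible choice:

    import hashlib
    import os

    NUM_OFFSETS = 100
    CHUNK = 16  # bytes read at each offset

    # Stand-in for the pre-generated petabyte of random data shared (and persisted)
    # by the whole system.
    huge_data = os.urandom(4 * 1024 * 1024)

    def hash_password(password: bytes, salt: bytes) -> bytes:
        # Standard slow scrypt first.
        scrypt_result = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=64)
        # Derive 100 pseudorandom offsets from the scrypt result and read CHUNK bytes at each.
        pieces = []
        for i in range(NUM_OFFSETS):
            offset_hash = hashlib.sha256(scrypt_result + i.to_bytes(4, "big")).digest()
            offset = int.from_bytes(offset_hash[:8], "big") % (len(huge_data) - CHUNK)
            pieces.append(huge_data[offset:offset + CHUNK])
        # The stored value depends on the scrypt result *and* on data scattered across the blob.
        return hashlib.sha256(scrypt_result + b"".join(pieces)).digest()

    stored = hash_password(b"hunter2", salt=b"per-user-salt")
    print(hash_password(b"hunter2", salt=b"per-user-salt") == stored)  # True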

Anyway, there are more ways to get to a similar result as the OP's proposal. I'm not sure if it buys any additional security or if it's just more of a hassle for the webapp implementing this, but at least it's fun to think about.


AFAIK the Google cache is just that: a cache. If a page is deleted, the cached version will also expire after some time. For a permanent record you need the Wayback Machine, but that has far fewer sites in it than Google.


They tried that. What came out was Perl. Lots and lots of layers of syntactic sugar layered on syntactic sugar layered on more sugar, so you now have lots of sweet ways to do the same thing. But between all the sugar it becomes harder and harder to see the real substance it was all about. The computer has no problem crunching through the sugar (it doesn't have any teeth to worry about), but as a programmer it doesn't become easier to recognize the vegetables if your cauliflower is sometimes covered in marshmallow, other times drenched in syrup, and the third time someone has made a half-hearted attempt to caramelize it. (Of course nobody serves the cauliflower as just plain cauliflower anymore.) So yes, you cook for the computer first; innocent bystanders don't need to know what went into your program.

In the real world, the innocent bystanders matter: code is written as much for other people to understand as it is for computers to execute. I think that instead of plastering all your content with sweets, languages should be designed so that the sugar isn't necessary, by choosing the right fundamental concepts, so programmers can understand what's going on. Learn to cook with the right ingredients, and learn when to add spices. And when not to.

For another example of a syntactic sugar friendly design, have a look at C macros.

