I see this meme all the time. What does GCP lag on? What is it missing?
I see a proliferation of AWS services, yes, but many seem to be replicating things that GCP has had since the start, in multiple different formats without clear direction. BigQuery covers most of the use cases of Redshift, Kinesis, Athena, EMR, and others. It is at once decoupled from and well integrated with their object storage, allowing reads and writes with clear translation when necessary. I find myself reaching for BigQuery-like semantics all the time and having to cobble together two or more AWS tools to get something similar. I see the same pattern repeated constantly: if I click "ECS" in the Amazon Console, it wants to sell me ECS, ECR, Fargate, and EKS, all of which overlap in non-obvious ways.
Disclaimer: I'm ex-Google, and was a power user of Dremel.
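To make the "decoupled from but integrated with object storage" point above concrete: you can point BigQuery at files sitting in a GCS bucket and query them in place, no load job or separate cluster. A rough sketch with the bq CLI (the bucket, dataset, and table names here are invented for illustration; check the current bq docs before relying on exact flags):

```shell
# Generate an external table definition over CSV files in a GCS bucket,
# letting BigQuery autodetect the schema. (gs://my-bucket is hypothetical.)
bq mkdef --source_format=CSV --autodetect \
    "gs://my-bucket/events/*.csv" > events_def.json

# Register it as an external table in an existing dataset.
bq mk --external_table_definition=events_def.json mydataset.events

# Query it with standard SQL, as if it were a native table.
bq query --use_legacy_sql=false \
    'SELECT user_id, COUNT(*) AS n FROM mydataset.events GROUP BY user_id'
```

Getting the same shape on AWS typically means wiring up at least two services (e.g. Glue crawlers plus Athena, or an EMR cluster over S3).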
I used GCP before and liked it better than AWS. The current startup I work for has a prior business relationship with AWS, and every month, it gets harder to move off to GCP. AWS gave a generous credit to the type of startup we have, and with the relationship-building from the solutions architect and account manager, this probably won't go away.
There are a lot of technical refinements in GCP that make life a lot easier than AWS, but nothing strategically compelling enough for the cost (in cash, developer time, and burned relationships) to jump ship. So I make do with the not-as-great technology. They are pain points but not deal-breakers.
I think if our startup were doing AI/ML, or even heavy data processing as our core technology, there's probably enough compelling reasons to invest in a move to GCP. But it isn't.
If I were greenfielding a new launch at a new startup, I'd push for GCP.
I've seen this several times by now. It's pretty ridiculous. One of my clients was an AI startup. They got $100K in AWS credit. So yeah, of course they aren't going to move off AWS until that credit runs out, even though GCP's GPU offering is much better: for one thing, on GCP you can vary the number of cores allocated to a machine with a GPU (or several). Inexplicably, on AWS you can't do that. One GPU gets you 8 cores. Why? Because fuck you, that's why. As a result, you can't use that V100 fully if you do any significant data augmentation, since the CPU is often the bottleneck. I/O and networking are faster on GCP as well. And if your GPU time is not "free", you can cut costs by running on preemptible instances (few of their training runs are longer than 8 hours). On top of that, if scale is needed (which it was), you can wrap it all into k8s, too.
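For what it's worth, the flexible core count and preemptible pricing described above are just flags on instance creation. A hedged sketch (instance name, zone, and sizes are made up; verify against the current gcloud docs):

```shell
# Illustrative only: one V100 attached to a custom 24-vCPU machine,
# running as a preemptible instance. A single-V100 EC2 p3 instance,
# by contrast, comes with a fixed vCPU count.
gcloud compute instances create train-1 \
    --zone=us-central1-a \
    --custom-cpu=24 \
    --custom-memory=96GB \
    --accelerator=type=nvidia-tesla-v100,count=1 \
    --maintenance-policy=TERMINATE \
    --preemptible
```

`--maintenance-policy=TERMINATE` is needed because GPU instances can't live-migrate; `--preemptible` is what gives the steep discount, at the cost of a 24-hour maximum runtime, which is fine for training runs under 8 hours.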
Disclosure: ex-googler (left 4 years ago), own no GOOG, use GCP to run my own business. Clients use whatever they want.
Totally agreed. Many places don’t even run their AWS environments particularly well, but talk themselves into a mess of their own making.
AWS is just another IT vendor. The weird dependency, reminiscent of 1980s IBM thinking, is bad for any company; at some point you become more of a vassal than a customer.
My understanding is that GCP lags on developer trust, specifically when it comes to support. Several years ago I was excited about GCP, but have mostly lost interest after reading various stories about account lockouts here on HN.
This is accurate from my perspective. I work for a company that made a big bet on Google Cloud, and we spent the last 4 years building on it. We are now moving to Azure because of the number of times Google dropped support for things, forcing us to rewrite our libraries. We should never have gone with PaaS (App Engine), but that's a different issue altogether. App Engine Flex was a nightmare to work with because Google couldn't resist the constant urge to rip things out and replace them instead of having a vision and improving on the existing offerings.
This. Unless you are a big, important account, do not put anything you value on any of Google's platforms. There is a small, but non-zero, chance they'll randomly destroy it and ghost you.
I was a user of GCP. The documentation lied to me back in January and cost me over $1000 of my personal money. I got assigned a support case and had absolutely no reply until April, when Google engineers confirmed that there was indeed a bug in the system and that they planned to fix it. In the meantime, I switched off of GCP. Their only consolation was a coupon that presumed I would keep using GCP, which would have eaten through my money again.
I somehow got CC'd to an internal Google system, Buganizer, which has done nothing but leak a bunch of internal communications, including some small code patches to GCP infrastructure itself.
My support request has not been updated, but Buganizer has let me know that they supposedly updated the documentation in mid-November to fix it; as far as I can tell, though, the rephrased advice is still not correct. The bug is still open.
I think that GCP is obviously better for people working inside Google. On my third day I took the all-day end-to-end class and was blown away by how “not difficult” it was to use Borg, the global file system, the web-based dev tools, etc.
But I have always enjoyed using public GCP more than AWS.