smoussa's comments | Hacker News

I went on a zero news diet a while back and have recently got back on this diet again. It’s one of the best decisions I’ve made for my productivity.

For those with a fear of missing out on current affairs: events are really only worth knowing about if someone has made you aware of them in real life. Otherwise they’re likely not that important.


You're on HNews?


Hey. Engineer at rebank (YC W19) here.

You should definitely check us out https://www.rebanknow.com/ if you're looking for an alternative. We cater specifically for small businesses and we're ready to help you switch over.


Funnily enough, you have Brex as a client!


Add eBay to the second list. We are still uploading XML documents.


eBay does have a "RESTful" API: https://developer.ebay.com/api-docs/static/ebay-rest-landing...

Definitely agree that the old SOAP API was/is horrific. But the XML is about as easy to work with as JSON/REST.
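To make that concrete, here's a sketch (with a made-up item payload, not eBay's actual schema) showing that pulling fields out of XML takes about the same effort as JSON using nothing but Python's standard library:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical item payload in both formats (not eBay's real schema).
xml_doc = "<Item><ItemID>123</ItemID><Title>Vintage radio</Title></Item>"
json_doc = '{"ItemID": "123", "Title": "Vintage radio"}'

# XML: parse and flatten the child elements into a dict.
root = ET.fromstring(xml_doc)
xml_item = {child.tag: child.text for child in root}

# JSON: one call.
json_item = json.loads(json_doc)

assert xml_item == json_item  # same data, comparable effort
```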


I second that: the eBay API is hot garbage, and their staging environment is years out of date.


And the Amazon sellers API (Amazon MWS). I've rarely seen such an inconsistent mess of often poorly documented SOAP-ish calls that so many external partners base their livelihood on.


Some APIs you can choose to use. Other APIs you have to use. The ones you have to use tend to not care much about the developer experience... Because why should they? No one's going to avoid Amazon because their API was annoying. People need to make money.

This actually explains a lot of other things. Like the DMV.


The API is not SOAP-ish; it just uses XML. They probably chose it so that they can use XML Schema to validate incoming data (which is exactly the right decision here).


eBay is an absolute nightmare. They have a demo instance to test on and it's about 4 years out of date and half of it doesn't work anymore.


For those interested, the demo instance is at https://sandbox.ebay.com/. The message at the top says "eBay and PayPal will be separate companies soon", even though they became separate companies 4 years ago.


Wow, this is awesome.

Mousing over the links there to see where they go, the first interesting-looking one was the supported/unsupported features list, over at https://ebaydts.com/eBayKBDetails?KBid=684. Ah, I see - a wall of text reminiscent of "system requirements", except it's a list of things you don't get instead.

But then going up to https://ebaydts.com, I was met with... a blank page. But it has a title, so something's blown up. I hit F12 half expecting to be met with some kind of implosion, and even then I was cynically amused: the page has `<body style="display:none;>`, with no closing quote, and while Chrome parses it properly the devtools don't, so the CSS editor leaks bits of HTML.

On the one hand my infosec-sense says there's probably some awesome stuff hiding in here... but on the other hand, between the 1000 feet of bureaucracy I'd all but have to drown in to report _anything_ I found, and the tenterhooks I'd be on while poking around to establish there's anything there beyond plausible deniability... not worth it?


Okay, it's not just me (demo instance). eBay's APIs and their SDK documentation (or lack thereof) are making me re-think my app's monetization model. The docs are shockingly bad, full of screenshots that look like eBay from the early 2000s.


I was working with it 3 years ago and it sounds like it hasn't gotten any better in that time.


I submitted a SAR to eBay and they responded by posting me a copy of my data on a USB stick!


Did you image it to check for deleted data in unallocated space?


I’m working with Numba’s CUDA API and it works well as a drop-in replacement for embarrassingly parallel functions.


I've done a fair bit of C++11 for CUDA and I was so happy to throw everything out and switch to Numba. It has some rough edges (like incomprehensible error messages when the type inference goes wrong) but it's been a pleasure overall to work with.


I've done a fair bit of Numba CUDA and I was so happy to throw everything out and switch to C++.

Numba CUDA gave me lots of small problems and a few big ones. The poor support for debug/perf tools and poor integration with other high-level Python CUDA code (FFTs in particular) sent me packing, but the number of small problems was excessive in comparison to the size of my code. I had 5 reduced bugs at the bottom of my notebook and two paragraphs of "baggage" at the top to support a tiny little 50LoC kernel: one paragraph for the environment variables and one for patching Numba CUDA itself for a trivial API incompatibility that hadn't been fixed for the better part of a year. All of this for a tool that provided a diminutive subset of functionality at the intersection of both Python and C. I've felt more computational freedom writing BASIC on my TI-83.

CuPy could well have changed that equation!

> incomprehensible error messages when the type inference goes wrong

NumbaCUDA is truly the galaxy-brain of type checking: first it complains loudly so as to force you to provide type information, then it opts to not complain about a mismatch, and then it silently reinterpret_casts a double* to float* behind your back.
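For anyone who hasn't been bitten by this: here's what a silent reinterpret-cast does to your data, sketched in plain Python with the standard library (an analogy for the behaviour described above, not Numba's actual code path):

```python
from array import array

# One float64 with value 1.0: bytes 00 00 00 00 00 00 f0 3f (little-endian).
doubles = array("d", [1.0])

# Reinterpret the raw bytes as two float32s -- no conversion, just a cast.
reinterpreted = memoryview(doubles).cast("B").cast("f")
print(list(reinterpreted))  # [0.0, 1.875] on little-endian platforms: garbage

# An actual conversion preserves the value instead:
converted = array("f", doubles)
print(list(converted))  # [1.0]
```

Same bytes, wildly different numbers, and nothing complains; that's why this class of bug is so painful to track down.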

I know it's free software and I have no right to complain, but I sure sunk a lot of time into this dead end and regret it.

Spiffy icon though.


What’s the difference between Numba CUDA and PyTorch or similar?

If you’re doing custom kernels you should take a look at the Julia library CuArrays [1] and generic kernels [2]. I really like that I don’t have to dig into C++ and deal with all of the memory and kernel management.

1: https://github.com/JuliaGPU/CuArrays.jl 2: http://mikeinnes.github.io/2017/08/24/cudanative.html


My impression was that PyTorch focused on linear algebra / deep learning. The reason I was playing with Numba CUDA in the first place was because part of my problem did not fit nicely into a (dense) linear algebra framework, so Numba CUDA's custom kernel support seemed attractive. Does PyTorch have a good low-level kernel library? Or sparse linear algebra library?

I love Julia, but I haven't managed to convert anyone else on my team and I already spent my informal exploration budget for the GPU project on Numba CUDA, so JuliaGPU will have to wait for another time. I'll be sure to keep it in mind, though!

How is the CUDA debug/perf story with Julia? Does it play nice with the nvidia tooling?


Ah, that makes sense. I've only dabbled a little with DNNs recently, but PyTorch/TensorFlow seemed very targeted toward deep learning. Generic tools seem more useful to me. What are you doing with FFTs?

I haven't dug too deeply into CUDAnative / CuArrays to understand the state of Julia perf debugging. Though here's one post on the topic:

https://discourse.julialang.org/t/cudanative-is-awesome/1786...

In general it's been very pleasant experimenting with GPU programming in Julia. I couldn't quite grok TensorFlow code, and it's cool to just declare a Julia array and send it to the GPU.


Great stuff. Look into Fourier transforms / inverse Fourier transforms and clipping to remove the crackling sound.


Can you elaborate? Why would a transform be needed only on mobile devices? Why wouldn't the OS handle whatever's needed to produce audio in a platform-agnostic way, especially on the web? Very interested to know more.


I suspect the commenter meant that when the buffer runs empty because the device can't compute new samples quickly enough, the last seen power spectrum (power versus frequency) is briefly maintained in the output. This filler is computed by a cheaper process.
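A minimal sketch of that filler idea, assuming NumPy is available (keep the last frame's magnitude spectrum, randomize the phases, transform back): this is an illustration of the concept, not any particular audio engine's implementation.

```python
import numpy as np

def spectral_filler(last_frame, seed=0):
    """Cheap filler frame for a buffer underrun: reuse the last frame's
    power spectrum with randomized phases. Sketch only, not production DSP."""
    rng = np.random.default_rng(seed)
    mags = np.abs(np.fft.rfft(last_frame))          # last seen power spectrum
    phases = rng.uniform(0.0, 2 * np.pi, size=mags.shape)
    spectrum = mags * np.exp(1j * phases)           # same magnitudes, new phases
    return np.fft.irfft(spectrum, n=len(last_frame))

# Example: fill in for a 512-sample frame of a 440 Hz tone at 44.1 kHz.
frame = np.sin(2 * np.pi * 440 * np.arange(512) / 44100)
filler = spectral_filler(frame)
assert filler.shape == frame.shape
```

The output sounds roughly like the input (same frequency content) even though the waveform itself is different, which is why it works as a stopgap when real samples aren't ready in time.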


The crackling is probably due to missed buffers. Additional processing of the sound would likely only serve to exacerbate the problem.
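The arithmetic behind that deadline is simple; here it is with typical (assumed, not taken from the article) buffer and sample-rate values:

```python
# Each audio callback must produce buffer_size new samples before the
# previous buffer drains, i.e. within buffer_size / sample_rate seconds.
sample_rate = 44_100   # Hz (typical)
buffer_size = 512      # samples (a common mobile/web buffer size)

deadline_ms = buffer_size / sample_rate * 1000
print(f"{deadline_ms:.1f} ms per buffer")  # ~11.6 ms

# Miss that deadline and you get a click/crackle; adding more DSP per
# buffer only makes the deadline harder to hit.
```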


I think it’s important to understand what our objective is.

Reading these comments comparing birds and aeroplanes is nonsense, since the two have very different flight behaviours and objectives. Birds’ wings flap for agility to avoid predators. They have brains that are half reflex and allow them to regulate their bodies. Planes don’t have predators and don’t need cognition.

If our objective is to solve a business problem, machine learning is great for specific tasks and can achieve superhuman results in some cases. We don’t need much neuroscience here.

But if our objective is AGI, it gets interesting, because that is very far from current machine learning / deep learning / reinforcement learning. It’s hard to put a definition on AGI at all. What do we want to achieve? To replicate the human brain, of course we need neuroscience. To replicate intelligence without designing the components for bodily function, we need an approach that looks at brain circuitry and function but is implemented at a good level of abstraction.

I believe we know a lot more about the brain than the public thinks. Read journals like Cell, Neuron, and Nature Neuroscience, plus clinical encyclopaedias, to get an understanding. I don’t think we should be replicating things at the neuron level, but at the more abstract level of neuronal dynamics, neuronal populations, and networks, with a focus on understanding the developmental biology of the first few years of human life, where learning really happens.

