Hacker News | frobbin's comments

I agree with them that it is very likely that animals have conscious experience; I believe that is most likely the case as well.

But it seems irresponsible, and possibly self-serving for the NCC research crowd, to escalate this evidence to the level of proof of consciousness in animals. There is just no way to know what it is like to be any creature other than yourself. It seems reasonable to assume that other humans, with the same anatomy and physiology, and with whom we can communicate extensively, are also likely conscious. But we can never tell what the experience of being any other creature is like.

Signing such a statement smacks of an attempt to bully policy with scientific credentials. This is bad because in other areas, such as global warming, it gives opponents with ulterior motives fodder for claiming that scientists shouldn't be trusted, since they are prone to the same irrational belief systems as other people.

It would be better to present their story for mainstream consumption with an attitude of "Isn't this a compelling story?" Maybe even educate people and attract them to the field of neuroscience in the process. But claiming they've figured it out and we all need to get on board will only have negative repercussions.


AI research fields, including speech recognition and machine vision, are currently ENGINEERING disciplines trying to make artifacts that do interesting things. Success is an artifact that works.

Several basic science disciplines are trying to understand how brains work. They have tremendous amounts of experimental data that are difficult to fit together, and some theory and modelling to go with it.

Norvig is confused if he thinks that engineering AI systems automatically counts as building models useful for understanding the brain. If there is an application to understanding brains, it is a welcome accident. It happens that there are signals in the basal ganglia that look like the temporal difference error signal from reinforcement learning. So maybe RL research can help us understand some brain circuitry in that case.
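To make the parallel concrete: the temporal difference error is the quantity delta = r + gamma*V(s') - V(s), and it is the firing pattern of dopamine neurons that appears to track this error. A minimal TD(0) sketch (toy two-state chain; all names and numbers are illustrative, not from any particular study):

```python
# Minimal TD(0) value-learning sketch on a toy two-state chain.
# delta is the temporal-difference error -- the signal whose profile
# resembles phasic dopamine responses in the basal ganglia.

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: delta = r + gamma * V(s') - V(s)."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

# Toy episode repeated 100 times: state 0 -> state 1 with reward 1.
V = [0.0, 0.0]
deltas = [td0_update(V, 0, 1.0, 1) for _ in range(100)]

# Early on the reward is "surprising" (large delta); once V[0] has
# converged toward r + gamma * V[1], the error shrinks -- the same
# qualitative pattern seen in dopamine responses to predicted rewards.
print(round(deltas[0], 3), round(deltas[-1], 3))
```

The point is only that the error term has a recognizable signature, which is why the analogy to basal-ganglia recordings was noticed at all.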

But in general the engineers are trying to get stuff to work, and they are deluded if they think they are simultaneously making progress in understanding how brains work.

EDIT:

For example: why does speech recognition use hidden markov models and N-gram language models? Because they're the best model of how brains understand speech? No! Not at all. HMMs and N-gram models are above all computationally tractable. Easy to implement, not too slow to run.

We have algorithms (such as Baum-Welch and N-gram smoothing techniques) to get them to work well in engineering applications. Nothing more. Might they help us understand brains? Maybe, but not necessarily.
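To illustrate just how much tractability is the point: training a bigram language model is literally counting, and smoothing is a one-line adjustment. A minimal sketch with add-one (Laplace) smoothing, using a made-up toy corpus:

```python
from collections import Counter

# Toy bigram language model with add-one (Laplace) smoothing.
# Training is counting; scoring is a dictionary lookup. This is why
# N-gram models are an engineering choice, not a brain model.

corpus = "the cat sat on the mat the cat ate".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = set(corpus)

def bigram_prob(w1, w2):
    """P(w2 | w1) with add-one smoothing over the toy vocabulary."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))

# "the cat" was seen twice, "the mat" once; smoothing also keeps
# unseen pairs like "cat on" at a small nonzero probability.
print(bigram_prob("the", "cat") > bigram_prob("the", "mat"))
print(bigram_prob("cat", "on") > 0)
```

Nothing in this resembles a claim about cortex; it is fast, simple, and good enough to ship, which is the whole argument.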


What if you got a 'startup investor license' with the same effort and cost as getting a real estate license? That way only the motivated who prove some basic knowledge can participate. A side industry of investor courses and test study materials would spring up. Even as someone trying to raise funding I would like to educate myself with such a course.

