FROM: https://news.ycombinator.com/item?id=9613810

thisisdave 7 days ago
I had assumed that meant he wouldn't write for them either (and thus wouldn't enlist other people to volunteer as reviewers when the final product would cost $32 to read).

[1] https://plus.google.com/+YannLeCunPhD/posts/WStiQ38Hioy
paulsutter 6 days ago
I have to admit, when I saw "LeCun, Hinton, in Nature" I thought "That must be an important article, I need to read it". I haven't read every single paper by LeCun or Hinton. The Nature name affected me. That's why it's rational to publish there.

There's still no effective alternative to the journal system for identifying which papers are important to read. There have been attempts, Google Scholar and CiteSeer for example. A voting system like Hacker News wouldn't work, because Geoff Hinton's vote should count for a lot more than my vote. PageRank solved that problem for web pages (a link from Yahoo indicates more value than a link from my blog). How can scientific publication move to such a system?
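As a rough illustration of what such a system might look like, here's a minimal sketch of PageRank-style scoring applied to papers instead of web pages. The citation graph and paper names below are made up for illustration, not real data:

    # Toy sketch of PageRank-style scoring over a made-up citation graph.
    # A citation from a highly ranked paper is worth more than one from an
    # obscure paper, analogous to a link from Yahoo vs. a personal blog.
    import numpy as np

    papers = ["A", "B", "C", "D"]                       # hypothetical papers
    citations = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}  # paper i cites these papers

    n = len(papers)
    # Column-stochastic transition matrix: M[j, i] = chance of following a
    # citation from paper i to paper j.
    M = np.zeros((n, n))
    for i, cited in citations.items():
        for j in cited:
            M[j, i] = 1.0 / len(cited)

    damping = 0.85
    rank = np.full(n, 1.0 / n)
    for _ in range(100):                                # power iteration
        rank = (1 - damping) / n + damping * M @ rank

    for name, score in sorted(zip(papers, rank), key=lambda x: -x[1]):
        print(f"{name}: {score:.3f}")

With a real citation graph the idea is the same; the hard part is less the algorithm than getting the incentives of authors and institutions to move to it.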
hackuser 6 days ago
Complete amateur speculation: Scientists' professional societies could create elite, free, online journals with a limited number of articles per month (to ensure only the best are published there), openly stating that they intend these to be the new elite journals in their respective fields.
aheilbut 6 days ago
Hypothetically, AAAS is in the best position to pull something like this off, but as the publishers of Science, they're sadly very committed to preserving the status quo...
grayclhn 6 days ago
[1]: http://www.imstat.org/publications/eaccess.htm
apl 6 days ago
So there's hope!
grayclhn 6 days ago
So, going by the link, I don't think this is a change in his position on open access, but I also don't think that his position involved as much self-sacrifice as you'd assumed.

Edit: I don't know the field well, but these review articles usually recycle a lot of material from review articles the author wrote a year or two before. The contents of the article might be basically available for free already.
thisisdave 6 days ago
I was referring more specifically to the fact that Nature had to enlist volunteers to referee the paper he wrote. I was curious whether he was okay with that even though he wouldn't volunteer himself, whether his position on non-open journals had changed, or whether there was some other explanation.
chriskanan 7 days ago
It doesn't allow you to print or to save the article.
robotresearcher 7 days ago
IanCal 6 days ago
http://www.nature.com/news/open-access-the-true-cost-of-scie...
joelthelion 6 days ago
The work that counts, i.e. the research and the peer review, is free and not compensated at all by the publisher.
IanCal 6 days ago
> The work that counts, i.e. the research and the peer review, is free and not compensated at all by the publisher.

The peer review is done for free, yes. I don't think I'd class the research as free, though, unless you mean free to the publisher; the scientists are still generally paid. Again, I'm not arguing for paid journals, just pointing out that they do have costs to run.

[0] http://lib.hzau.edu.cn/xxfw/SCIzx/Document/249/Image/2010101...
paulsutter 6 days ago
A few quotes:

"This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion"

"The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast ‘intuitive’ inference that underpins effortless commonsense reasoning."

"Problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other ‘shallow’ classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category.... The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning."

"Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance"
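To make the pooling point in that last quote concrete, here's a tiny sketch (my own toy numpy example, not code from the paper) of how max-pooling keeps the representation almost unchanged when a feature shifts slightly in position:

    # Toy illustration of the pooling point in the last quote: max-pooling
    # changes very little when a feature shifts slightly in position.
    import numpy as np

    def max_pool_1d(x, width=4):
        # Non-overlapping max pooling over windows of `width`.
        return x.reshape(-1, width).max(axis=1)

    a = np.zeros(16)
    a[5] = 1.0            # a "feature map" with one strong activation
    b = np.roll(a, 1)     # the same activation shifted by one position

    print(max_pool_1d(a))  # [0. 1. 0. 0.]
    print(max_pool_1d(b))  # [0. 1. 0. 0.]  -- identical despite the shift

Stack a few convolution-plus-pooling stages and small shifts at the pixel level barely change the top-level representation, which is the invariance the quote describes.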
deepnet 6 days ago
http://colah.github.io/ has very visually insightful posts on the nature of neural nets, convnets, deep nets...
itistoday2 6 days ago
https://en.wikipedia.org/wiki/Hierarchical_temporal_memory#D...
paulsutter 6 days ago
But actually, RNNs are great for recognizing and predicting temporal sequences (as we saw in the Karpathy post), RNNs use a sparse representation, and RNNs can be extended with hierarchical memory [1].

The big difference is that the neural network crowd are getting some spectacular results, and Numenta, well, maybe they'll show more progress in the future. Jeff Hawkins is super smart and a good guy, and he might get more done if he acknowledged the commonalities in the approaches rather than having to invent it all separately at Numenta. I really don't mean to be critical; Jeff inspired my own interest in machine intelligence.

[1] page 442: "Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine in which the network is augmented by a ‘tape-like’ memory that the RNN can choose to read from or write to, and memory networks, in which a regular network is augmented by a kind of associative memory. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions."
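For concreteness, here's a minimal numpy sketch of the "associative memory" read described in that excerpt: a query attends over memory slots and reads back a weighted mix. This is my own toy example with made-up dimensions, not the actual Memory Network or Neural Turing Machine code:

    # Toy numpy sketch of a content-based ("associative") memory read:
    # a query vector attends over stored memory slots and reads back a
    # weighted combination of them. Dimensions and slot count are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    d, num_slots = 8, 5

    memory = rng.normal(size=(num_slots, d))        # stored "facts" / story sentences
    query = memory[2] + 0.1 * rng.normal(size=d)    # controller state, close to slot 2

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    weights = softmax(memory @ query)   # similarity of the query to each slot
    read_vector = weights @ memory      # soft, differentiable read from memory

    print("attention weights:", np.round(weights, 2))
    print("most-attended slot:", int(weights.argmax()))  # typically slot 2

Because the read is a soft weighted sum rather than a hard lookup, the whole thing stays differentiable and can be trained end-to-end with the rest of the network.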
bra-ket 6 days ago
But there is some cross-pollination. See the recent projects by Stan Franklin's lab on sparse distributed memory and composite representations; they're a step towards integration with deep learning: http://ccrg.cs.memphis.edu/assets/papers/theses-dissertation...

On the other hand, check out the work by Volodymyr Mnih from DeepMind (https://www.cs.toronto.edu/~vmnih/); reinforcement learning with "visual attention" is a step towards the consciousness models of the HTM/SDM/LIDA camp.
paulsutter 6 days ago
Yes, reinforcement learning is the path to general intelligence, and the deep learning community is showing impressive progress on that front as well. The DeepMind demo [1] and the recent robotics work at Berkeley [2] are good examples.

Thanks for the link to Stan Franklin's work. I'm glad to hear there is work to integrate the two approaches.

[1] https://www.youtube.com/watch?v=EfGD2qveGdQ
[2] http://newscenter.berkeley.edu/2015/05/21/deep-learning-robo...
dwf 6 days ago
Even if they had invented some believably interesting task and done fair comparisons with other methods and shown that HTM succeeds where others fail, it would be considered worth a look by the wider machine learning community.
Teodolfo 6 days ago
Edit: To clarify, research papers generally cite other peer-reviewed research papers in similar venues preferentially. ML papers should mostly be citing ML papers in high-quality, peer-reviewed venues. HTM doesn't have papers like this to cite.