
/tech/ - Technology


File: c2f8e61ac374082⋯.png (647.52 KB, 667x670, seriously?.png)


 No.1004763>>1004771 >>1005874 >>1005878 >>1008907 >>1008915

>start reading a scientific paper on a neural network that can crack captchas

<"Hmm, always wanted to learn more about neural networks, and this project might give me enough of a push to do so"

<<"SEEMS EASY ENOUGH"

>make a dataset by modifying the open-source code of some old (but still widely used) captcha

>a little fidgety at the start, but starting to get the hang of things

>start setting hyperparameters

>building the layers of said neural net

>finally get to the training phase

<"Error: You can't combine Convolutional and LSTM networks, you dummy"

am I a brainlet, or have people started lying in most NN science papers because nobody will call them out and test their claims?

 No.1004771

>>1004763 (OP)

<rants about paper

<does not link to it


 No.1004790

Brainlet.


 No.1004799

>am I a brainlet, or have people started lying in most NN science papers because nobody will call them out and test their claims?

Most papers are never replicated, but people do get called out on it every once in a while. Setting aside the most blatant bullshit papers at fake conferences, nobody would lie by claiming something impossible works; that kind of thing is obvious to anyone in the field, and it would be caught during peer review.


 No.1004910

They intentionally make the results in NN papers difficult to reproduce, sometimes outright lying in the paper to throw you off. You're not supposed to get it for free.


 No.1005874>>1005922

>>1004763 (OP)

Shouldn't they post the source code for the programs they wrote during the research?


 No.1005878>>1005925 >>1005928

>>1004763 (OP)

I'm retarded and I have a question about neural networks and machine learning. Suppose I want to make a piece of software that recognizes speech and transcribes it to text. Will I have to distribute the sample database along with it?


 No.1005894>>1005927

<"Error: You can't combine Convolutional and LSTM networks, you dummy"

What error, exactly? It might be that your ML library just doesn't know how to unroll the RNN in combination with another network.


 No.1005922

>>1005874

They did link a GitHub repo, but it just says "not available, check back soon".

Horseshit tbh


 No.1005925>>1006025

>>1005878

Nah, you just have to train the NN and use it as is.

Although there are NNs that are efficient at this, they're pretty new, and they're money-making machines that people aren't willing to share, compared to the already studied and tested Markov-chain methods.

So look up speech recognition using Markov chains tbh.
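To make the Markov-chain suggestion above concrete, here's a toy sketch (pure stdlib Python, not from any paper): a first-order Markov chain over hand-labeled "phoneme" tokens, showing only the transition-probability bookkeeping. Real speech recognizers use hidden Markov models over acoustic features, which is a lot more machinery than this.

```python
from collections import defaultdict

def train_chain(sequences):
    """Count token-to-token transitions, then normalize to probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def sequence_prob(chain, seq):
    """Probability of a token sequence under the chain (0 for unseen pairs)."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= chain.get(a, {}).get(b, 0.0)
    return p

# toy "pronunciations" of a few made-up words
data = [["h", "e", "l", "o"], ["h", "e", "l", "p"], ["y", "e", "l", "o"]]
chain = train_chain(data)

print(sequence_prob(chain, ["h", "e", "l", "o"]))
# → 0.666…  (h→e and e→l always occur; l→o occurs 2 times out of 3)
```

In an actual recognizer you'd score candidate transcriptions with log-probabilities (to avoid underflow) and the "tokens" would come from an acoustic front end, but the counting-and-normalizing step is the same idea.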


 No.1005927

>>1005894

I used MATLAB tbh, so that might be it


 No.1005928>>1006025

>>1005878

To be honest, most services that use NNs today don't distribute the models; they just make API calls to their own servers.


 No.1005933

>training neural nets with ai enabled whatchamajiggers

pajeet tier 'next generation computer programming' for brainlets


 No.1006025

>>1005925

>>1005928

That's why I was wondering. Is it possible to make this kind of stuff work offline?


 No.1008899>>1008900 >>1008958

File: 15e437a3d305741⋯.png (527.72 KB, 435x667, 1503617769922.png)


 No.1008900

>>1008899

didn't you know


 No.1008907

>>1004763 (OP)

What's your net's architecture?

You can try some sort of embedding, i.e. feeding the CNN's output to the RNN as input, or connecting multiple RNNs in parallel and convolving their inputs and/or outputs.
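The first option (conv features as the LSTM's input sequence) is really just a shape-matching exercise, which is why "you can't combine them" is a library quirk, not a law. A minimal NumPy sketch, with made-up sizes and untrained random weights, just to show the shapes line up:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution + ReLU. x: (length, ch); kernels: (k, ch, out_ch)."""
    k, _, out_ch = kernels.shape
    out = np.zeros((x.shape[0] - k + 1, out_ch))
    for t in range(out.shape[0]):
        out[t] = np.tensordot(x[t:t + k], kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0)

def lstm_step(x_t, h, c, W, U, b):
    """One standard LSTM cell step: gates from input x_t and previous state h."""
    z = W @ x_t + U @ h + b
    n = h.size
    i, f, o = (1 / (1 + np.exp(-z[j * n:(j + 1) * n])) for j in range(3))
    g = np.tanh(z[3 * n:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# fake "captcha columns": 20 time steps of 8-channel features
x = rng.normal(size=(20, 8))
feats = conv1d(x, rng.normal(size=(3, 8, 16)) * 0.1)   # conv output: (18, 16)

hidden = 32
W = rng.normal(size=(4 * hidden, 16)) * 0.1
U = rng.normal(size=(4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

h, c = np.zeros(hidden), np.zeros(hidden)
for t in range(feats.shape[0]):        # the LSTM consumes the conv features
    h, c = lstm_step(feats[t], h, c, W, U, b)

print(h.shape)  # (32,)
```

Any framework that lets you reshape the conv output into a (time, features) sequence can do the same wiring; if MATLAB's toolbox refuses, that's a limitation of how it stitches layers together, not of the architecture.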


 No.1008915

>>1004763 (OP)

>You can't combine Convolutional and LSTM networks

You can, but your faggot library doesn't know how to do it.


 No.1008916

Pretty sure I've seen at least the abstract of the paper, but I'm not going to help because OP didn't link it.


 No.1008923>>1008959 >>1009043

Most of the NN bullshit is just a hype train.

Remember that robot that wants to kill all humanity? It was meant as a joke, purely for attention. The guy was bullshitting, and it would "violate the laws of robotics" so he "should be jailed", even though it's just entertainment.

NNs are a hype bandwagon that only works if you have the likes of IBM and Intel supplying all the computing power you need, and in the end it's really just for show; you're only doing it to advertise those companies and fuel more hype, until you've made the geek version of the cheap gaming community.

AI tree diagrams are the future.

Also, you should be doing Tesseract stuff.


 No.1008958

>>1008899

Is it possible to make a DNN for detecting Jews?


 No.1008959

>>1008923

The future is combining the matrix-multiplication power of learned weights with a symbolic representation created by humans:

compiling code into an RNN, training the RNN further, and then decompiling it back into a human-readable representation.


 No.1009043

>>1008923

>violate the law of robotics

>should be jailed

????????????


 No.1009056

It's not magic.

Modern artificial intelligence is just simple subtraction, the same way that computers are just a series of on/off switches.




23 replies | 1 image