
/cyber/ - Cyberpunk & Science Fiction

A board dedicated to all things cyberpunk (and all other futuristic science fiction)
“Your existence is a momentary lapse of reason.”

File: abd06bed605b5cd⋯.jpg (18.5 KB,490x490,1:1,cyber_mind.jpg)

 No.57858

It is my belief that the current state of A.I. research and development is flawed. A.I. research, which currently consists almost solely of neural networks, seeks to replicate some function of a human as closely as possible to the real thing. One popular example is computer vision, where neural networks are trained to recognize various objects; these are used in self-driving systems to let the car recognize stop signs, pedestrians, etc. Almost all of these neural networks share the same basic properties: they are given a set of inputs that activates the network, and they deliver a set of outputs that marks the end of the network. In this way, modern neural networks are almost exactly like conventional functions in programming languages.
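The function analogy can be made concrete: a trained feedforward net, at inference time, is just a pure mapping from inputs to outputs. A minimal sketch in Python; the layer sizes, weights, and biases below are invented purely for illustration:

```python
def relu(x):
    """Standard rectifier nonlinearity."""
    return max(0.0, x)

def tiny_net(inputs, weights, biases):
    """A fixed feedforward net is a pure function: same input, same output, no state."""
    activations = inputs
    for layer_w, layer_b in zip(weights, biases):
        activations = [
            relu(sum(w * a for w, a in zip(neuron_w, activations)) + b)
            for neuron_w, b in zip(layer_w, layer_b)
        ]
    return activations

# Hand-picked illustrative parameters: 2 inputs -> 2 hidden neurons -> 1 output.
W = [
    [[0.5, -0.3], [0.8, 0.1]],  # hidden layer weights
    [[1.0, 1.0]],               # output layer weights
]
B = [[0.0, 0.1], [0.2]]

out = tiny_net([1.0, 2.0], W, B)
```

Calling it twice with the same input always yields the same output, which is exactly the function-like property the post describes.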

The current theory in A.I. research is that the continual development of these function-like neural networks will eventually lead to true intelligence. If a computer gets good enough at recognizing images, it must at some point develop understanding of those images, right? I posit that this way of thinking is wrong. Functions can be compared to logic circuits, and in much the same way, the continued development of a single logic circuit will never lead to a computer. One cannot take a simple addition circuit and make it add so well that it becomes a computer; or, if one could, it would take an obscene amount of time to accomplish. For A.I. research to reach the next stage, we need to stop thinking of neural networks as single functions and think more about how a combination of them can form a computer.

There are a few improvements that I think could start this change in thought. First, a neurological computer must be based on a self-feedback loop: input and output must be indistinguishable. A brain works constantly, 24 hours a day, 365 days a year; there is no stopping point to its calculations. It follows that the feedback mechanism must be coded into the loop and work at runtime. It must change as it runs and evolve as it exists.
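The self-feedback idea can be pictured as a loop whose output at one tick becomes part of its input at the next, with the weight adjusted while it runs. A toy Python sketch; the single-neuron loop and the Hebbian-flavored update rule are invented for illustration, not a real learning algorithm:

```python
import math

def step(state, weight, external_input):
    """One tick: the new output is computed from the previous output plus the world."""
    return math.tanh(weight * state + external_input)

def run(ticks, external_inputs, weight=0.9, rate=0.01):
    """There is no separate 'input' and 'output' phase: output is fed straight back in."""
    state = 0.0
    for t in range(ticks):
        new_state = step(state, weight, external_inputs[t])
        # Runtime plasticity: the feedback mechanism is part of the loop itself,
        # nudging the weight while the system runs rather than in a training phase.
        weight += rate * new_state * state
        state = new_state
    return state, weight

final_state, final_weight = run(5, [1.0, 0.5, 0.0, -0.5, 0.2])
```

The point of the sketch is structural: there is no terminal "output layer" where computation stops, and the update rule operates at runtime rather than in an offline training pass.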

Second, individual neural networks must move away from replicating external biological functions and focus instead on learning and evolving mechanisms. Theoretically, if a proper neurological computer is made, it should be able to create its own neural nets for vision, language, even sound. The most important thing to replicate is versatile learning. A human can lose a limb, or their sight, or their hearing, and the brain will adapt. Brains are used by all animals, and yet a spider is very different from a human. To create A.G.I., versatility must be the constant; I/O can be learned later.

Third, two-dimensional layers will no longer work as the basis for networks. Networks must move into the third dimension, and neurons may even have to be used within multiple networks at the same time. This also means parallel processing might be the key to this new kind of network, so it might be very helpful to use some sort of functional programming language to develop it.
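A rough Python sketch of the sharing idea: one neuron's activation is consumed by two otherwise separate networks, and because the pieces are pure functions, they can be evaluated in parallel without interfering with each other. All names and weights here are illustrative, not a real architecture:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def shared_neuron(x):
    # A single neuron whose output is consumed by more than one network.
    return math.tanh(0.7 * x)

def vision_net(x, shared):
    # Stand-in for a downstream "vision" network that reads the shared neuron.
    return math.tanh(0.4 * x + 0.6 * shared)

def sound_net(x, shared):
    # Stand-in for a downstream "sound" network reading the same shared neuron.
    return math.tanh(-0.2 * x + 0.9 * shared)

def evaluate(x):
    s = shared_neuron(x)
    # Pure functions with no hidden state can safely run side by side.
    with ThreadPoolExecutor() as pool:
        v = pool.submit(vision_net, x, s)
        a = pool.submit(sound_net, x, s)
        return v.result(), a.result()

vision_out, sound_out = evaluate(1.0)
```

This is also why a functional style helps: with no mutable shared state, membership of one neuron in several networks is just function composition, and parallel evaluation is safe by construction.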

In conclusion, A.I. R&D needs a paradigm shift. This is not to say that A.I. as it currently exists is bad; any task that doesn't need intelligence shouldn't use it. But if we want to achieve true A.I., we must evolve too.

____________________________
Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

 No.57901

>>57858

Bump. I've had almost identical thoughts myself. Glad to see I'm not the only one.


 No.57902

YouTube embed.

I agree with a lot of what you've said, but there's still a long way to go with the traditional approach


 No.57935

>>57858

>First, a neurological computer must be based on a self feedback loop

yes

>Second

it must recognize that there is an issue and pick from a given set of logical elements to assemble a new tool-path toward a goal similar to what was possible before the issue appeared, so an extra layer of awareness is required. It must be built to test many combinations at very high speed, compare their efficiency ratios, progressively filter out the useless combinations to keep only a few, and then build a hierarchy of preferable combinations.

>parallel processing

there is likely something like that, if only through quantum computing and the sheer multiplicity of combinations and trials. There is also the advantage of some form of multi-threading for global supervision of processes advancing faster or slower and needing to exchange time-output with each other; organization quickly becomes necessary, and a proper chain of command between regulators soon becomes crucial.

>shift

it must contain a new level of self-refactoring too, where form and efficiency are combined
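The generate / test / filter / rank loop described above can be sketched in Python; the "logical elements", the efficiency metric, and the cutoff are all made up for illustration:

```python
from itertools import permutations

# Hypothetical tool-path steps the system could recombine.
ELEMENTS = ["sense", "plan", "grasp", "push", "verify"]

def efficiency(path):
    """Invented stand-in metric: shorter paths that end with a check score higher."""
    score = 1.0 / len(path)
    if path[-1] == "verify":
        score += 0.5
    return score

def search(goal_length=3, keep=4):
    """Test many combinations, score them, and keep only a ranked few."""
    candidates = permutations(ELEMENTS, goal_length)
    scored = [(efficiency(p), p) for p in candidates]
    scored.sort(reverse=True)          # rank by efficiency ratio
    return scored[:keep]               # the hierarchy of preferable combinations

best = search()
```

In a real system the scoring would come from simulated or actual trials rather than a fixed formula, but the shape of the loop — enumerate, score, filter, rank — is the one the post describes.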


 No.58014

File: 156321bf74aa821⋯.png (Spoiler Image,348.84 KB,454x537,454:537,human_cyber_pub.png)

Conscious phenomena are just emergent phenomena with qualia.

Keep in mind the theory of embodied cognition.

In my opinion, the central question is just what we mean by "intelligence."


 No.58039

>>57858

Ok, I think this is where you're confused. You say:

>The current theory in A.I. research is that the continual development of the function like neural networks will eventually lead to true intelligence.

This is true. But then you miss a key step when you say:

>If a computer gets so good at recognizing images it must at some point develop understanding about those images, right?

This is wrong. Continual development of neural networks does not mean people expect a computer to get so good at recognizing things that it magically gains consciousness. It means we expect neural network RESEARCH to get so good (first, by trying to get networks to recognize images) that one day we learn how to train a super-advanced neural network to understand, well, reality, basically.


 No.58072

unfucking board


 No.58097

A baby doesn't need to look at a picture of a car ten gazillion times to get what a car is. I don't even know why the AI industry works the way it does these days.


 No.58109

>>58097

How many gazillion times does a baby need? do u have a citation?


 No.58115

Current AI research is very much based on utility. A neural network is trained for specific purposes, such as object recognition. General intelligence is not required for such mundane tasks. A more holistic attitude to AI, where an entire system is an intellect, is obviously a much more complex goal. Interesting post, nonetheless. I believe that organic chemistry will play a part in our future artificial intelligences, which will likely create ethical concerns.


 No.58139

Of course neural networks are not real intelligence. They could tweak themselves to emulate conscious behavior, but that's not the same as being conscious, and neural networks never actually will be; they are just algorithms, and they can't think at all.

We should focus on generating a real consciousness that learns and understands. That would be intelligence.



