Discussing the simulation of the most primitive brain

viziionary

Hi, I'm actually not a doctor; I'm just what you'd call an 'enthusiast' of science, engineering, and so on. One of my interests is artificial intelligence, so I'm looking to discuss neurology.

I asked this question some time ago: http://biology.stackexchange.com/questions/27866/what-were-the-first-neural-systems-like/41403#41403 and I'd like to have a discussion on the subject of simulating the evolutionary process of the most primitive brain.

I've been experimenting, as a neurology and programming enthusiast, with artificial intelligence. Since we don't understand enough about the brain to model one as a program, my goal is to see how far I can get with a systematic attempt at evolving a working brain from nothing by simulating millions of years of evolution in the controlled environment of a computer program.

My strategy has been to use object-oriented programming to model an object (in other words, a collection of data and functions) that behaves as closely to a neuron as possible; let's call it a vNeuron. I then generate 10,000+ duplicates of this vNeuron and connect them randomly.
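To make that concrete, here's a rough Python sketch of the kind of object I mean (the class name, the leaky integrate-and-fire style behavior, and all the parameter values are my own simplifications, not claims about real neurons):

```python
import random

class VNeuron:
    """A crude, leaky integrate-and-fire style unit (my simplification)."""

    def __init__(self, threshold=1.0, decay=0.9):
        self.threshold = threshold   # potential needed to fire
        self.decay = decay           # leak applied on each quiet time step
        self.potential = 0.0
        self.outputs = []            # (target vNeuron, synaptic weight) pairs

    def connect(self, other, weight):
        self.outputs.append((other, weight))

    def receive(self, amount):
        self.potential += amount

    def step(self):
        """Fire if over threshold, otherwise leak a little."""
        if self.potential >= self.threshold:
            for target, weight in self.outputs:
                target.receive(weight)
            self.potential = 0.0     # reset after firing
            return True
        self.potential *= self.decay
        return False

# 10,000+ duplicates, wired together randomly.
neurons = [VNeuron() for _ in range(10_000)]
for n in neurons:
    for target in random.sample(neurons, 5):   # 5 random outgoing synapses each
        n.connect(target, random.uniform(-1.0, 1.0))
```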

The next step is to give them a task to accomplish. I generate tens of millions of these random configurations (virtual mini-brains, or vBrains), each of which dies if it can't accomplish the task; if it can, it essentially reproduces, acting as a new marker for future configurations. Once the first one accomplishes the task, its children are no longer random configurations but only slight mutations of it.
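In rough Python, the generation loop I have in mind looks something like this, where mutate and fitness are placeholders I still have to design:

```python
import copy

def evolve(seed_brain, mutate, fitness, population=1000, generations=100):
    """Generic mutate-and-select loop (a sketch, nothing tuned).

    mutate(brain)  -> a new, slightly changed copy of brain
    fitness(brain) -> a score, higher is better
    """
    best = seed_brain
    best_score = fitness(best)
    for _ in range(generations):
        # Children are small mutations of the current best configuration.
        children = [mutate(copy.deepcopy(best)) for _ in range(population)]
        for child in children:
            score = fitness(child)
            if score > best_score:   # it "survives" and becomes the new marker
                best, best_score = child, score
    return best, best_score
```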

So I have 2 problems:

1. I'm not sure what to use as this "task" to be accomplished. I realize it should involve sensory input and a specific pattern of output necessary to survive the challenge, but I'm not precisely sure how that should be arranged: how these virtual neurons should interface with the input and output mechanisms, and what the task itself should be. I'm also not sure how to design a task that allows future progress to be measured as matches are found that can complete it. In other words, it would need to be completed with a level of success, rather than just pass or fail, to leave room for improvement in child generations. (I've put a sketch of what I mean by a graded score after this list.)

2. Throwing together tens of thousands of randomly configured vNeurons in the first phase seems like too much of a shot in the dark. It seems I would have a higher chance of some form of success if I started with lower numbers of neurons and simpler tasks. It's not as if a primitive organism suddenly mutated to have 10,000 neurons that just happened to be arranged in a way that produced successful output. I assume at first there were only a couple of very primitive neurons in the system, and from there 10 neurons, then 100, and so on. However, I'm not sure how to go about modeling such a small arrangement of neurons to accomplish a task.
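Here's the kind of graded task I was imagining for problem 1: a purely hypothetical "steer toward the light" challenge, where I assume the vBrain exposes a run(inputs) method returning two motor drives, and the score improves continuously with distance instead of being pass/fail:

```python
import math

def light_seeking_fitness(brain, steps=200):
    """Score a vBrain on a toy 'steer toward the light' task.

    Assumes brain.run((left_sensor, right_sensor)) returns two motor
    drives. The score is minus the final distance to the light, so a
    child can beat its parent by a small margin instead of facing a
    pass/fail cliff.
    """
    x, y, heading = 0.0, 0.0, 0.0
    light = (10.0, 10.0)
    for _ in range(steps):
        angle = math.atan2(light[1] - y, light[0] - x) - heading
        # Each "sensor" responds more strongly on the side facing the light.
        left_sensor = max(0.0, math.sin(angle))
        right_sensor = max(0.0, -math.sin(angle))
        left_drive, right_drive = brain.run((left_sensor, right_sensor))
        heading += 0.1 * (right_drive - left_drive)   # differential steering
        x += 0.1 * math.cos(heading)
        y += 0.1 * math.sin(heading)
    return -math.hypot(light[0] - x, light[1] - y)    # graded, not pass/fail
```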

So if anyone has thoughts or ideas about this, feel free to reply. Thanks.
 
Look up artificial neural networks and backpropagation. The early breakthrough applications were handwriting and facial recognition, and it's probably kind of sort of similar to how the brain handles/classifies sensory input. If you've built an artificial neuron, you can use it as the nonlinearity at each node/neuron/processing element (as opposed to a sigmoid or tanh), provided it is a smooth function. You can build a hierarchical feedforward network capable of doing some relatively sophisticated tasks on a regular laptop, but if you want to build a huge network with complicated architecture, it's going to get computationally expensive pretty quickly. The field of neural networks is pretty wide, and you can do things to try to approximate specific biological systems (use a convolutional net, use a neural model as a nonlinearity, etc.), but be wary: just because something is called a computational neuron or neural network does not necessarily mean it is a reasonable representation of a biological system.

Computational neuroscience is a popular field, and it may be helpful for you to see what others have done. Izhikevich has some landmark papers, and he focused on network-level simulations. Dayan and Abbott is the classic intro textbook. Have fun.
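To make the idea concrete, here's a minimal numpy sketch of a one-hidden-layer feedforward net trained by backpropagation on XOR; act/act_prime are whatever smooth nonlinearity (and its derivative) you want to plug in, e.g. your artificial neuron (biases omitted to keep the sketch short):

```python
import numpy as np

# Any smooth nonlinearity and its derivative can be plugged in here,
# e.g. a differentiable artificial-neuron model instead of tanh.
act = np.tanh
def act_prime(x):
    return 1.0 - np.tanh(x) ** 2

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # XOR inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
lr = 0.5

for _ in range(5000):
    h_in = X @ W1                          # forward pass
    h = act(h_in)
    out = h @ W2                           # linear output layer
    err = out - y                          # gradient of squared error at the output
    dh = (err @ W2.T) * act_prime(h_in)    # backpropagate through the nonlinearity
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ dh / len(X)

print(np.round(out, 2))   # should end up near [[0], [1], [1], [0]]
```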


Also, to be clear about problem #1: what you are describing is the general field of supervised machine learning. Neural nets are just one family of algorithms (see also linear regression, logistic regression, Bayesian learning, and support vector machines).
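All of these share the same supervised-learning shape: fit on labeled examples, then predict on new inputs. A quick scikit-learn toy to show the pattern (any of the models listed above slots in the same way):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Toy labeled data: inputs X with known answers y (the AND function).
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]

for model in (LogisticRegression(), SVC()):
    model.fit(X, y)   # learn from labeled examples
    print(type(model).__name__, model.predict([[1, 1], [0, 1]]))
```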
 

I realize an actual, non-optimized neural network of billions of neurons would be incredibly expensive, but I'm not looking for such a solution in this decade. I think most people pursuing artificial intelligence at present are utilizing very optimized, "artificial" methods, but I'm interested in a more natural, "true brain" approach, where one uses the process of evolution to evolve a brain that gains happiness from helping a specific person. I think this approach has been dismissed in the AI field over optimization issues, but I believe it would lead to a more powerful and diverse AI if one could succeed during the evolution phase of experimentation. This approach, while it would require drastically slowed simulation speed on current computers, should be viable for real-time use in a decade, when computing power is exponentially higher.

So if you know of books or published papers that focus on this approach, or something similar, please share them. It's not that I think I'm qualified to design AI better than the field's leading experts; I just think evolution is more qualified than any human engineer. Evolution has hundreds of millions of years of experience under its belt, and it's released some successful products.
 
Most (if not all) AI applications code some form of evolution. That is the "learning" aspect of machine learning. When you code any such algorithm, the point is that you don't know what the solution will be ahead of time. You give the machine some input and sample output, and tell the computer to go figure it out. "Happiness" is the error between the algorithm's output and the sample output, and the program iterates (evolves) to minimize error (and maximize the happiness of the coder, because it would be really nice if my program converged to a solution that does what I want with reasonable accuracy). If you have a ton of data and computational power, you can be quite liberal in terms of the potential solution space.
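As a toy illustration of that error-minimization loop, here's plain Python fitting a single parameter w so that w * x matches some made-up samples of y ≈ 3x by gradient descent:

```python
# Made-up samples the "machine" must explain: y is roughly 3 * x.
data = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2), (4.0, 11.8)]

w = 0.0      # initial guess
lr = 0.01    # learning rate
for _ in range(1000):
    # "Unhappiness": mean squared error between w * x and the samples.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad   # step downhill on the error surface
print(w)             # converges near 3.0
```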

Any machine learning textbook or course will cover all this and more. For an intro, try Andrew Ng's course on Coursera for practical implementation (or his YouTube lectures for something a bit more mathematically rigorous).
 

Ok I see. Thanks! I'll get reading.
 
NP, it's an exciting field, and after you learn the basic tools you'll be able to do a lot of fun things. One final note: you mentioned OOP in your first post. For machine learning and computational neuroscience, MATLAB/Octave (Octave being the free but slightly less user-friendly version of MATLAB) are your friends, and compared to most environments/languages they're not that hard to learn. Python is also becoming more popular. The Coursera course I referenced actually gives a very quick review of useful MATLAB functions.
 