
Work in Progress / Artificial Neural Net Engine (ANNE)

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 13th Jul 2007 08:34
Welcome to ANNE - the Artificial Neural Net Engine - WIP thread. The goal of ANNE is to provide the foundational functions for a learning and evolving neural net engine. The functions can then be used like any code in any project.

First, let me cover how an Artificial Neuron works. There are some basic components that make up each Neuron. The neuron learns and evolves by changing these components. Of course, we want to make these changes in a logical manner so as to speed up the process. But that is getting ahead. Take a look at the following diagram.



This is a Standard Neuron with the following components:
>Resistor
>Sigmoidal Curve
>"Capacitor" (holds the charge)
>>Threshold (point at which the charge is released)
>Dissipation
>Output Charge

The resistor reduces each incoming charge by a given factor. This way, a neuron can either allow in the full impact of a charge (division by 1) or reduce the charge. This is useful if there are multiple neurons providing input to a single neuron.

Let me take a step back here; when talking about a "charge", as far as an artificial neuron is concerned, this is simply a numeric value that is being manipulated, stored, or moved.

The sigmoidal curve helps to separate and standardize an incoming charge. The sigmoidal curve provides an inverted standard distribution between 0 and 1 with the low point at 0.5. This means most charges will either be converted to near 0 or near 1 and tend to stay away from the 0.5 mid-range. So each incoming charge will likely add either a lot of charge or very little charge to the neuron.
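The curve described above can be sketched in a few lines of Python (illustrative only; ANNE itself is written in DBPro, and the steepness value here is my own choice):

```python
import math

def squash(charge, steepness=10.0):
    """Steep logistic curve: charges below 0.5 map near 0, above 0.5 near 1."""
    return 1.0 / (1.0 + math.exp(-steepness * (charge - 0.5)))

# Charges are pushed away from the 0.5 mid-range:
print(round(squash(0.2), 3))  # close to 0
print(round(squash(0.8), 3))  # close to 1
```

The larger the steepness, the harder incoming charges get pushed toward 0 or 1.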

The Capacitor stores the charge. This simply stores the number for future use. The charge continues to build up until a set threshold is reached. Once a neuron releases a charge, the capacitor is emptied to start building up a new charge.

Dissipation is the rate at which the charge in the capacitor is reduced. This helps prevent every neuron from eventually firing at some point or another regardless of input.

Output charge is the value that the neuron puts out when it fires. When the charge in the neuron's capacitor reaches the threshold value, the charge value is sent to four other neurons, and the process starts over with those new neurons.

By manipulating the above neuron components, each neuron can provide a vast amount of data via its single output charge.
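Putting the components together, a single update step might look like the following Python sketch (a rough illustration of the description above, not ANNE's actual DBPro code; the default values are mine):

```python
import math

class Neuron:
    """Illustrative standard neuron: resistor, sigmoid, capacitor,
    threshold, dissipation, output charge."""
    def __init__(self, resistance=1.0, threshold=2.0, dissipation=0.05,
                 output_charge=1.0):
        self.resistance = resistance      # incoming charge divided by this
        self.threshold = threshold        # capacitor level that triggers firing
        self.dissipation = dissipation    # charge lost per update
        self.output_charge = output_charge
        self.capacitor = 0.0

    def receive(self, charge):
        charge /= self.resistance                                 # resistor
        charge = 1.0 / (1.0 + math.exp(-10.0 * (charge - 0.5)))   # sigmoid
        self.capacitor += charge                                  # store it

    def update(self):
        """Dissipate, then fire if the threshold has been reached."""
        self.capacitor = max(0.0, self.capacitor - self.dissipation)
        if self.capacitor >= self.threshold:
            self.capacitor = 0.0           # emptied once the charge is released
            return self.output_charge      # would be sent to downstream neurons
        return None

n = Neuron()
for _ in range(3):
    n.receive(0.9)   # three strong charges each add nearly 1 after the sigmoid
out = n.update()
print(out)           # the threshold is crossed, so the neuron fires
```

One strong charge alone would not cross the threshold of 2.0; it takes a few, which is the capacitor behaviour described above.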

So far, we have input and output of charge (or numerical values). But what good does that do us? Not much unless there is an action at the end of a neural "path". This is where a Terminal Neuron comes in. Terminal Neurons do not pass data on to other Neurons. Instead, Terminal Neurons activate functions. Take a look at the following picture:



As you can see, the Neuron is almost identical to a standard neuron. However, in place of the Output Charges, the Terminal Neuron activates a function. This can be anything we can write a function for. For example:



Pretty cool! Now we can code a Neural Network to control an object to move around. But, you are probably asking, how does the Neural Net learn or evolve?
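A Terminal Neuron can be sketched the same way as the standard one, except firing calls a function instead of passing charge on (again an illustrative Python sketch; the move_forward action here is a stand-in, not a real ANNE function):

```python
def move_forward():
    return "moved forward"

class TerminalNeuron:
    """Like a standard neuron, but in place of output charges,
    reaching the threshold activates an attached function."""
    def __init__(self, action, threshold=1.0):
        self.action = action
        self.threshold = threshold
        self.capacitor = 0.0

    def receive(self, charge):
        self.capacitor += charge
        if self.capacitor >= self.threshold:
            self.capacitor = 0.0
            return self.action()   # activate the function instead of firing on
        return None

t = TerminalNeuron(move_forward)
t.receive(0.4)            # below threshold: nothing happens
result = t.receive(0.7)   # capacitor reaches 1.1 >= 1.0, so the action fires
print(result)
```

Any function you can write could be plugged in as the action: movement, turning, shooting, and so on.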

Two things have to happen for the Neural Net to learn/evolve. First, change has to take place. This means the components of each neuron have to be altered. But random altering would take forever to run through all the possible permutations. We really don't want to wait a few billion years for this to happen. So we need the second part as well:

Scoring. Basically, this is how the neurons know if they are doing well or not. Think of it as positive and negative reinforcement. As a component does well, it solidifies the value and resists change. As a component does poorly, it demands change to a value that performs better.

In ANNE, scoring takes place on two levels. The first is on an ongoing basis for the high-level neuron structure. There are 10000 neuron structures available to choose from, and each of the 10000 structures is scored on a continuous basis. As neuron performance demands change, a neuron can pull a new structure from those available. Each structure keeps a score of how well it does. Eventually, only the structures that do well will be selected by neurons needing a new structure. This is the evolutionary process. The strong neuron structures come out ahead and eventually will replace most or all of the weaker neuron structures.
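One simple way to make well-scoring structures win out over time is a score-weighted pick, sketched below in Python (illustrative only; the post doesn't specify ANNE's exact selection rule, and the three-structure pool stands in for the 10000):

```python
import random

def pick_structure(scores, rng=random):
    """Score-weighted pick from the structure pool: structures that
    score well get chosen more often, so weak ones die out."""
    total = sum(scores.values())
    r = rng.uniform(0, total)
    for structure_id, score in scores.items():
        r -= score
        if r <= 0:
            return structure_id
    return structure_id  # fallback for floating-point rounding

pool = {1: 90.0, 2: 9.0, 3: 1.0}   # tiny stand-in for the 10000 structures
picks = [pick_structure(pool) for _ in range(1000)]
print(picks.count(1) > picks.count(3))  # the strong structure dominates
```

Structure 3 still gets picked occasionally, which keeps some variety in the pool.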

The second level is at the component level. Each component has a default value, which is temporarily stored as an "old value". This old value is scored for a period of time. Then a "new value" is generated, which is simply a nudge in either the + or - direction. The new value is scored for the same period of time. Then the scores are compared. If the new value scored better, then the new value replaces the old value. Otherwise, the old value stays in place.
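That compare-and-keep step can be sketched like so (illustrative Python; the nudge size is my choice, and the score function here is a toy stand-in for a real scoring period):

```python
import random

def train_component(value, score_fn, nudge=0.1, rng=random):
    """One training pass: score the old value, nudge it + or -,
    score the new value, keep whichever scored better."""
    old_score = score_fn(value)
    new_value = value + rng.choice([-nudge, +nudge])
    new_score = score_fn(new_value)
    return new_value if new_score > old_score else value

# Toy scoring: the best setting is 1.0, so repeated passes drift toward it.
score = lambda v: -abs(v - 1.0)
v = 0.0
for _ in range(50):
    v = train_component(v, score)
print(abs(v - 1.0) < 0.2)   # training has nudged the value toward 1.0
```

Each pass only ever keeps the better of two values, which is exactly the positive/negative reinforcement idea above.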

This process is called "training" and is done by "Instructor Neurons" (or INeurons). There are only a few Instructor Neurons at any given time because of the resources required in the training process. Some INeurons train for short periods of time, some train for medium periods of time, and others train for long periods of time. During the training period, the INeuron will only train a single neuron. Once training is complete, the INeuron moves on to another neuron for training. The process of training an entire network can take some time, so more INeurons are better. But the more INeurons used, the more resources are used as well. So a balance must be found depending on the network size and resources available.

The first phase of the project is to have a single working neural net and demonstrate its capabilities. The second phase will be to expand the functionality so that multiple neural nets can be implemented at the same time. This way several AI entities can take advantage of the neural net in the same program.

Below are some of the base functions. This is enough for the Neurons and simple net, but the INeurons and evolution functions are not implemented yet. It does not do anything yet, but does show the work in progress (and lots of remarks).




Open MMORPG: It's your game!
Raven
19
Years of Service
User Offline
Joined: 23rd Mar 2005
Location: Hertfordshire, England
Posted: 13th Jul 2007 10:07
That was a rather lengthy explanation.
You could've just said "This is some cool learning AI"

FROGGIE!
20
Years of Service
User Offline
Joined: 4th Oct 2003
Location: in front of my computer
Posted: 13th Jul 2007 14:54
Sounds interesting, hopefully you'll get a working demo/example up soon.
Zotoaster
19
Years of Service
User Offline
Joined: 20th Dec 2004
Location: Scotland
Posted: 13th Jul 2007 16:37
I was just talking to The Nerd about this yesterday, it's looking pretty cool.

"It's like floating a boat on a liquid that I don't know, but I'm quite happy to drink it if I'm thirsty enough" - Me being a good programmer but sucking at computers
wildbill
18
Years of Service
User Offline
Joined: 14th Apr 2006
Location:
Posted: 13th Jul 2007 16:44
Talk about coincidence, I was just reading about Neural Nets to control AI for resource gathering in an RTS.
GatorHex
19
Years of Service
User Offline
Joined: 5th Apr 2005
Location: Gunchester, UK
Posted: 13th Jul 2007 17:55 Edited at: 13th Jul 2007 17:56
You're making a rod for your own back if you don't use an OO language for this.

The best introduction to AI I've ever seen is the matchbox tic-tac-toe (noughts & crosses) example. You don't even need a computer

DinoHunter (still no nVidia compo voucher!), CPU/GPU Benchmark, DarkFish Encryption DLL, War MMOG (WIP), 3D Model Viewer
Jeff Miller
19
Years of Service
User Offline
Joined: 22nd Mar 2005
Location: New Jersey, USA
Posted: 13th Jul 2007 23:35
If you get it up and running it can be an awesome AI system for games. Good example: Jellyfish, a free backgammon game that runs on a neural net engine.
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 14th Jul 2007 05:38
Quote: "You could've just said "This is some cool learning AI""


"This is some cool learning AI. "

Although, I think if I had just said that with a smiley for my graphic, this thread would have gone up in flames.

Quote: "Sounds interesting, hopefully you'll get a working demo/example up soon."


I am planning to have a first test by the end of this weekend. It may not work - but it will be something to debug... I like debugging.

Quote: "I was just talking to The Nerd about this yesterday, it's looking pretty cool."


I expect it to be very cool. My concern is about speed. "Very cool" and "slow as a slug" are not two phrases that work together well. So once the testing starts, optimization begins using performance enhancing drugs

Quote: "You're making a rod for your own back if you don't use an OO language for this."


That "back"ground work has already been done. While it's not OO, the Criterion Coding System and Dynamic Function Engine I have written will do the job.

Quote: "If you get it up and running it can be an awesome AI system for games."


Useable for games is the goal. Either that or SkyNet... I haven't decided yet.


Open MMORPG: It's your game!
Raven
19
Years of Service
User Offline
Joined: 23rd Mar 2005
Location: Hertfordshire, England
Posted: 14th Jul 2007 12:49
Out of interest, have you researched "Fuzzy Logic" at all?
It's a way to mimic a simple random response while utilising a logical probability of choosing the best response.

It provides a good dynamic for underlying intelligence; when combined with a system for expanding the available functionality, it can work quite well for mimicking responses.

I have been working on a very limited version of that for an animation system I've been working on, where the character learns and responds using the limbs available to it. It is very cut-down though, because I didn't need the characters to be particularly intelligent, just to learn the best way to use a limb for a given situation. It allowed my IK Skeleton system to be quite adaptive: rather than having to animate anything, you just provide some form of input and bam! it learns how best to deal with it to get back to its "idle" state within the laws of motion it has.

Eventually I'll be expanding it though, so that I can use it for AI that can adapt to the player's style of play; but I will keep things like "common sense" as more of a randomisation.

At some point, if I ever have free time, it would be nice to create an "evolution" style program with underlying code similar to Spore, where the world evolves and the script only expands based on the size it can reach (i.e. the brain capacity of the animal). Then just leave it going for a few hours to see how they evolve using it.

See if they create tools, speech, etc., and how the next evolution adds limbs or such to help them with their nature.
More than anything else it seems quite an interesting topic, as you could see how animals could've potentially evolved ^_^

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 14th Jul 2007 18:54
I have done my research on Fuzzy Logic. And I have plans for F.L. within this engine, as well as for the AI external to this engine. With Fuzzy Logic, a set of parameters is established and the code is basically free to select anything that lies within those parameters. I have made a few "art" programs that draw using this type of system.

One of my more aggressive plans with this AI system is that the Terminal Neurons can call any function that has been written within the included code and scripted with the Dynamic Function Engine. The DFE creates a list of all the functions that have been scripted, so all functions could be used by any terminal neuron. If I set the list of functions as the parameters for the Fuzzy Logic, the Terminal Neurons could select from any of the available functions. With a little nudging from the evolution code and the instructor neurons, the terminal neurons can "learn" which functions to use.

The major challenge in this plan is setting Parameters for... um... well, Parameters - the function parameters, that is. The DFE is already capable of providing what types of parameters are required by a given function. The problem is setting parameters for the given values. For example, an integer variable parameter has 2^32 possible values. This is too wide a range for any useful fuzzy logic. And string variables are even worse: the combinations are, for practical purposes, infinite.

Of course, some sort of internal function parameter limiter could be set so that only logical input makes it through the function and everything else is rejected, but the neurons won't "know" this and would still try non-valid values. So the only real solution is to set up some external parameter parameters so that the terminal neurons only select parameter values that "make sense." While this is easy enough for any given function, it is difficult to do on a generic and global scale.

"But," he said with an evil grin whilst rubbing his hands together as if washing soil and dirt from them, "I do have a plan!"

=========

Both the evolution engine and the instructor neurons will use fuzzy logic for selecting values for the various neural components. At first, the evolution engine will select random "genetic sequences" from the 10000 possible choices. But once there is enough scoring, the evolution engine will only select from the "best" - though not only the single best one - and once in a while it will be allowed to select randomly. The same goes for the instructor neurons. The training routine is hard-coded, but the specifics will be "fuzzy" (selected from a range within a given set of parameters).
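The "mostly pick from the best, occasionally pick at random" rule reads like what is often called an epsilon-greedy selection; a Python sketch (the explore rate and the 90%-of-best cutoff are my assumptions, not values from the post):

```python
import random

def select_gene(scores, explore=0.1, rng=random):
    """Mostly pick from the best-scoring genes, but once in a while
    pick completely at random."""
    if rng.random() < explore:
        return rng.choice(list(scores))          # occasional random pick
    best = max(scores.values())
    top = [g for g, s in scores.items() if s >= best * 0.9]
    return rng.choice(top)                       # among the best, not just one

genes = {"a": 50.0, "b": 48.0, "c": 5.0}
picks = [select_gene(genes) for _ in range(1000)]
print(picks.count("c") < picks.count("a"))  # "c" only shows up via random picks
```

The occasional random pick keeps weak genes from disappearing before they ever get a fair scoring period.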

==============

I have thought about creating a "limbed" entity to apply this engine to - I think what you have described would be very very cool. But it is all the background coding for the object with limbs I am not looking forward to. The same goes for the "AI world"; a very cool concept.


Open MMORPG: It's your game!
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 16th Jul 2007 02:19
Here is the Alpha 0.1 version of ANNE. It compiles and has all the functions to do what it is supposed to do.

By itself, the engine doesn't do anything. But the functions can be used as if there is added functionality to DBPro. I have also included a lot of the free work that came out of the Open Source MMORPG project: a pretty decent library of functions.

Next - testing and debugging... Any ideas for a simple test?

I was thinking of a "smart cube" with 5 sensors, one for each face of the cube minus the bottom face. The cube would be "rewarded" for not touching anything else and would be "punished" for touching anything (besides the floor). Place the cube in a room with other "dumb" cubes and watch to see if the cube eventually finds a place to rest where it is not touching any of the other cubes. Moving the "dumb" cubes around randomly would test the "intelligence" of the smart cube.

The cube could activate one of 3 functions: Turn_Left(), Turn_Right(), and Move_Forward(). The sensors would be simple collision detection in order to 'stimulate' neurons.
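The reward/punish idea above can be written as a per-epoch score; a quick Python sketch (the epoch length matches the demo's 1000 frames, but the reward and penalty weights are stand-ins, not ANNE's actual constants):

```python
def epoch_score(collisions, epoch_length=1000, reward=1, penalty=1):
    """Per-epoch score for the smart cube: rewarded for each
    collision-free frame, punished for each colliding frame."""
    clear_frames = epoch_length - collisions
    return clear_frames * reward - collisions * penalty

print(epoch_score(0))     # a perfect epoch
print(epoch_score(250))   # collided for a quarter of the frames
```

A higher score means those neuron settings are more likely to be kept for the next epoch.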


Open MMORPG: It's your game!

Attachments

PowerSoft
19
Years of Service
User Offline
Joined: 10th Oct 2004
Location: United Kingdom
Posted: 25th Jul 2007 14:20
Niceeeeee....

The Innuendo's, 4 Piece Indie Rock Band
http://theinnuendos.tk:::http://myspace.com/theinnuendosrock
Visigoth
19
Years of Service
User Offline
Joined: 8th Jan 2005
Location: Bakersfield, California
Posted: 27th Jul 2007 04:53
wow, this is very cool stuff. How long do you think it will take before Anne becomes "self-aware"? Just kidding, I couldn't help it. The cube test is a good idea - a path-solving routine. I have a question though; maybe you already answered it and I missed it, but once the system learns, does it remember? I'm still reading and rereading these posts to understand it more, but is this something that could store what it has learned?
tha_rami
18
Years of Service
User Offline
Joined: 25th Mar 2006
Location: Netherlands
Posted: 27th Jul 2007 05:23
As long as we don't get Simone-like things. This sounds far too advanced for me, but really, the idea sounds good. The cube idea seems rather useless to me though: if the others move at random, how could it learn from them? If you'd give the cubes an AI and let ANNE find a way to outbeat it, now THAT would be a test.

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 27th Jul 2007 20:00
Tada! Here's the cubes test. I have not run it for very long to see how "smart" it can get, but everything seems to be working fine.

The red cube is the "intelligent" one. The rest are "dummy" cubes - mainly obstacles for the smart cube to learn to avoid. There are two numbers in the upper left corner of the screen. The first is the frame rate. The second is the AI "score" for the current Epoch. Epochs are 1000 frames (you can change this easily). Each Epoch, the AI updates its learning. The better the score, the more likely those neuron settings will be used going forward. The AI is "penalized" for every frame it is colliding with a dummy cube. The AI is "rewarded" for every frame of non-collision.

There are a few possibilities for the smart cube when the code starts:

1) A negative-firing neural-net that is triggering movement. Basically, neurons can, in rare cases, fire all by themselves, causing the smart cube to move around or spin randomly.

2) A "jumpy" cube will move back and forth a lot - either during collisions or continuously (see #1 above).

3) A smooth moving cube. These cubes do not move very much except when collided with. These have a tendency to move in a single direction.

4) A low-movement cube. These cubes start out preferring to stand still and move very little, even when collided with. Even though the neural net might be abuzz with activity, the terminating neurons that activate the movement commands may not be well linked, or may not be receiving enough "charge" from the net.

Regardless of which AI you start with, each one should learn and adapt to best "avoid" the dummy cubes.

Thanks for the comments.

Quote: "but once the system learns, does it remember?"


Not at this time. Nor can multiple AI's be included in the same code - only one smart object at a time at this point. However, both of these are planned updates for the future.

Quote: "If you'd give the cubes an AI and let ANNE find a way to outbeat it, now THAT would be a test."


Sounds like a good test. Maybe navigating a maze? Or a race? Hmmm...


Open MMORPG: It's your game!

Attachments

Jimpo
19
Years of Service
User Offline
Joined: 9th Apr 2005
Location:
Posted: 27th Jul 2007 20:15
Very impressive stuff. I'm looking forward to seeing how much progress you make with this.

The first time I ran it I got a cube that did nothing at all. The second time, I got a cube that thought it would be cool to viciously spin in circles. The third time, I finally got a cube that properly avoided the rest.
sp3ng
18
Years of Service
User Offline
Joined: 15th Jan 2006
Location:
Posted: 28th Jul 2007 03:51
I've recently read about ANNs, but have decided not to implement one in my game, as the way things are handled is too complicated for my liking (like input weights being added up to meet a threshold and, if they do, performing a function). I like to be able to control AI with more than just numbers; that's why I've decided to use rule-based AI with a learning implementation


Add Me
Manic
21
Years of Service
User Offline
Joined: 27th Aug 2002
Location: Completely off my face...
Posted: 28th Jul 2007 13:31
You know, I thought you were crazy to try this in DBPro when I first read your post, but you've made some great progress here.

I look forward to seeing two or more AIs react to each other

I don't have a sig, live with it.
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 29th Jul 2007 17:17
Current Status: still debugging. Something is causing a memory leak that makes the program crash hard (no error message except that the program unexpectedly terminated). It is not a consistent error, as memory leaks are wont to be, so it has been difficult to track down. I have narrowed down the likely suspects, so I hope to have this resolved soon. Look for a new update when I do: while I have been debugging, I have made many improvements that failed to fix the bug.


Quote: "Very impressive stuff. I'm looking forward to seeing how much progress you make with this."


Thanks - I am pretty excited about it myself. I've made some improvements. With those improvements, I ran a test all night. By the morning, the smart box was definitely scoring better. I tried to let it run further, but I had to go to work. While I was gone, my niece stopped the program so she could use the computer. Oh well.

Quote: "I got a cube that thought it would be cool to viciously spin in circles."

After a lot of debugging, I figured out that this was not due to a "negative-firing neural-net", but rather caused by a loop. I increased the neural net size to about 200 neurons and the intelligence immediately seemed to improve. One thing I noticed is movement that would dissipate over a short period of time. That's when I realized this was caused by a deteriorating loop charge within the neural net. And where there is a deteriorating loop charge, there can be a stable or increasing loop charge, which explains the spinners and movers.

This prompted a slight visual improvement. I added an activity meter to show how many neurons were firing each frame. What was interesting is that in all tests, less than 10% of the neurons would fire at any given time - leading me to believe that most of the neurons remain largely unused.

Quote: "I like to be able to control AI with more than just numbers; that's why I've decided to use rule-based AI with a learning implementation"

I wouldn't recommend a neural net AI for most entities in any game. It is overkill and uses a ton of resources that are probably better spent elsewhere. For a single AI entity that can challenge a player by learning and improving itself, ANN would be my choice. In an FPS game, ANN's would be the bosses, not the drones before the boss. ANN is also the ultimate fuzzy logic. This demo shows that with a short set-up, an ANN can start to figure out how to navigate around obstacles, even in a changing (nearly fluid) environment. A* Pathing can't do that. ANN's can also develop "personalities", like what Jimpo described.

What it comes down to is that there are some games in which an ANN is the best choice to challenge the player, and other games in which an ANN would not be used at all.

Quote: "You know, I thought you were crazy to try this in DBPro when I first read your post, but you've made some great progress here."

Ummm... thanks. I am not crazy, I AM a pencil sharpener.
Seriously... thanks for the comments. It could not be done in DBPro without some sort of help. My Dynamic Function Engine and some of the other DBA files included really make the job easy. When we back up and look at the big picture, ANN is a very big programming job. But I did not start out to program ANN. This is a compilation of a lot of other tools and engines. Basically, it is more than DBPro that is being used, but that "more" is written in DBPro.

Think of any project like trying to boil an ocean. It's a very big task and one that can dissuade almost anyone. By boiling one cup at a time, the smaller tasks are not so daunting. It may take as long or longer, but at least we can see and feel progress.


Open MMORPG: It's your game!
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 29th Jul 2007 19:37 Edited at: 30th Jul 2007 04:18
Two hours of straight debugging paid off. I found the problem from the bug forums here: http://forum.thegamecreators.com/?m=forum_view&t=111094&b=15. The work-around solved the problem.

I've also added some other enhancements. The AI "learns" better now, and will adapt much more than it did before. The AI will now perform more extreme changes early on, and only tweak settings in later epochs. I also tweaked some settings in hopes of seeing more "life-like" movement. The neural net size has been set to about the maximum that will not crash on my PC, but feel free to play with the code and settings.

I am quite happy with the results now; so this is now out of "alpha" and into "beta". So give it a run and let me know what happens.


Open MMORPG: It's your game!

Attachments

qwe
20
Years of Service
User Offline
Joined: 3rd Sep 2003
Location: place
Posted: 30th Jul 2007 03:09 Edited at: 30th Jul 2007 03:11
With alpha and beta I get "Failed to 'UnfoldFileDataConstants'"

I use bluegui

In normal DB I get "Constant name 'Factor' cannot share the name of a reserved word or command."
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 30th Jul 2007 04:17
There is probably a conflict with a plugin or DLL. Do a search in the code for "Factor" and replace it with something like "AI_Factor".

Here is an update to the beta. After a bit more debugging, I found that the "learning" part wasn't really learning very much. It's fixed now. I will update the upload two posts up instead of a new upload. It can still crash from some memory leak somewhere, but it is very rare now.


Open MMORPG: It's your game!
Jimmy
20
Years of Service
User Offline
Joined: 20th Aug 2003
Location: Back in the USA
Posted: 30th Jul 2007 08:26
"I'm learneding!"

Pretty cool, Rii! The anti-social behavior of that little red cube has inspired me to go out there and avoid people.

"Oh hey, nice website Jimmy, it's really nice and fancy." -- That C++ Nerd
Visit. Website. NOW!
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 30th Jul 2007 09:44
Quote: "The anti-social behavior of that little red cube has inspired me to go out there and avoid people."


If you are posting on this website, odds are... well... you probably already know.

Flip the scoring around to watch the red cube become social?


Open MMORPG: It's your game!
Mr Bigger
19
Years of Service
User Offline
Joined: 31st Jan 2005
Location: was here!
Posted: 30th Jul 2007 23:28
This is interesting and works fairly well.

By epoch 2000 it got about as good at avoiding cubes as it was going to get, dashing for clear areas, following traffic and hanging out in safe zones. It was actually trying to avoid all collisions... so I let it run.
By epoch 10000 it had given up avoiding every single collision and just sat there for the most part. It must have figured out it could just sit there and score just as well as running around.

After 10 hours and 11000 epochs, Anne was terminated.

Nice work man! Thanks for sharing. I'm gonna keep an eye on this one.



AMD 2600+/1GB DDR ram/GeForce 6600oc 256MB/W2KPro/DBPro 6.2
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 31st Jul 2007 02:23
Cool run. My most recent test did about the same thing. So, I sat down and identified a few logic opportunities to improve:

->The trainers constantly train with no rest. This means that even when ANNE is nearly "perfect" the trainers are still going at it full force. So I added a random delay that is modified by how many Epochs have passed. The more Epochs, the more "rest" the trainers get. Maybe getting tired in old age is a good thing?

---> Additionally, the constant training meant that if there are 100 instructor neurons, then 100 neurons are always being trained. There is never a class of 50, 20, 10, 5, or even 1, neuron. This overcrowding of the neuron classroom prevents the neurons from learning how they can best perform individually or in smaller groups. So, if 55 neurons perform poorly vs. the 45 that have improved performance, all the neurons fail the class. So the good training gets thrown out with the bad. Inversely, bad training would be kept with the good as well. I think this was the major contributor to the Sitting ANNE Syndrome around Epoch 10000.

-> The instructors did not care if a neuron was already in training. This meant that two or more instructors could be training a neuron at the same time. Like in real life, it is very difficult to learn anything valuable trying to take two overlapping classes at the same time! A long-term instructor's training would easily be skewed by a short-term instructor. A short-term instructor's training would be completely undone by the long-term instructor's training. To solve this problem, I have added a training flag to each neuron. Instructor Neurons check this flag before starting to train any neuron.

-> The one last opportunity I have not improved yet is that the entire gene pool (10,000 genes) is scored based on the entire net's performance. It has occurred to me now that this was a bad idea since the good genes would be scored poorly along with the bad genes. The fix will be to only score genes as they are being trained. If the instructor is comparing two different genes, it will score down the poor-performing gene and score up the better-performing gene. This will give an isolated view of how each gene compares to all the other genes (once enough Epochs have passed). Once I update this code, I will post the new and improved ANNE (Neo ANNE?).
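The pairwise gene-scoring fix in that last point can be sketched like this (my reading of the description above, in illustrative Python; the step size is an assumption):

```python
def score_gene_pair(scores, gene_a, perf_a, gene_b, perf_b, step=1.0):
    """When an instructor compares two genes, score up the better
    performer and score down the worse one, instead of scoring the
    whole pool on the net's overall performance."""
    if perf_a == perf_b:
        return  # a tie teaches us nothing about either gene
    winner, loser = (gene_a, gene_b) if perf_a > perf_b else (gene_b, gene_a)
    scores[winner] += step
    scores[loser] -= step

pool = {"g1": 0.0, "g2": 0.0}
score_gene_pair(pool, "g1", 12.0, "g2", 7.0)
print(pool)   # g1 scored up, g2 scored down
```

Over many comparisons, only genes that keep winning head-to-head climb the rankings, which is the isolated view described above.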


Open MMORPG: It's your game!
el zilcho
17
Years of Service
User Offline
Joined: 4th Dec 2006
Location:
Posted: 31st Jul 2007 02:50
DarkBasic is having problems calling functions from the other codes...

how do I fix this?
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 31st Jul 2007 10:18
I am not sure I understand the problem. What do you mean by "DarkBasic is having problems..."? Do you mean DarkBasic in general, or are you referring to the ANNEngine? What do you mean by "functions from other codes"? From your own DBPro functions or libraries, or from other languages like C++?

Seeing as I could not (and would not) answer questions regarding problems with DarkBasic or its lack of ability to read other languages, I will assume your question is regarding the ANNEngine and the ability to read your own functions or function libraries. We will see if the process of elimination works in this case.

The Dynamic Function Engine (DFE) is the library that allows the AI to activate functions dynamically using string variables. In order for the DFE to work, there needs to be a "script" .dba source file created that includes all the functions that can be called by the DFE. In the ANNEngine, this file is the "ANNE Script.dba" file. There are only two functions included in the script source file; _Activate_Function() and _Create_XList(). Both contain all the functions that can be used by the DFE. If you want to add your own function, you can do it manually, or you can use the Criterion Coding System (CCS), which is currently still being Beta tested (the ANNEngine is part of that testing).

The CCS contains a scripting utility, which will read in all the functions from multiple files (that you select) and output the source file similar to the "ANNE Script.dba" file in this project.
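The idea of activating functions by string name, as the DFE does, can be illustrated with a small dispatch table (a Python sketch of the concept only; the real DFE is written in DBPro and generated from .dba script files, and these action functions are stand-ins):

```python
def turn_left():
    return "turned left"

def turn_right():
    return "turned right"

# A registry of callable functions, keyed by name, standing in for the
# script source file that _Activate_Function() is built from.
FUNCTIONS = {"Turn_Left": turn_left, "Turn_Right": turn_right}

def activate_function(name):
    """Look the function up by its string name and call it."""
    if name not in FUNCTIONS:
        raise ValueError(f"unknown function: {name}")
    return FUNCTIONS[name]()

print(activate_function("Turn_Left"))
```

Any function added to the registry immediately becomes available to a terminal neuron, with no change to the dispatch code.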


Open MMORPG: It's your game!
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 3rd Aug 2007 21:12
Some updates:

I have done a lot of long-term ANNE testing to see what issues there are, which has been slow going. One of the issues is that, as described by Mr Bigger above, the AI eventually just decides to sit still and do nothing. That doesn't seem very "intelligent" (or maybe, as Mr Bigger stated, the AI can do as well or better by sitting still).

Here are my current thoughts on these issues, which I am trying to address:

The Epochs are too short to get an accurate "score" for how well the ANNE has done. This results in too much variability in scoring. In other words, a poorly performing network can accidentally score well, while a well performing network can accidentally score poorly. There are two possible solutions:
1) Increase the Epochs to terms long enough to minimize the variability.
2) Add in a buffer to ensure that performance exceeds a given threshold, so that any improvement is truly significant for the length of the Epochs.

In the meantime, I have added in a save for the "DNA". Next, I will be adding in a save for the Neurons. Once these are done, we can have several folks try out ANNE and post their results. It may also be possible for me to write a short compilation code that would "mate" two successful ANNEs together and produce an "offspring" that could be smarter than the "parents".


Open MMORPG: It's your game!
tha_rami
18
Years of Service
User Offline
Joined: 25th Mar 2006
Location: Netherlands
Posted: 3rd Aug 2007 23:48
It's that way because it's random. If you gave ANNE, for example, the assignment to reach the finish of a straight track while avoiding blocks that come in four lanes (a block in lane 1, after a second in lane 2, after a second in lane 3, after a second in lane 4, then lane 3, lane 2, lane 1, lane 2, lane 3, etc.), you'd actually be able to see if ANNE understands the whole concept. It would swing between two lanes if it were smart, or, alternatively, try to swing along with the pattern.

vorconan
17
Years of Service
User Offline
Joined: 4th Nov 2006
Location: Wales
Posted: 4th Aug 2007 00:40
Seems like a very complex engine you have going here, sounds awesome. I keep getting terminator flashbacks now.


RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 4th Aug 2007 07:59
@tha_rami: That is really one obstacle. The reason I like this test is because it is particularly challenging. However, by George, I think I've got it. I've run several tests in a row now where the results are a fairly smart AI cube that can navigate around quite well... something rather akin to sliding collision, much to my surprise.

The first run was a fast-moving cube that would turn right quite quickly until it could slide along the obstacle that was previously in front of it. Unfortunately, the cube did not seem to be able to turn left. There was either a very weak, or no, neural connection to the left-turn neurons. This caused the cube to turn into objects on its right. Despite this "handicap", the cube regularly scored quite high after figuring out that orbiting an object on its right was not a great strategy.

The second run produced a similar smart avoidance cube - one that moved slower, but was very adept at, again, something similar to sliding collision. This one also seemed to have a handicap in that the left side was not able to sense collisions. I believe the sensing neurons only put out a very weak charge, or were not very well connected to the neural net. Basically, if something approached from the left, the cube just let it keep going without noticing. Other than that, it did a great job.

Quote: "Seems like a very complex engine you have going here, sounds awesome. "

It is... it is. I am pretty excited about it. Especially as I see the AI get smarter quicker.

Quote: "I keep getting terminator flashbacks now."

That was SkyNet. This is ANNE. Totally different.


Open MMORPG: It's your game!
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 6th Aug 2007 19:13 Edited at: 6th Aug 2007 19:17
Here is the latest update of the ANNEngine. It does pretty well as far as learning goes, for the most part. Feel free to play around with the scoring or the Epochs (how much time to learn before an update). You can adjust the Epoch length by changing this line in the ANNE Main.dba source:

Execute_Epoch(5000)

The 5000 indicates 5000 frames before an update. In theory, the more frames, the better ANNE learns, but the progress is slower. I have added in a buffer range to help minimize random "improvements" when the score just happens to be higher due to natural variation. In other words, there has to be "significant" improvement before ANNE accepts any changes.
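The significance buffer described above amounts to something like the following. This is a Python sketch of the concept only; the buffer value is an arbitrary example, not ANNE's actual setting:

```python
def accept_if_significant(new_score, best_score, buffer=50.0):
    """Keep a change only if it beats the previous best by more than
    the buffer, so natural score variation between Epochs is not
    mistaken for genuine learning."""
    return new_score > best_score + buffer
```

A larger buffer (or a longer Epoch) rejects more lucky runs at the cost of slower acceptance of real improvements, which is the trade-off described in the post.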

I have also made some changes that should prevent the memory leak that was occurring. The problem is that ANNE would not crash every time, just once in a while. So there is no way to be sure without extensive testing. So... please test this.

One last request. If you get a particularly successful ANNE, please post the ANNE.dna file from the ANNE folder as well as a description of ANNE's performance. I would like to take a look at some of these and maybe try some dna combining. The dna file does not ensure ANNE will act the same, but does help ensure that other ANNEs will use successful dna. The neural net still contains a lot of other factors and the dna is only the foundation.


Open MMORPG: It's your game!

Attachments

tha_rami
18
Years of Service
User Offline
Joined: 25th Mar 2006
Location: Netherlands
Posted: 6th Aug 2007 21:22
I just wish you'd make an EXE... I don't have Pro. I could let it run for days if needed, lol.

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 6th Aug 2007 23:31
Maybe someone can upload the .exe. I am at work atm. I can upload this evening if no one else has.


Open MMORPG: It's your game!
vorconan
17
Years of Service
User Offline
Joined: 4th Nov 2006
Location: Wales
Posted: 7th Aug 2007 01:42
Quote: "That was SkyNet. This is ANNE. Totally different."


Not too different; the Terminator did learn a load of cool phrases. But Skynet is a multi-billion-dollar national defense organisation, so yeah, maybe a bit different.



RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 7th Aug 2007 03:11
ANNE is a little ways off from becoming as powerful as SkyNet. I figure that will come when I introduce MMO-ANNE; where lots of folks sign onto an ANNE AI Server and the server uses the resources of all the user's PCs to power several hundred (or thousand) of neurons each. If I can get a few thousand on at the same time, that would be a few billion neurons... that should do it. Yeah... that should do it.


Open MMORPG: It's your game!
vorconan
17
Years of Service
User Offline
Joined: 4th Nov 2006
Location: Wales
Posted: 7th Aug 2007 03:17
Lol, I have no doubt it has the potential, you seem to be the best member here at creating AI.



tha_rami
18
Years of Service
User Offline
Joined: 25th Mar 2006
Location: Netherlands
Posted: 7th Aug 2007 04:05
No .exe yet...

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 7th Aug 2007 06:38
Here is the exe file.


Open MMORPG: It's your game!

Attachments

tha_rami
18
Years of Service
User Offline
Joined: 25th Mar 2006
Location: Netherlands
Posted: 7th Aug 2007 09:21 Edited at: 7th Aug 2007 09:40
After one night (12 hrs) of running, ANNE now runs for the walls, wraps around to the opposite wall, and runs back to the first wall, indifferent to the white cubes, it seems. It has a tendency to head for the corners of the 'playfield'. I notice the walls deduct points too, so I wonder why she does this...

Mine must be mentally ill; it consistently had scores of -5200 from staying inside the walls... Or it is so advanced it has developed a rebellious attitude towards the dominant user and will start resisting things - eventually locking my laptop and using it to hack all human networks, shut down all computers, and install ANNE on them.

I'm using the EXE version.

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 7th Aug 2007 14:45
There are two learning patterns for ANNE. The first is learning which base "DNA" seeds work best and which ones don't. The second is individual neuron learning, which is done by tweaking the individual DNA settings within each Neuron, but not making any major changes. One thing ANNE needs to do is learn which "DNA" seeds are good and which ones are not. Think of "DNA" learning as long term evolution. For this reason, every 10th Epoch, the DNA scores are saved to the ANNE.dna file.

Eventually, ANNE needs to stabilize and stop learning so we can observe ANNE's "mature" performance. So, ANNE has a learning curve where it starts out learning a lot and eventually learning is reduced to a bare minimum. At that point, ANNE needs to "die" and start over.

When you stop ANNE and run it again, the DNA scores are loaded from the ANNE.dna file. As the new ANNE starts to learn, it will have a preference for the higher scoring DNA seeds: In your case, pretty much all the DNA that was not selected during the first run. On the next run, ANNE should do a little better; the next run, better, and so on.
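A "preference for the higher scoring DNA seeds" like this is typically implemented as score-weighted (roulette-wheel) selection. Here is a minimal Python sketch of that idea; it is illustrative only, not ANNE's actual code, and the weighting scheme is an assumption:

```python
import random

def pick_seed(dna_scores):
    """Pick a DNA seed index at random, biased toward higher scores.
    Scores are shifted so the worst seed still has a small nonzero
    chance of being picked, keeping some exploration."""
    lo = min(dna_scores)
    weights = [s - lo + 1.0 for s in dna_scores]
    return random.choices(range(len(dna_scores)), weights=weights, k=1)[0]
```

On each new run, seeds that scored well in earlier runs are drawn more often, which is why the reloaded ANNE should do a little better each time.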

The next phase is to have ANNE "die" and start over again automatically if ANNE continually receives poor scores; something like 10 Epochs in a row with scores in the bottom 20th percentile of the entire score range. So, if ANNE has a score range of -500 to +500, and receives 10 scores in a row of -300 or less, then ANNE would become manic-depressive, commit suicide, and start over.
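The automatic restart rule described above (10 poor Epochs in a row, each in the bottom 20th percentile of the score range) could look like this. A Python sketch using the example numbers from the post; the helper name is made up:

```python
def should_restart(recent_scores, score_min, score_max,
                   streak=10, bottom_pct=0.20):
    """True if the last `streak` Epoch scores all fall within the
    bottom `bottom_pct` slice of the known score range."""
    if len(recent_scores) < streak:
        return False
    cutoff = score_min + bottom_pct * (score_max - score_min)
    return all(s <= cutoff for s in recent_scores[-streak:])
```

With a score range of -500 to +500, the cutoff works out to -300, matching the example: ten consecutive scores of -300 or less trigger the restart.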

After that, I'll rewrite ANNE to optimize the memory usage by using memblocks instead of variable arrays - think of each ANNE being the size of a large image, and just as complex. This will allow several ANNEs to run simultaneously, just like you can have several images loaded at the same time. Then ANNEs can "mate" and produce smarter offspring by combining the results of the more successful ANNEs.
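"Mating" two ANNEs is classically done with crossover. Here is a minimal Python sketch of uniform crossover between two DNA sequences; it illustrates the concept only and assumes nothing about ANNE's actual memblock format:

```python
import random

def mate(dna_a, dna_b):
    """Uniform crossover: each gene of the offspring is taken from
    one of the two parents, chosen independently at random."""
    return [random.choice(pair) for pair in zip(dna_a, dna_b)]
```

Combined with the per-gene scores discussed earlier, a smarter variant could bias each choice toward the parent whose gene scored better, rather than picking uniformly.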


Open MMORPG: It's your game!
tha_rami
18
Years of Service
User Offline
Joined: 25th Mar 2006
Location: Netherlands
Posted: 7th Aug 2007 19:53
Well, my ANNE scored -5000, not -500. I'd say that's reason enough to become suicidally enraged.

dab
19
Years of Service
User Offline
Joined: 22nd Sep 2004
Location: Your Temp Folder!
Posted: 8th Aug 2007 02:12
I let this run for about 40 minutes.

Attachments

tha_rami
18
Years of Service
User Offline
Joined: 25th Mar 2006
Location: Netherlands
Posted: 8th Aug 2007 05:59
By the way RiiDii, I'm freaked since you said 'mating'. For some reason, I envision endless red cubes spinning around on my screen, lol.

Love the project, is really cool

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 8th Aug 2007 08:58 Edited at: 8th Aug 2007 08:58
Some major improvements. ANNE now evolves more over time to get smarter. Here are a couple of videos showing two ANNEs that were taught in the last 24 hours or so. Both consistently scored well over 500.

This first one is a mover and does something similar to sliding collision.
http://s180.photobucket.com/albums/x58/riidii/?action=view&current=Anne.flv

The second one is a spinner (tough to see, but trust me) and it pushes itself away out of tight spots. This is not the best spinner I have seen, but it still did quite well.
http://s180.photobucket.com/albums/x58/riidii/?action=view&current=Anne2.flv

PS. Sorry for the crappy video quality. For lack of preparation or any better ideas, I grabbed my camera phone and started recording.


Open MMORPG: It's your game!
dab
19
Years of Service
User Offline
Joined: 22nd Sep 2004
Location: Your Temp Folder!
Posted: 8th Aug 2007 18:53
Wow, that first one was amazing. I had one that sort of did that, but my computer crashed (not from ANNE; my cousin's kid was typing randomly on the keyboard and somehow did it).
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 9th Aug 2007 18:27 Edited at: 9th Aug 2007 18:30
Quote: "Wow. that first was amazing."

It certainly does a great job. The second one I like because it figured out that by spinning, it could compensate for its limited sensing capabilities. Instead of only four directions, by spinning, it could sense in all directions.

Since this AI test was successful, I am putting ANNE to a slightly tougher challenge now. Instead of only four binary sensors (collision or not), I am giving ANNE 8 sensors that sense the range of objects within the reach of the sensors. I will post the code (and .exe) tonight.
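Directional range sensing like this is often done by bucketing nearby objects into angular sectors around the agent. A Python sketch under those assumptions; the names, the 2D setup, and the linear falloff formula are all illustrative, not ANNE's actual implementation:

```python
import math

def sense_ranges(px, py, objects, sensor_range=10.0, n_sensors=8):
    """Return one reading per direction sector: 0.0 if nothing is in
    range, otherwise a value rising toward 1.0 as the nearest object
    in that sector gets closer."""
    readings = [0.0] * n_sensors
    sector_width = 2 * math.pi / n_sensors
    for ox, oy in objects:
        dx, dy = ox - px, oy - py
        dist = math.hypot(dx, dy)
        if 0 < dist <= sensor_range:
            angle = (math.atan2(dy, dx) + 2 * math.pi) % (2 * math.pi)
            sector = int(angle / sector_width)
            # keep the strongest (closest) signal per sector
            readings[sector] = max(readings[sector],
                                   1.0 - dist / sensor_range)
    return readings
```

Compared to four binary collision flags, these graded readings give the neural net advance warning and a rough distance, which is a richer input for the charge-based neurons to work with.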


Open MMORPG: It's your game!
Accoun
18
Years of Service
User Offline
Joined: 9th Jan 2006
Location: The other end of the galaxy...
Posted: 9th Aug 2007 19:01
Nice.
Run to the hills ANNE!

Make games, not war.

Dr Manette
18
Years of Service
User Offline
Joined: 17th Jan 2006
Location: BioFox Games hq
Posted: 10th Aug 2007 03:34
Later this year

"I'm now going to give anne the capability to kill. Every second it is not killing it loses points." *Post video of 24 hour training*

Very cool, RiiDii, I'm very excited as to where this could go.
