Beautification #4

Status: Open. Wants to merge 2 commits into base: master.
15 changes: 7 additions & 8 deletions README.md
@@ -1,24 +1,24 @@
# BioPy

####Overview:
#### Overview:
----
BioPy is an in-progress collection of biologically inspired algorithms written in Python. Some of the algorithms are focused on artificial models of biological computation, such as Hopfield neural networks, while others are inherently more biological, such as the basic genetic programming module included in this project. Use it for whatever you like, and please contribute back to the project by cleaning up the code that is here or adding new code for applications in biology that you find interesting to program.

NOTE: The code is currently messy in some places. If you want to tidy things up and open a Pull Request, it would certainly be merged. Since most of this was written in the middle of our (jaredmichaelsmith and davpcunn) graduate bio-inspired computation course, there are places where the code has diverged into a dark chasm of non-Pythonic mess, despite the algorithms still performing very well. Contributions in this area are much appreciated!

####Dependencies:
#### Dependencies:
----
- NumPy
- SciPy
- Scikit-Learn
- Matplotlib

####Categories
#### Categories
----
Below you will find several categories of applications in this project.

#####Neural Networks:
##### Neural Networks:
----
- Hopfield Neural Network
- Back Propagation Neural Network
@@ -32,17 +32,16 @@ Below you will find several categories of applications in this project.
- Labelled Faces in the Wild Dataset: http://vis-www.cs.umass.edu/lfw/
- From the scikit-learn package, originally collected by the University of Massachusetts Amherst

#####Genetic Programming:
##### Genetic Programming:
----
- Basic Genetic Computation Algorithm
- Features:
- "drag-n-drop" fitness functions
- crossover and mutation of genes
- learning ability for offspring of each generation

#####Self-Organization and Autonomous Agents
##### Self-Organization and Autonomous Agents
----
- Particle Swarm Optimization
- Features:
- You can configure many of the parameters of the algorithm such as the velocity, acceleration, and number of initial particles, as well as several other parameters.

19 changes: 17 additions & 2 deletions selforganization/README.md
@@ -1,4 +1,19 @@
Self-Organization
====
- Author: David Cunningham
- Particle Swarm Optimization
- Code Contributor: David Cunningham
- [Particle Swarm Optimization](https://ieeexplore.ieee.org/document/488968)
  - Authors: James Kennedy and Russell Eberhart

Particle Swarm Optimization (PSO) is a population-based optimization algorithm inspired by the social behavior of bird flocking or fish schooling. It was originally proposed by Kennedy and Eberhart in 1995. The algorithm simulates the movement and interaction of a group of particles in a search space to find the optimal solution to a given optimization problem.

In PSO, each particle represents a potential solution to the problem and moves through the search space by adjusting its position and velocity. The particles "swarm" towards better regions of the search space based on their own experience and the collective knowledge of the swarm. The algorithm iteratively updates the particles' positions and velocities using two main components: personal best (pbest) and global best (gbest).

The personal best represents the best position that a particle has found so far in its search history. It is the position where the particle achieved its best objective function value. The global best represents the best position found by any particle in the entire swarm. It represents the overall best solution discovered by the swarm.

During each iteration of the algorithm, particles update their velocities and positions based on their current positions, velocities, pbest, and gbest. The update formula takes into account the particle's previous velocity, its attraction towards its pbest, and its attraction towards the gbest. By adjusting their velocities and positions, particles explore the search space and converge towards promising regions that are likely to contain the optimal solution.
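
The update rule described above can be sketched as follows. The inertia and attraction coefficients (`w`, `c1`, `c2`) are common illustrative choices, not values taken from this repository:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # One iteration of the canonical PSO update for 1-D positions.
    # w: inertia weight; c1: attraction toward each particle's personal best;
    # c2: attraction toward the swarm's global best.
    for i in range(len(positions)):
        r1, r2 = random.random(), random.random()
        velocities[i] = (w * velocities[i]
                         + c1 * r1 * (pbest[i] - positions[i])
                         + c2 * r2 * (gbest - positions[i]))
        positions[i] += velocities[i]
    return positions, velocities
```

Each particle's new velocity blends its previous momentum with randomly weighted pulls toward its pbest and the swarm's gbest; iterating this step is what drives the swarm toward promising regions.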

PSO is known for its simplicity and ease of implementation. It has been successfully applied to a wide range of optimization problems, including continuous, discrete, and combinatorial problems. It is particularly effective in solving problems with non-linear and non-convex objective functions, where traditional optimization techniques may struggle.

However, PSO also has some limitations. It can suffer from premature convergence, where the swarm gets trapped in a suboptimal solution and fails to explore other promising regions. Various modifications and variants of PSO have been proposed to mitigate this issue, such as using adaptive parameters, introducing diversity maintenance mechanisms, or incorporating problem-specific knowledge.

Overall, PSO is a powerful and versatile optimization algorithm that leverages the collective intelligence of a swarm to efficiently explore and exploit the search space. It has found applications in various domains, including engineering, finance, data mining, and machine learning.
94 changes: 82 additions & 12 deletions selforganization/pso/Problems.py
@@ -1,18 +1,88 @@
import math

def mdist(maxX, maxY):
return math.sqrt(maxX**2 + maxY**2/2)
'''
Calculate a normalization distance from the maximum coordinates.

Note: this is not a Manhattan distance. Due to operator precedence,
only maxY**2 is divided by 2, i.e. sqrt(maxX**2 + (maxY**2) / 2);
(maxX**2 + maxY**2) / 2 may have been the intended expression.

Parameters:
maxX (float): Maximum X-coordinate.
maxY (float): Maximum Y-coordinate.

Returns:
float: sqrt(maxX**2 + maxY**2 / 2).

Example:
mdist(5, 8)  # Returns: math.sqrt(57), approximately 7.5498
'''
return math.sqrt(maxX ** 2 + maxY ** 2 / 2)

def pdist(px, py):
return math.sqrt((px-20)**2 + (py-7)**2)
'''
Calculate the Euclidean distance between the given point and (20, 7).

Parameters:
px (float): X-coordinate of the point.
py (float): Y-coordinate of the point.

Returns:
float: Euclidean distance.

Example:
pdist(10, 3)  # Returns: math.sqrt(116), approximately 10.7703
'''
return math.sqrt((px - 20) ** 2 + (py - 7) ** 2)

def ndist(px, py):
return math.sqrt((px+20)**2 + (py+7)**2)
'''
Calculate the Euclidean distance between the given point and (-20, -7).

Parameters:
px (float): X-coordinate of the point.
py (float): Y-coordinate of the point.

Returns:
float: Euclidean distance.

Example:
ndist(-10, -3)  # Returns: math.sqrt(116), approximately 10.7703
'''
return math.sqrt((px + 20) ** 2 + (py + 7) ** 2)

def Problem1(pos, maxes):
return 100*(1-pdist(pos[0], pos[1])/mdist(maxes[0], maxes[1]))
'''
Fitness function for Problem 1: scores a position by its closeness
to the point (20, 7), normalized by mdist of the maximum coordinates.

Parameters:
pos (list): List containing X and Y coordinates of the position.
maxes (list): List containing maximum X and Y coordinates.

Returns:
float: Fitness value; 100 at (20, 7), negative for distant points.

Example:
Problem1([10, 5], [5, 8])  # Returns approximately -35.076
'''
return 100 * (1 - pdist(pos[0], pos[1]) / mdist(maxes[0], maxes[1]))

def Problem2(pos, maxes):
pd = pdist(pos[0], pos[1])
nd = ndist(pos[0], pos[1])
md = mdist(maxes[0], maxes[1])

ret = 9*max(0, 10-pd**2)
ret+=10*(1-pd/md)
ret+=70*(1-nd/md)
return ret
'''
Fitness function for Problem 2: adds a bonus for positions within
sqrt(10) of (20, 7) to attraction terms toward (20, 7) and (-20, -7).

Parameters:
pos (list): List containing X and Y coordinates of the position.
maxes (list): List containing maximum X and Y coordinates.

Returns:
float: Fitness value.

Example:
Problem2([10, 5], [5, 8])  # Returns approximately -233.086
'''
pd = pdist(pos[0], pos[1])
nd = ndist(pos[0], pos[1])
md = mdist(maxes[0], maxes[1])

ret = 9 * max(0, 10 - pd ** 2)
ret += 10 * (1 - pd / md)
ret += 70 * (1 - nd / md)
return ret
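
As a sanity check, the fitness functions in this diff can be exercised directly. The snippet below restates `mdist`, `pdist`, and `Problem1` verbatim so it runs standalone; the test point (20, 7) is where `pdist` vanishes, so `Problem1` attains its maximum there:

```python
import math

def mdist(maxX, maxY):
    return math.sqrt(maxX ** 2 + maxY ** 2 / 2)

def pdist(px, py):
    return math.sqrt((px - 20) ** 2 + (py - 7) ** 2)

def Problem1(pos, maxes):
    return 100 * (1 - pdist(pos[0], pos[1]) / mdist(maxes[0], maxes[1]))

# pdist(20, 7) is 0, so the ratio term drops out and the score is exactly 100
print(Problem1([20, 7], [5, 8]))  # 100.0
```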