Tutorial#
This basic tutorial is aimed at walking you through the different parts of CIFY (Computational Intelligence Framework for pYthon). Throughout the tutorial you will see practical examples represented through blocks of code. These blocks of code are verified during the documentation processing and will always be up-to-date with the referenced version of CIFY. By the end of the tutorial, you will have built your first algorithm in CIFY.
Position#
The vectors within a population-based optimization algorithm (such as evolutionary and swarm-intelligence algorithms) represent possible solutions to the optimization problem. These “candidate solutions” are locations in the problem's search space that the algorithm is currently evaluating. Candidate solutions in CIFY are represented by the Position class.
[1]:
from cify import Position
position = Position([1, 2, 3, 4, 5])
print(position)
[1 2 3 4 5] -> None
In the code above, we have just created our first Position. Alternatively, we could have created the position from a numpy array.
[2]:
import numpy as np
position = Position(np.array([1, 2, 3, 4, 5]))
print(position)
[1 2 3 4 5] -> None
The Position class uses numpy to store the decision vector and will convert any list-type inputs into numpy arrays. Notice the None on the right-hand side of the output: this is the objective function value of the decision vector. Since we have not yet evaluated the decision vector, the value of the position is None. Let's define a function to evaluate the position.
[3]:
f = lambda vector: np.sum(vector ** 2)
position.eval(f)
[3]:
55
position(f)
is also a valid approach to evaluating the decision vector. Now, let’s inspect the value of the position.
[4]:
position.value
[4]:
55
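As a small illustrative check of the call syntax mentioned above (assuming it stores the result just like eval does):
position(f)
print(position.value)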
Modifications to the position reset the objective function value since that value is no longer a result of the decision vector.
[5]:
print(position)
position = position + 1
print(position)
[1 2 3 4 5] -> 55
[2 3 4 5 6] -> None
The Position class supports arithmetic operators like +, -, * and /, as well as comparison operators like >, <, >= and <=. For example:
[6]:
a = Position(np.random.uniform(0.0, 1.0, 10))
b = Position(np.random.uniform(0.0, 1.0, 10))
b += 1
a(f)
b(f)
a < b
[6]:
True
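The remaining operators work the same way. Here is a small illustrative sketch (assuming, as with the addition above, that scalar arithmetic returns a new Position with its value reset):
c = a * 2   # scale every component; the objective function value is reset
d = b - 1
c(f)        # evaluate both positions before comparing them
d(f)
c >= d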
Objective Function#
The second class we'll look at is the ObjectiveFunction class, which represents a function to be optimized.
[7]:
from cify import ObjectiveFunction, Optimization
f = lambda vector: np.sum(vector ** 2)
bounds = [0.0, 1.0]
dim = 10
sphere_of = ObjectiveFunction(f, bounds, dim, Optimization.Min, "sphere")
print(sphere_of)
Minimizing: sphere
Bounds: [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]
We could have also initialized the same ObjectiveFunction as follows:
[8]:
def sphere(vector):
    return np.sum(vector ** 2)

sphere_of = ObjectiveFunction(sphere, [0, 1], 5, Optimization.Min)
print(sphere_of)
Minimizing: sphere
Bounds: [[0, 1], [0, 1], [0, 1], [0, 1], [0, 1]]
Notice that if a name is not passed to the ObjectiveFunction on initialization, then the name of the function is used.
We can also use an ObjectiveFunction to create a Position, in which case the Position class uniformly samples a vector within the bounds of the optimization problem. As the output below shows, the newly created position is also evaluated against the objective function.
[9]:
position = Position(sphere_of)
print(position)
[0.16572603 0.60590114 0.81778646 0.25447678 0.98203471] -> 2.0925066062457716
Optimization#
In the previous section, we briefly glossed over the Optimization argument in sphere_of = ObjectiveFunction(f, bounds, dim, Optimization.Min, "sphere"). Optimization is an enum (with members Min and Max) used in CIFY to represent minimization and maximization, and it is used by all comparison functions in CIFY. For example:
[10]:
a = np.random.uniform(0.0, 0.5)
b = np.random.uniform(0.5, 1.0)
print(a)
print(b)
opt = Optimization.Min
opt.cmp(a, b)
0.22815837251598997
0.5115439199645249
[10]:
True
We can also use Optimization to return the better of two values for that optimization type.
[11]:
opt.best(a, b)
[11]:
0.22815837251598997
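The same methods respect maximization. As a quick illustrative check with the values from above (under Optimization.Max the larger value is treated as better):
opt = Optimization.Max
print(opt.cmp(a, b))   # False: b is larger, so a is not the better value
print(opt.best(a, b))  # returns b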
The next sections discuss classes that are designed for convenience when building and running algorithms in CIFY.
Algorithm#
CIFY provides a minimal class for implementing algorithms: Algorithm. The only method that you need to implement is iterate, which represents a single iteration of the algorithm. Let's implement a genetic algorithm using the methods provided by the cify.ga package.
[12]:
from cify import Algorithm
from cify.ga import mutate, top, uniform_crossover
class GA(Algorithm):

    def __init__(self, n: int,
                 f: ObjectiveFunction,
                 pc: float = 0.5,
                 pm: float = 0.5,
                 ms: float = 0.15):
        """
        n:  number of individuals in the population
        pc: probability of crossover (favoring parent a)
        pm: probability of mutation
        ms: mutation severity, e.g. +- 15%.
        """
        super().__init__()
        self.individuals = [Position(f) for _ in range(n)]
        self.pm = pm
        self.pc = pc
        self.ms = ms

    def iterate(self, f: ObjectiveFunction):
        # Keep the best half of the population as the elite parents.
        n = len(self.individuals) // 2
        elite = top(n, self.individuals, f.opt)
        next_gen = []
        for parent_a in elite:
            # Pair each elite parent with a randomly chosen elite partner.
            parent_b_idx = int(np.random.uniform(0, len(elite) - 1))
            parent_b = elite[parent_b_idx]
            child_1, child_2 = uniform_crossover(parent_a, parent_b, self.pc)
            child_1 = mutate(child_1, self.pm, self.ms)
            child_2 = mutate(child_2, self.pm, self.ms)
            # Evaluate both children before they join the next generation.
            child_1(f)
            child_2(f)
            next_gen.append(child_1)
            next_gen.append(child_2)
        self.individuals = next_gen
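Before handing the GA to the task runner introduced in the next section, we can run a few iterations by hand as a quick sanity check. This sketch assumes, as in iterate above, that top(n, individuals, opt) returns the n best positions for the given optimization type:
ga = GA(10, sphere_of)
for _ in range(5):
    ga.iterate(sphere_of)
print(top(1, ga.individuals, sphere_of.opt)[0])   # best individual after 5 iterations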
Task#
The last CIFY class to discuss is Task, which provides a convenient way to run your algorithms. Let's use our GA to optimize the sphere function.
[13]:
from cify import Task
ga = GA(30, sphere_of)
task = Task(ga, sphere_of, max_iterations=1000, log_iterations=100)
task.run()
09:26:55 INFO GA Minimizing sphere [10.00%]: 0.161 -- Iterations: 100/1000, Evaluations: 3031, Time Taken: 0.101s, ETA: 0.911s
09:26:55 INFO GA Minimizing sphere [20.00%]: 0.096 -- Iterations: 200/1000, Evaluations: 6031, Time Taken: 0.162s, ETA: 0.647s
09:26:55 INFO GA Minimizing sphere [30.00%]: 0.044 -- Iterations: 300/1000, Evaluations: 9031, Time Taken: 0.220s, ETA: 0.514s
09:26:55 INFO GA Minimizing sphere [40.00%]: 0.047 -- Iterations: 400/1000, Evaluations: 12031, Time Taken: 0.280s, ETA: 0.420s
09:26:55 INFO GA Minimizing sphere [50.00%]: 0.182 -- Iterations: 500/1000, Evaluations: 15031, Time Taken: 0.338s, ETA: 0.338s
09:26:55 INFO GA Minimizing sphere [60.00%]: 0.046 -- Iterations: 600/1000, Evaluations: 18031, Time Taken: 0.394s, ETA: 0.263s
09:26:55 INFO GA Minimizing sphere [70.00%]: 0.040 -- Iterations: 700/1000, Evaluations: 21031, Time Taken: 0.456s, ETA: 0.196s
09:26:55 INFO GA Minimizing sphere [80.00%]: 0.122 -- Iterations: 800/1000, Evaluations: 24031, Time Taken: 0.516s, ETA: 0.129s
09:26:55 INFO GA Minimizing sphere [90.00%]: 0.025 -- Iterations: 900/1000, Evaluations: 27031, Time Taken: 0.574s, ETA: 0.064s
09:26:55 INFO GA Minimizing sphere [100.00%]: 0.031 -- Iterations: 1000/1000, Evaluations: 30031, Time Taken: 0.634s, ETA: 0.000s
The Task class logs useful information: the name of the algorithm, the name of the objective function, the last evaluated objective value, the number of iterations, the number of evaluations, the time taken, and an ETA for when the run will complete. We can also define a metric to evaluate the performance of our GA. Our metric will return the best individual, and therefore the best objective function value, at the end of each iteration.
[16]:
def metric(ga: GA, f: ObjectiveFunction) -> Position:
    # Positions compare by objective function value, so for a minimization
    # problem the first element after sorting is the best individual.
    return sorted(ga.individuals)[0]

ga = GA(30, sphere_of)
task = Task(ga, sphere_of, max_iterations=1000, log_iterations=100, metrics=[("best_of_value", metric)])
task.run()
print(task.results["best_of_value"][-1])
09:28:55 INFO GA Minimizing sphere [10.00%]: 0.039 -- Iterations: 100/1000, Evaluations: 63151, Time Taken: 0.075s, ETA: 0.674s
09:28:55 INFO GA Minimizing sphere [20.00%]: 0.069 -- Iterations: 200/1000, Evaluations: 66151, Time Taken: 0.134s, ETA: 0.535s
09:28:55 INFO GA Minimizing sphere [30.00%]: 0.050 -- Iterations: 300/1000, Evaluations: 69151, Time Taken: 0.198s, ETA: 0.461s
09:28:56 INFO GA Minimizing sphere [40.00%]: 0.038 -- Iterations: 400/1000, Evaluations: 72151, Time Taken: 0.261s, ETA: 0.391s
09:28:56 INFO GA Minimizing sphere [50.00%]: 0.071 -- Iterations: 500/1000, Evaluations: 75151, Time Taken: 0.329s, ETA: 0.329s
09:28:56 INFO GA Minimizing sphere [60.00%]: 0.067 -- Iterations: 600/1000, Evaluations: 78151, Time Taken: 0.391s, ETA: 0.261s
09:28:56 INFO GA Minimizing sphere [70.00%]: 0.073 -- Iterations: 700/1000, Evaluations: 81151, Time Taken: 0.457s, ETA: 0.196s
09:28:56 INFO GA Minimizing sphere [80.00%]: 0.095 -- Iterations: 800/1000, Evaluations: 84151, Time Taken: 0.517s, ETA: 0.129s
09:28:56 INFO GA Minimizing sphere [90.00%]: 0.080 -- Iterations: 900/1000, Evaluations: 87151, Time Taken: 0.582s, ETA: 0.065s
09:28:56 INFO GA Minimizing sphere [100.00%]: 0.096 -- Iterations: 1000/1000, Evaluations: 90151, Time Taken: 0.646s, ETA: 0.000s
[-0.01738754 -0.11363236 0.08984333 -0.03242385 0.00879912] -> 0.0224151960273968
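Since the metric is recorded at the end of every iteration, the full history is also available. A small illustrative example (assuming task.results stores one entry per iteration):
best_values = [p.value for p in task.results["best_of_value"]]
print(len(best_values))   # one entry per iteration of the run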
Lastly, you can also run algorithms in a style similar to a PyTorch training loop. This gives you more granular control over how the algorithm is run. For example, if we wanted to update the bounds of the objective function every 100 iterations, we could write:
[17]:
task.start()
while not task.stopping_condition():
    ga.iterate(task.f)
    if task.iteration % 100 == 0:
        # Widen the search space as the run progresses (one [lower, upper] pair per dimension).
        task.f.bounds = [[task.iteration / 100, task.iteration / 10]] * dim
    task.next_iteration()
task.end()
Conclusion#
Thank you for taking the time to read the tutorial. If you have any further questions, please reach out or ask a question on the GitHub discussions page.