Rework pso #112

Merged: 9 commits, Oct 20, 2022
examples/bmf.rs (6 changes: 3 additions & 3 deletions)

@@ -7,9 +7,9 @@ fn main() -> anyhow::Result<()> {
let config = pso::real_pso(
pso::RealProblemParameters {
num_particles: 100,
a: 1.0,
b: 1.0,
c: 1.0,
weight: 1.0,
c_one: 1.0,
c_two: 1.0,
v_max: 1.0,
},
termination::FixedIterations::new(500),
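
A note for orientation (not part of the diff): the renamed parameters are the coefficients of the PSO velocity update as implemented in the new src/operators/generation/swarm.rs further down. Per the doc comments there, weight is the inertia weight on the old velocity, c_one scales the pull toward a particle's previous best (xp), and c_two scales the pull toward the global best (xg); r_one and r_two are uniform random factors in [0, 1):

v[i] = clamp(weight * v[i] + c_one * r_one[i] * (xp[i] - x[i]) + c_two * r_two[i] * (xg[i] - x[i]), -v_max, v_max)
x[i] = clamp(x[i] + v[i], -1.0, 1.0)
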
param-study/src/instances/pso.rs (12 changes: 6 additions & 6 deletions)

@@ -16,9 +16,9 @@ use crate::{

declare_parameters! {
population_size: u32,
a: f64,
b: f64,
c: f64,
weight: f64,
c_one: f64,
c_two: f64,
v_max: f64,
}

@@ -30,9 +30,9 @@ pub fn run(setup: &Setup, args: &mut ArgsIter) {
let config = pso::real_pso(
pso::RealProblemParameters {
num_particles: params.population_size,
a: params.a,
b: params.b,
c: params.c,
weight: params.weight,
c_one: params.c_one,
c_two: params.c_two,
v_max: params.v_max,
},
termination::FixedIterations::new(setup.cutoff_length),
src/heuristics/aco.rs (5 changes: 2 additions & 3 deletions)

@@ -136,13 +136,12 @@ pub fn aco<P: SingleObjectiveProblem>(
}

mod ant_ops {
use rand::distributions::{Distribution, WeightedIndex};

use crate::operators::state::custom_state::PheromoneMatrix;
Collaborator: Was it intended to shuffle the imports around here?

Collaborator (author): This was probably cargo fmt, which gave me lots of suggestions to shuffle imports.

Collaborator: Weird, because these are the only changes in this file, and cargo fmt didn't complain before (or else the CI would fail).

Collaborator (author): I know, but somehow it did complain a lot. I don't know what the issue is here.

use crate::{
framework::{components::*, state::State, Individual, Random, SingleObjective},
operators::custom_state::PheromoneMatrix,
problems::tsp::SymmetricTsp,
};
use rand::distributions::{Distribution, WeightedIndex};

#[derive(serde::Serialize)]
pub struct AcoGeneration {
src/heuristics/pso.rs (125 changes: 9 additions & 116 deletions)

@@ -9,9 +9,9 @@ use crate::{
/// Parameters for [real_pso].
pub struct RealProblemParameters {
pub num_particles: u32,
pub a: f64,
pub b: f64,
pub c: f64,
pub weight: f64,
pub c_one: f64,
pub c_two: f64,
pub v_max: f64,
}

@@ -27,9 +27,9 @@ where
{
let RealProblemParameters {
num_particles,
a,
b,
c,
weight,
c_one,
c_two,
v_max,
} = params;

@@ -39,9 +39,9 @@
.update_best_individual()
.do_(pso(
Parameters {
particle_init: pso_ops::PsoStateInitialization::new(v_max),
particle_update: generation::swarm::PsoGeneration::new(a, b, c, v_max),
state_update: pso_ops::PsoStateUpdate::new(),
particle_init: state::state_operators::PsoStateInitialization::new(v_max),
particle_update: generation::swarm::PsoGeneration::new(weight, c_one, c_two, v_max),
state_update: state::state_operators::PsoStateUpdate::new(),
},
termination,
logger,
@@ -83,110 +83,3 @@
})
.build_component()
}

#[allow(clippy::new_ret_no_self)]
pub mod pso_ops {
use crate::problems::SingleObjectiveProblem;
use crate::{
framework::{components::*, state::State, Individual},
operators::custom_state::PsoState,
problems::{LimitedVectorProblem, Problem},
};
use rand::Rng;

#[derive(Debug, serde::Serialize)]
pub struct PsoStateInitialization {
v_max: f64,
}
impl PsoStateInitialization {
pub fn new<P: Problem>(v_max: f64) -> Box<dyn Component<P>>
where
P: SingleObjectiveProblem<Encoding = Vec<f64>> + LimitedVectorProblem<T = f64>,
{
Box::new(Self { v_max })
}
}
impl<P> Component<P> for PsoStateInitialization
where
P: SingleObjectiveProblem<Encoding = Vec<f64>> + LimitedVectorProblem<T = f64>,
{
fn initialize(&self, _problem: &P, state: &mut State) {
// Initialize with empty state to satisfy `state.require()` statements
state.insert(PsoState {
velocities: vec![],
bests: vec![],
global_best: Individual::<P>::new_unevaluated(Vec::new()),
})
}

fn execute(&self, problem: &P, state: &mut State) {
let population = state.population_stack_mut::<P>().pop();
let rng = state.random_mut();

let velocities = population
.iter()
.map(|_| {
(0..problem.dimension())
.into_iter()
.map(|_| rng.gen_range(-self.v_max..=self.v_max))
.collect::<Vec<f64>>()
})
.collect::<Vec<Vec<f64>>>();

let bests = population.to_vec();

let global_best = bests
.iter()
.min_by_key(|i| Individual::objective(i))
.cloned()
.unwrap();

state.population_stack_mut().push(population);

state.insert(PsoState {
velocities,
bests,
global_best,
});
}
}

/// State update for PSO.
///
/// Updates best found solutions of particles and global best in [PsoState].
#[derive(Debug, serde::Serialize)]
pub struct PsoStateUpdate;
impl PsoStateUpdate {
pub fn new<P: Problem>() -> Box<dyn Component<P>>
where
P: Problem<Encoding = Vec<f64>> + LimitedVectorProblem<T = f64>,
{
Box::new(Self)
}
}
impl<P> Component<P> for PsoStateUpdate
where
P: Problem<Encoding = Vec<f64>> + LimitedVectorProblem<T = f64>,
{
fn initialize(&self, _problem: &P, state: &mut State) {
state.require::<PsoState<P>>();
}

fn execute(&self, _problem: &P, state: &mut State) {
let population = state.population_stack_mut().pop();
let mut pso_state = state.get_mut::<PsoState<P>>();

for (i, individual) in population.iter().enumerate() {
if pso_state.bests[i].objective() > individual.objective() {
pso_state.bests[i] = individual.clone();

if pso_state.global_best.objective() > individual.objective() {
pso_state.global_best = individual.clone();
}
}
}

state.population_stack_mut().push(population);
}
}
}
src/operators/archive.rs (2 changes: 1 addition & 1 deletion)

@@ -1,8 +1,8 @@
//! Archiving methods

use crate::operators::state::custom_state::ElitistArchiveState;
Collaborator: Same thing as here with the imports.

use crate::{
framework::{components::*, state::State},
operators::custom_state::ElitistArchiveState,
problems::SingleObjectiveProblem,
};
use serde::{Deserialize, Serialize};
src/operators/generation/mod.rs (78 changes: 1 addition & 77 deletions)

@@ -13,6 +13,7 @@ use serde::Serialize;

pub mod mutation;
pub mod recombination;
pub mod swarm;

/// Specialized component trait to generate a new population from the current one.
///
@@ -154,80 +155,3 @@
*population = self.random_spread(problem, state.random_mut(), population_size);
}
}

pub mod swarm {
use rand::distributions::Uniform;
use rand::Rng;

use crate::{
framework::{components::*, state::State, Individual, Random},
operators::custom_state::PsoState,
problems::SingleObjectiveProblem,
};

/// Applies the PSO specific generation operator.
///
/// Requires [PsoStateUpdate][crate::heuristics::pso::pso_ops::PsoStateUpdate].
#[derive(serde::Serialize)]
pub struct PsoGeneration {
pub a: f64,
pub b: f64,
pub c: f64,
pub v_max: f64,
}
impl PsoGeneration {
pub fn new<P>(a: f64, b: f64, c: f64, v_max: f64) -> Box<dyn Component<P>>
where
P: SingleObjectiveProblem<Encoding = Vec<f64>>,
{
Box::new(Self { a, b, c, v_max })
}
}
impl<P> Component<P> for PsoGeneration
where
P: SingleObjectiveProblem<Encoding = Vec<f64>>,
{
fn initialize(&self, _problem: &P, state: &mut State) {
state.require::<PsoState<P>>();
}

fn execute(&self, _problem: &P, state: &mut State) {
let &Self { a, b, c, v_max } = self;

let mut offspring = Vec::new();
let mut parents = state.population_stack_mut::<P>().pop();

let rng = state.random_mut();
let rng_iter = |rng: &mut Random| {
rng.sample_iter(Uniform::new(0., 1.))
.take(parents.len())
.collect::<Vec<_>>()
};

let rs = rng_iter(rng);
let rt = rng_iter(rng);

let pso_state = state.get_mut::<PsoState<P>>();

for (i, x) in parents.drain(..).enumerate() {
let mut x = x.into_solution();
let v = &mut pso_state.velocities[i];
let xl = pso_state.bests[i].solution();
let xg = pso_state.global_best.solution();

for i in 0..v.len() {
v[i] = a * v[i] + b * rs[i] * (xg[i] - x[i]) + c * rt[i] * (xl[i] - x[i]);
v[i] = v[i].clamp(-v_max, v_max);
}

for i in 0..x.len() {
x[i] = (x[i] + v[i]).clamp(-1.0, 1.0);
}

offspring.push(Individual::<P>::new_unevaluated(x));
}

state.population_stack_mut().push(offspring);
}
}
}
src/operators/generation/swarm.rs (95 changes: 95 additions & 0 deletions)

@@ -0,0 +1,95 @@
//! Swarm Operators

use crate::operators::state::custom_state::PsoState;
use rand::distributions::Uniform;
use rand::Rng;

use crate::{
framework::{components::*, state::State, Individual, Random},
problems::SingleObjectiveProblem,
};

/// Applies the PSO specific generation operator.
///
/// Requires [PsoStateUpdate][crate::heuristics::pso::pso_ops::PsoStateUpdate].
#[derive(serde::Serialize)]
pub struct PsoGeneration {
/// Inertia weight for influence of old velocity
pub weight: f64,
/// First constant factor for influence of previous best (also called Acceleration coefficient 1)
pub c_one: f64,
/// Second constant factor for influence of global best (also called Acceleration coefficient 2)
pub c_two: f64,
/// Maximum velocity
pub v_max: f64,
}
impl PsoGeneration {
pub fn new<P>(weight: f64, c_one: f64, c_two: f64, v_max: f64) -> Box<dyn Component<P>>
where
P: SingleObjectiveProblem<Encoding = Vec<f64>>,
{
Box::new(Self {
weight,
c_one,
c_two,
v_max,
})
}
}
impl<P> Component<P> for PsoGeneration
where
P: SingleObjectiveProblem<Encoding = Vec<f64>>,
{
fn initialize(&self, _problem: &P, state: &mut State) {
state.require::<PsoState<P>>();
}

fn execute(&self, _problem: &P, state: &mut State) {
let &Self {
weight,
c_one,
c_two,
v_max,
} = self;

let mut offspring = Vec::new();
let mut parents = state.population_stack_mut::<P>().pop();

let rng = state.random_mut();
let rng_iter = |rng: &mut Random| {
rng.sample_iter(Uniform::new(0., 1.))
.take(parents.len())
.collect::<Vec<_>>()
};

// it might be debatable if one should use a vector of different random numbers or of the same
Collaborator: I guess we could just add a parameter for it. We could even add properly named alternatives to PsoGeneration::new, where this parameter is then automatically set to true or false.

Collaborator (author): I would rather leave it as is, because, in my opinion, it's unnecessary to differentiate. I just wanted to leave the comment as a reminder in case we want to explain it somewhere.

// both versions exist in the literature
let r_one = rng_iter(rng);
let r_two = rng_iter(rng);

let pso_state = state.get_mut::<PsoState<P>>();

for (i, x) in parents.drain(..).enumerate() {
let mut x = x.into_solution();
let v = &mut pso_state.velocities[i];
let xp = pso_state.bests[i].solution();
let xg = pso_state.global_best.solution();

for i in 0..v.len() {
v[i] = weight * v[i]
+ c_one * r_one[i] * (xp[i] - x[i])
+ c_two * r_two[i] * (xg[i] - x[i]);
v[i] = v[i].clamp(-v_max, v_max);
}

for i in 0..x.len() {
//TODO we will need constraint handling here
x[i] = (x[i] + v[i]).clamp(-1.0, 1.0);
}

offspring.push(Individual::<P>::new_unevaluated(x));
}

state.population_stack_mut().push(offspring);
}
}
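
As a side note on the review thread above about the two randomization schemes: a rough, self-contained sketch of the suggested "properly named alternatives" to PsoGeneration::new is given below. Everything in it (PsoGenerationSketch, per_dimension_random, and both constructor names) is invented for illustration; it is not part of this PR and does not use the crate's Component machinery.

// Hypothetical sketch only: none of these names exist in the crate.
// It illustrates constructors that pre-set the choice between drawing a fresh
// random factor per dimension and reusing a single shared factor per update.
#[allow(dead_code)]
pub struct PsoGenerationSketch {
    weight: f64,
    c_one: f64,
    c_two: f64,
    v_max: f64,
    // true: fresh random factor per dimension; false: one shared factor per update
    per_dimension_random: bool,
}

#[allow(dead_code)]
impl PsoGenerationSketch {
    // Named alternative that fixes the parameter to per-dimension random factors.
    pub fn new_with_per_dimension_random(weight: f64, c_one: f64, c_two: f64, v_max: f64) -> Self {
        Self { weight, c_one, c_two, v_max, per_dimension_random: true }
    }

    // Named alternative that fixes the parameter to a single shared random factor.
    pub fn new_with_shared_random(weight: f64, c_one: f64, c_two: f64, v_max: f64) -> Self {
        Self { weight, c_one, c_two, v_max, per_dimension_random: false }
    }
}

fn main() {
    // Usage mirrors the renamed parameters introduced in this PR.
    let _component = PsoGenerationSketch::new_with_per_dimension_random(1.0, 1.0, 1.0, 1.0);
}

Whether that differentiation is worth exposing is exactly the open question in the thread; the sketch only makes the trade-off concrete.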