Genetic Algorithms (GAs) are based on concepts borrowed from biology: selection, crossover, and mutation. Quantum Inspired Evolutionary Algorithms (QIEAs) borrow concepts from both biology (selection and mutation) and quantum mechanics (measurement, qubit-style probabilistic encoding, and simple gate-like updates). The key word here is "inspired": QIEAs are not quantum computing. They are quantum by analogy.
The Knapsack Problem
Suppose you have a container like a knapsack or maybe a shipping container. It has a fixed weight capacity, $C$. You also have a collection of objects. Each object has a weight and a value, $({w_i},{v_i})$. We wish to pack the knapsack with the most value we can without exceeding the weight capacity. We can express this as an optimization problem.
\[\max \sum\limits_i v_i x_i\]
\[\sum\limits_i w_i x_i \leqslant C\]
\[x_i \in \{0,1\} \quad \forall i\]
${x_i}$ is an indicator variable that takes the value 1 if the item is included in the knapsack and 0 if not. Expressed as above, the problem can be thought of as a case of 0-1 integer linear programming. There are commercial and open source software packages for integer linear programming, as well as many heuristic methods like ant colony optimization and genetic algorithms.
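For small instances, an off-the-shelf solver can find the exact optimum directly. The following is a minimal sketch using SciPy's milp interface (SciPy 1.9 or later); the problem data here is made up purely for illustration and is not the data used later in this post.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy problem data (illustrative only)
weights = np.array([12, 7, 11, 8, 9])
values = np.array([24, 13, 23, 15, 16])
capacity = 26

# milp minimizes, so negate the values to maximize total value
res = milp(c=-values,
           constraints=LinearConstraint([weights], 0, capacity),  # weight limit
           integrality=np.ones(len(values)),                      # all variables integer
           bounds=Bounds(0, 1))                                   # 0/1 decision variables

selected = res.x.round().astype(int)
print("Selected items:", selected)
print("Total value:", int(values @ selected))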
Genetic Algorithm Method
Here's a simple GA approach to the optimization problem, partially generated by DeepSeek. Like all heuristics, GAs don't guarantee an optimal solution, i.e. they can get stuck in a sub-optimal solution, and it's impossible to tell how close that solution is to the optimum.
A possible knapsack solution is represented as a vector of integers taking on values of 0 or 1: ${x_i} = 1$ if object $i$ is present in the knapsack and 0 otherwise. The population is a collection of such solutions. Fitness is evaluated as the dot product of the values and the solution vector. Selection and crossover follow a simple tournament-style scheme: randomly select two solutions, choose a crossover point, and combine the portions of the two solution vectors on either side of it. Mutation is a simple flip of a solution value. If a possible solution exceeds the knapsack capacity, its fitness is reduced by half.
import numpy as np
import time


# Simple Traditional GA for comparison
class SimpleGA:
    def __init__(self,
                 weights: np.ndarray,
                 values: np.ndarray,
                 capacity: int,
                 rng: np.random._generator.Generator,
                 pop_size: int=50,
                 max_gens: int=150) -> None:
        """
        Initialize GA

        Parameters
        ----------
        weights : np.ndarray
            weight of each object
        values : np.ndarray
            value of each object
        capacity : int
            how much weight can knapsack handle
        rng : np.random._generator.Generator
            random number generator
        pop_size : int, optional
            size of evolutionary population, by default 50
        max_gens : int, optional
            how many generations to run, by default 150
        """
        self.weights = weights
        self.values = values
        self.capacity = capacity
        self.pop_size = pop_size
        self.max_gens = max_gens
        self.num_items = len(weights)
        self.rng = rng

    def run(self) -> tuple[np.ndarray, float]:
        """
        Execute the GA

        Returns
        -------
        tuple[np.ndarray, float]
            the population element with the best solution,
            the fitness of the best solution
        """
        print("Starting Genetic Algorithm for 0/1 Knapsack")
        print(f"Items: {self.num_items}, Capacity: {self.capacity}")
        print("-" * 60)

        # Initialize population
        pop = self.rng.integers(0, 2, (self.pop_size, self.num_items))
        best_solution = pop[0].copy()
        best_fitness = 0

        for gen in range(self.max_gens):
            # Evaluate
            fitness = []
            for sol in pop:
                weight = np.dot(sol, self.weights)
                value = np.dot(sol, self.values)
                if weight > self.capacity:
                    value *= 0.5  # Penalty
                fitness.append(value)

            # Find best
            idx = np.argmax(fitness)
            if fitness[idx] > best_fitness:
                best_fitness = fitness[idx]
                best_solution = pop[idx].copy()

            # Selection and crossover
            new_pop = []
            for _ in range(self.pop_size):
                # Tournament selection
                p1, p2 = self.rng.choice(self.pop_size, 2, replace=False)
                parent1, parent2 = pop[p1], pop[p2]

                # Crossover
                crossover_point = self.rng.integers(1, self.num_items - 1)
                child = np.concatenate([parent1[:crossover_point],
                                        parent2[crossover_point:]])

                # Mutation
                for i in range(self.num_items):
                    if self.rng.random() < 0.01:
                        child[i] = 1 - child[i]

                new_pop.append(child)

            pop = np.array(new_pop)

        return best_solution, best_fitness


def main() -> None:
    seed = 42
    # seed = int(time.time())  # uncomment to change to more random seed
    rng = np.random.default_rng(seed)

    num_items = 50
    weights = rng.integers(1, 50, num_items)
    values = rng.integers(10, 100, num_items)
    capacity = int(0.6 * np.sum(weights))

    # Run traditional GA
    ga = SimpleGA(weights, values, capacity, rng)
    ga_solution, ga_fitness = ga.run()

    ga_value = np.dot(ga_solution, values)
    ga_weight = np.dot(ga_solution, weights)

    print("\n" + "="*60)
    print("GA SOLUTION")
    print("="*60)
    print(f"Total Value: {ga_value:.2f}")
    print(f"Total Weight: {ga_weight:.2f} / {capacity}")
    print(f"Capacity Utilization: {ga_weight/capacity*100:.1f}%")


if __name__ == "__main__":
    main()
Running this code on the randomly generated problem produces the following.
Starting Genetic Algorithm for 0/1 Knapsack
Items: 50, Capacity: 813
------------------------------------------------------------

============================================================
GA SOLUTION
============================================================
Total Value: 2037.00
Total Weight: 758.00 / 813
Capacity Utilization: 93.2%
Different seeds will produce different solutions.
QIEA
Quantum Inspired Evolutionary Algorithms incorporate concepts inspired by quantum mechanics into evolutionary algorithms. QIEAs extend the representation of possible solutions from plain binary or numerical vectors to probabilistic models: each solution is a collection of qubit-like elements, borrowed from quantum computing's notion of a qubit.
A QIEA qubit is a vector of probability amplitudes inspired by the idea of quantum superposition. In the knapsack problem, each qubit is a superposition of the 0 and 1 states. In quantum terms,
\[\left| \psi \right\rangle = \alpha \left| 0 \right\rangle + \beta \left| 1 \right\rangle \]
\[{\left| \alpha \right|^2} + {\left| \beta \right|^2} = 1\]
${\left| \alpha \right|^2}$ is the probability of the qubit being in state 0 and ${\left| \beta \right|^2}$ is the probability of it being in state 1.
The population of solutions to the knapsack problem consists of vectors where each element is $\left( {{\alpha _i},{\beta _i}} \right)$. To calculate fitness, we make a measurement of the quantum state. The analogy to a quantum measurement is to sample a classical state from the probability vector. To do this, generate a random number; if the number is less than ${\left| {{\beta _i}} \right|^2}$, item $i$ is included in the solution, i.e. ${x_i}=1$. Otherwise, the item is not included, ${x_i}=0$. The fitness is then calculated as in the GA.
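As a minimal sketch of this sampling step (independent of the class presented below), the measurement can be done in a single vectorized NumPy operation; the array name betas here is purely illustrative and holds the $\beta_i$ amplitudes of one quantum individual.

import numpy as np

rng = np.random.default_rng(0)
betas = np.full(10, 1/np.sqrt(2))   # illustrative amplitudes: every item starts 50/50

# "Measure" each qubit: item i is packed when a uniform draw falls below beta_i^2
x = (rng.random(betas.shape) < betas**2).astype(int)
print(x)   # vector of 0s and 1s, one entry per item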
Updating and mutation are handled differently in QIEA, using a method inspired by quantum gates. The gate applies a rotation to the qubit vector, updating $\alpha$ and $\beta$ based on the best solution found so far. This process drives the qubits toward better solutions by increasing the probability of measuring the states that appear in good solutions.
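Concretely, rotating a qubit by a small angle $\theta$ toward the $\left| 1 \right\rangle$ state applies the standard $2 \times 2$ rotation matrix to its amplitudes (rotating toward $\left| 0 \right\rangle$ uses $-\theta$), which is what the update code later in this post does:

\[\begin{pmatrix} \alpha' \\ \beta' \end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix}\]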
You can see that it would be relatively easy to extend the knapsack QIEA model to more than two states per solution element. Leaving aside the quantum terminology, QIEA is a way to attach a probabilistic model to each element of a solution.
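As a rough sketch of that generalization (not part of the program below), a solution element could hold a probability vector over $k$ states and be "measured" with a categorical draw; the names probs and k here are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
k = 4                                   # illustrative number of states per element
num_items = 8

# One "individual": a probability distribution over k states for each element
probs = np.full((num_items, k), 1/k)    # start with all states equally likely

# Measurement: draw one state per element from its distribution
x = np.array([rng.choice(k, p=p) for p in probs])
print(x)   # e.g. [2 0 3 1 ...]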
QIEA Knapsack Program
The following code was partially generated by DeepSeek. I use LLMs to give me a general overview of a subject and sometimes to generate example code. I usually modify the code after studying it. In this case, I used DeepSeek's framework as a basis for this example.
To get started, we create a QIEAKnapsack class. The class in this case is a wrapper for a collection of functions. Its purpose is to organize the data values so that we don't have to pass around a large collection of values as parameters.
import numpy as np


class QIEAKnapsack:
    def __init__(self,
                 weights: np.ndarray,
                 values: np.ndarray,
                 capacity: int,
                 rng: np.random._generator.Generator,
                 population_size: int=50,
                 max_generations: int=200) -> None:
        """
        Initialization

        Parameters
        ----------
        weights : np.ndarray
            weight of each object
        values : np.ndarray
            value of each object
        capacity : int
            how much weight can knapsack handle
        rng : np.random._generator.Generator
            random number generator
        population_size : int, optional
            size of evolutionary population, by default 50
        max_generations : int, optional
            how many generations to run, by default 200
        """
        self.weights = np.array(weights)
        self.values = np.array(values)
        self.capacity = capacity
        self.pop_size = population_size
        self.max_gens = max_generations
        self.num_items = len(weights)
        self.rng = rng

        # Initialize quantum population with qubits [alpha, beta]
        # Each individual is a probability distribution over all items
        self.quantum_pop = np.ones((self.pop_size, self.num_items, 2)) * (1/np.sqrt(2))

        # Store best solution
        self.best_solution = self.quantum_pop[0].copy()
        self.best_fitness = 0
        self.fitness_history = []
To perform a measurement on an individual, we sample a 0 or 1 from each qubit's state, returning a vector of 0/1 values.
    def observe_quantum_individual(self, quantum_individual: np.ndarray) -> np.ndarray:
        """
        Make a measurement of a quantum object

        Parameters
        ----------
        quantum_individual : np.ndarray
            a member of the quantum population

        Returns
        -------
        np.ndarray
            an observation, i.e. a set of bits representing a solution
        """
        # Observe a quantum individual to generate a classical binary solution
        classical_solution = np.zeros(self.num_items, dtype=int)

        for i in range(self.num_items):
            alpha, beta = quantum_individual[i]
            prob_1 = beta**2  # Probability of observing 1

            if self.rng.random() < prob_1:
                classical_solution[i] = 1

        return classical_solution
Once we have a classical solution, we can calculate the fitness of an individual solution. If the total weight of the solution exceeds the knapsack capacity, we penalize the fitness value. A separate repair function also removes items, lowest value-to-weight ratio first, until the total weight is within the knapsack capacity.
    def calculate_fitness(self, solution: np.ndarray) -> float:
        """
        Calculate fitness of solution
        Apply penalty if capacity is exceeded

        Parameters
        ----------
        solution : np.ndarray
            a classical individual

        Returns
        -------
        float
            the fitness value
        """
        total_weight = np.dot(solution, self.weights)
        total_value = np.dot(solution, self.values)

        # Penalty for exceeding capacity
        if total_weight > self.capacity:
            # Excess ratio penalty
            excess_ratio = (total_weight - self.capacity) / self.capacity
            penalty = 0.5 * total_value * excess_ratio
            total_value -= penalty

        return max(total_value, 0)  # Ensure non-negative fitness
    def repair_solution(self, solution: np.ndarray) -> np.ndarray:
        """
        If the solution is over capacity, remove items until it is valid

        Parameters
        ----------
        solution : np.ndarray
            a classical individual

        Returns
        -------
        np.ndarray
            a valid solution
        """
        # Repair an infeasible solution using a greedy approach
        total_weight = np.dot(solution, self.weights)

        if total_weight <= self.capacity:
            return solution

        # Create list of included items with their value-to-weight ratio
        included_items = []
        for i in range(self.num_items):
            if solution[i] == 1:
                ratio = self.values[i] / self.weights[i]
                included_items.append((i, ratio, self.weights[i]))

        # Sort by value-to-weight ratio (ascending - remove worst first)
        included_items.sort(key=lambda x: x[1])

        # Remove items until capacity constraint is satisfied
        repaired_solution = solution.copy()
        current_weight = total_weight

        for item_idx, ratio, weight in included_items:
            if current_weight > self.capacity:
                repaired_solution[item_idx] = 0
                current_weight -= weight
            else:
                break

        return repaired_solution
Updating and mutation form a two-step operation. To update an individual, we rotate $\alpha $ and $\beta $ in the direction of the current best solution. This is the analogy to a quantum gate.
For mutation, we flip $\alpha $ and $\beta $ with a small mutation probability.
    def update_quantum_individual(self,
                                  quantum_individual: np.ndarray,
                                  best_classical: np.ndarray,
                                  theta: float=0.05*np.pi) -> np.ndarray:
        """
        Update a quantum individual, rotating it in the direction
        of the best classical solution

        Parameters
        ----------
        quantum_individual : np.ndarray
            an element of the quantum population
        best_classical : np.ndarray
            the most fit individual
        theta : float, optional
            rotation value, by default 0.05*np.pi

        Returns
        -------
        np.ndarray
            updated quantum individual
        """
        # Update quantum individual using rotation gate
        updated_individual = quantum_individual.copy()

        for i in range(self.num_items):
            alpha, beta = quantum_individual[i]

            # Determine rotation direction based on best solution
            if best_classical[i] == 1:
                # Rotate towards |1⟩ state
                new_alpha = alpha * np.cos(theta) - beta * np.sin(theta)
                new_beta = alpha * np.sin(theta) + beta * np.cos(theta)
            else:
                # Rotate towards |0⟩ state
                new_alpha = alpha * np.cos(theta) + beta * np.sin(theta)
                new_beta = -alpha * np.sin(theta) + beta * np.cos(theta)

            # Normalize (should already be normalized due to rotation,
            # but for numerical stability)
            norm = np.sqrt(new_alpha**2 + new_beta**2)
            updated_individual[i] = [new_alpha/norm, new_beta/norm]

        return updated_individual

    def mutate_quantum_individual(self,
                                  quantum_individual: np.ndarray,
                                  mutation_rate: float=0.01) -> np.ndarray:
        """
        Mutate an individual solution by flipping bits

        Parameters
        ----------
        quantum_individual : np.ndarray
            an element of the quantum population
        mutation_rate : float, optional
            mutation value, by default 0.01

        Returns
        -------
        np.ndarray
            mutated individual
        """
        # Apply mutation to quantum individual by randomly flipping qubit states
        mutated = quantum_individual.copy()

        for i in range(self.num_items):
            if self.rng.random() < mutation_rate:
                # Swap alpha and beta (equivalent to bit flip in classical space)
                alpha, beta = mutated[i]
                mutated[i] = [beta, alpha]

        return mutated
Finally, we have a function to run the evolutionary algorithm.
    def run(self) -> tuple[np.ndarray, float]:
        """
        Execute the QIEA algorithm

        Returns
        -------
        tuple[np.ndarray, float]
            best classical solution, fitness of best solution
        """
        print("Starting Quantum-Inspired Evolutionary Algorithm for 0/1 Knapsack")
        print(f"Items: {self.num_items}, Capacity: {self.capacity}")
        print("-" * 60)

        for generation in range(self.max_gens):
            # Step 1: Observation - generate classical population from quantum population
            classical_pop = []
            for i in range(self.pop_size):
                classical_sol = self.observe_quantum_individual(self.quantum_pop[i])
                classical_sol = self.repair_solution(classical_sol)
                classical_pop.append(classical_sol)

            # Step 2: Evaluation
            fitness_scores = [self.calculate_fitness(sol) for sol in classical_pop]

            # Find best solution in current generation
            current_best_idx = np.argmax(fitness_scores)
            current_best_fitness = fitness_scores[current_best_idx]
            current_best_solution = classical_pop[current_best_idx]

            # Update global best
            if current_best_fitness > self.best_fitness:
                self.best_fitness = current_best_fitness
                self.best_solution = current_best_solution.copy()

            self.fitness_history.append(self.best_fitness)

            # Step 3: Update quantum population
            for i in range(self.pop_size):
                # Update towards best solution with some probability
                if self.rng.random() < 0.8:  # Learning rate
                    self.quantum_pop[i] = self.update_quantum_individual(
                        self.quantum_pop[i], self.best_solution
                    )

                # Apply mutation
                self.quantum_pop[i] = self.mutate_quantum_individual(self.quantum_pop[i])

            # Progress reporting
            if generation % 20 == 0 or generation == self.max_gens - 1:
                weight = np.dot(self.best_solution, self.weights)
                value = np.dot(self.best_solution, self.values)
                print(f"Generation {generation:3d}: Best Value = {value:.1f}, "
                      f"Weight = {weight:.1f}/{self.capacity}")

        return self.best_solution, self.best_fitness
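The driver script that produced the output below is not shown above. A minimal sketch of such a driver, assuming the QIEAKnapsack class above and the same problem data and seed as the GA example, would look like this (the actual script also prints the selected-item details shown in the output):

def main() -> None:
    seed = 42
    rng = np.random.default_rng(seed)

    num_items = 50
    weights = rng.integers(1, 50, num_items)
    values = rng.integers(10, 100, num_items)
    capacity = int(0.6 * np.sum(weights))

    # Run the quantum-inspired algorithm on the same data as the GA
    qiea = QIEAKnapsack(weights, values, capacity, rng)
    solution, fitness = qiea.run()

    total_value = np.dot(solution, values)
    total_weight = np.dot(solution, weights)

    print("\n" + "=" * 60)
    print("QIEA SOLUTION")
    print("=" * 60)
    print(f"Total Value: {total_value:.2f}")
    print(f"Total Weight: {total_weight:.2f} / {capacity}")
    print(f"Capacity Utilization: {total_weight/capacity*100:.1f}%")


if __name__ == "__main__":
    main()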
When we run the code with the same data as the GA, we obtain
Testing QIEA on 0/1 Knapsack Problem
==================================================
Starting Quantum-Inspired Evolutionary Algorithm for 0/1 Knapsack
Items: 50, Capacity: 813
------------------------------------------------------------
Generation   0: Best Value = 1724.0, Weight = 796.0/813
Generation  20: Best Value = 2257.0, Weight = 788.0/813
Generation  40: Best Value = 2257.0, Weight = 788.0/813
Generation  60: Best Value = 2257.0, Weight = 788.0/813
Generation  80: Best Value = 2257.0, Weight = 788.0/813
Generation  99: Best Value = 2257.0, Weight = 788.0/813

============================================================
QIEA SOLUTION
============================================================
Total Value: 2257.00
Total Weight: 788.00 / 813
Capacity Utilization: 96.9%
Selected Items: 35/50

Selected Items Details:
Item 1: Weight=5, Value=78, Ratio=15.60
Item 4: Weight=22, Value=52, Ratio=2.36
Item 5: Weight=22, Value=54, Ratio=2.45
Item 7: Weight=5, Value=59, Ratio=11.80
Item 9: Weight=10, Value=76, Ratio=7.60
Item 10: Weight=5, Value=71, Ratio=14.20
Item 11: Weight=26, Value=93, Ratio=3.58
Item 12: Weight=48, Value=77, Ratio=1.60
Item 13: Weight=37, Value=42, Ratio=1.14
Item 14: Weight=38, Value=97, Ratio=2.55
Item 15: Weight=36, Value=46, Ratio=1.28
Item 16: Weight=39, Value=39, Ratio=1.00
Item 17: Weight=26, Value=91, Ratio=3.50
Item 18: Weight=7, Value=43, Ratio=6.14
Item 20: Weight=23, Value=52, Ratio=2.26
Item 21: Weight=25, Value=81, Ratio=3.24
Item 22: Weight=19, Value=27, Ratio=1.42
Item 25: Weight=39, Value=71, Ratio=1.82
Item 27: Weight=20, Value=39, Ratio=1.95
Item 29: Weight=27, Value=60, Ratio=2.22
Item 30: Weight=22, Value=70, Ratio=3.18
Item 31: Weight=23, Value=94, Ratio=4.09
Item 32: Weight=12, Value=49, Ratio=4.08
Item 33: Weight=5, Value=24, Ratio=4.80
Item 34: Weight=28, Value=84, Ratio=3.00
Item 36: Weight=4, Value=73, Ratio=18.25
Item 39: Weight=14, Value=79, Ratio=5.64
Item 40: Weight=31, Value=84, Ratio=2.71
Item 41: Weight=9, Value=49, Ratio=5.44
Item 42: Weight=38, Value=82, Ratio=2.16
Item 43: Weight=35, Value=85, Ratio=2.43
Item 44: Weight=18, Value=44, Ratio=2.44
Item 45: Weight=4, Value=90, Ratio=22.50
Item 47: Weight=22, Value=31, Ratio=1.41
Item 48: Weight=44, Value=71, Ratio=1.61
The code for the GA and the QIEA programs is available on GitHub.
