Gains in strength at small deformations comparable to the material length scales have been reported in many tests on metallic materials. Numerous experiments (micro- and nano-indentation tests (see e.g. [Atkinson, 1995], [Ma and Clarke, 1995], [Nix, 1989] and [Stelmashenko et al., 1993]); twisting of copper wires of micron diameters by Fleck et al. (1994); micro-bend tests by Haque and Saif (2003))
have shown significant size-dependent effects when the material and
deformation length scales are of the same order at micron and submicron
levels. Finite element simulations employing classical plasticity theories are unable to capture these size-dependent effects, since no material length scale is introduced in such theories. Fleck et al. (1994) proposed a strain gradient plasticity theory that requires additional higher-order stresses and consequently entails significantly greater formulation and computational effort.
[Gao et al., 1999] and [Huang et al., 2000] proposed the mechanism-based strain gradient (MSG) plasticity guided by the Taylor dislocation concept to model the indentation size effect. Huang et al. (2004)
further developed the conventional mechanism-based strain gradient (CMSG) plasticity theory, which confines the strain gradient effect to the material constitutive equation without involving higher-order stress components. Adopting this approach, [Swaddiwudhipong et al., 2005] and [Swaddiwudhipong et al., 2006] formulated C0-continuity solid, plane and axisymmetric finite elements incorporating strain gradient plasticity to simulate various indentation tests
and other physical problems involving deformation at micron and
submicron levels. Alternatively, the plastic strain gradient may be evaluated from the differences in the numerical values of the plastic strain at various locations. The formulation was derived within the classical continuum plasticity framework, taking into consideration the Taylor dislocation model. Higher-order variables, and consequently higher-order continuity conditions, are not required, and conventional plasticity algorithms can be applied directly in finite element modelling.
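As a minimal illustration of the Taylor-dislocation-based flow stress commonly used in MSG/CMSG-type models, the sketch below assumes the form σ_flow = σ_ref √(f(ε_p)² + l η_p); the hardening law and all parameter values are illustrative placeholders, not those of the cited studies.

```python
import numpy as np

# Minimal sketch (assumed form, illustrative parameters) of the
# Taylor-dislocation-based flow stress used in MSG/CMSG-type models:
#   sigma_flow = sigma_ref * sqrt(f(eps_p)**2 + l * eta_p)
# f(eps_p) : uniaxial hardening function
# eta_p    : effective plastic strain gradient [1/m]
# l        : intrinsic material length [m], micron order for metals

def hardening_f(eps_p, n=0.3, sigma_y=200e6, E=110e9):
    """One common power-law choice, f = (eps_p + sigma_y/E)**n."""
    return (eps_p + sigma_y / E) ** n

def flow_stress(eps_p, eta_p, sigma_ref=500e6, l=5e-6):
    """Flow stress including the strain-gradient contribution."""
    return sigma_ref * np.sqrt(hardening_f(eps_p) ** 2 + l * eta_p)

# The gradient term matters only when l * eta_p is comparable to f(eps_p)**2,
# i.e. at micron-scale deformation; at larger scales eta_p -> 0 and the
# conventional plasticity response is recovered.
print(flow_stress(eps_p=0.05, eta_p=0.0))   # no gradient: classical limit
print(flow_stress(eps_p=0.05, eta_p=2e5))   # strong gradient: elevated stress
```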
The ISE has also been studied using spherical indenters (
[Swadener et al., 2002],
[Qu et al., 2004],
[Qu et al., 2005],
[Spary et al., 2006] and [Hou et al., 2008]). Lim and co-workers (1999) reported that the size effect increases with decreasing indenter radius, as observed in polycrystalline and single-crystal oxygen-free copper. Swadener et al. (2002) proposed that the size effects observed in conical indentation can be related to those of spherical indentation through the contact radius. Based on the expression for the average density of geometrically necessary dislocations, they found that the size effect is a function of the indentation depth for sharp indenter tips (e.g. conical and Berkovich) and of the indenter tip radius for a spherical indenter. Qu et al. (2004) implemented CMSG plasticity to study the ISE when indentation depths approach the nanometer scale. Qu et al. (2005) reported the size effect in the spherical indentation of iridium and proposed an analytical spherical indentation model to predict the indentation hardness of indented materials.
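The depth dependence for sharp tips is commonly described by the Nix–Gao relation H/H0 = √(1 + h*/h), and an analogous radius dependence, often written as H/H0 = √(1 + R*/R), is associated with spherical indentation following Swadener et al.; the sketch below is a numerical illustration only, with placeholder values for H0, h* and R*, and the spherical form should be treated as an assumption here.

```python
import numpy as np

# Illustrative comparison of the hardness size effect for a sharp indenter
# (Nix-Gao form, depth-dependent) and a spherical indenter (radius-dependent
# form attributed to Swadener et al., 2002). H0, h_star and R_star are
# placeholder values, not fitted to any specific material.

H0 = 1.0          # macroscopic hardness [GPa]
h_star = 0.5e-6   # characteristic depth for a sharp tip [m]
R_star = 10e-6    # characteristic radius for a spherical tip [m]

def hardness_sharp(h):
    """Nix-Gao: hardness depends on the indentation depth h."""
    return H0 * np.sqrt(1.0 + h_star / h)

def hardness_spherical(R):
    """Spherical indenter: hardness depends on the tip radius R, not depth."""
    return H0 * np.sqrt(1.0 + R_star / R)

for h in (0.1e-6, 0.5e-6, 2.0e-6):
    print(f"sharp tip, h = {h*1e6:.1f} um -> H = {hardness_sharp(h):.2f} GPa")
for R in (1e-6, 10e-6, 100e-6):
    print(f"sphere,   R = {R*1e6:.0f} um -> H = {hardness_spherical(R):.2f} GPa")
```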
The objective of this study is to verify that the CMSG model incorporating the strain gradient effect is able to simulate the indentation size effects observed in the experimental results of pure metals and metallic alloys, in particular copper, Al7075 and nickel.
In this study, the indenter was modelled as a rigid body and the target as a deformable body. The penalty approach was employed to model the contact between the indenter and the target. A constant Poisson's ratio of 0.3 and a friction coefficient of 0.15 between the contact surfaces were adopted for both the simulated spherical and Berkovich indentation tests. A finer mesh was used near the contact region, where high stress gradients were expected, and the element size was gradually coarsened elsewhere.
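As a conceptual illustration of the penalty contact treatment mentioned above, the sketch below computes a node-level penalty normal force with a Coulomb friction limit (μ = 0.15); the penalty stiffnesses and the simple force pair are assumptions for illustration, not the contact algorithm of the cited study, which operates on element faces within the global FE residual.

```python
# Minimal sketch of penalty normal contact with a Coulomb friction cap.
# gap < 0 means the rigid indenter penetrates the deformable target.

def penalty_contact_force(gap, slip_rate, k_pen=1e12, k_tang=1e10, mu=0.15):
    if gap >= 0.0:
        return 0.0, 0.0                      # surfaces separated: no force
    f_normal = -k_pen * gap                  # repulsive penalty normal force
    f_tang_trial = -k_tang * slip_rate       # trial (stick) tangential force
    f_tang_max = mu * f_normal               # Coulomb limit with mu = 0.15
    f_tangential = max(-f_tang_max, min(f_tang_max, f_tang_trial))
    return f_normal, f_tangential

print(penalty_contact_force(gap=-1e-9, slip_rate=1e-6))
```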
(International Journal of Solids and Structures, Volume 48, Issue 6, 15 March 2011, Pages 972–973)
5. Ant Colony Optimization (old) (optimization method)
Ant Colony Optimization (ACO) is a population-based meta-heuristic optimization method for finding approximate solutions to discrete optimization problems. It has been derived from the foraging behavior of natural ant colonies and their stigmergic communication, a form of indirect communication. ACO is basically a solution-construction heuristic. The procedure for solution construction is
based on mutual interactions among elementary agents, called artificial
ants. Any discrete optimization problem can be formulated as comprising components derived from the problem domain. A solution to this problem is a
certain combination of these components. The presence and absence of a
component in a solution can be encoded using a binary variable, where a value of 1 means that the corresponding component is present in the solution and a value of 0 means that the corresponding component is
absent. For example, the components of a minimum spanning tree problem
are the edges present in the graph. The solution to the minimum spanning
tree problem can be formulated as a string of binary variables
corresponding to the edges in the graph. A value of 1 represents the
corresponding edge being connected and a value of 0 represents the
corresponding edge being disconnected. While solving a discrete
optimization problem with ACO, the problem is formulated as a
construction graph. The construction graph is a completely connected
graph, where nodes in the graph represent the problem components and the
edges represent the transition between the components. Ants move on the
construction graph to generate a solution. They lay chemical
substance, called pheromone, on the edges between the nodes of the
graph, as they move along. The amount of pheromone deposited on the
edges is a function of the quality of the solution that is produced.
Ants' solution construction consists of transitions from node to node
in a step-by-step manner. These transitions are determined by a
probabilistic selection rule, based on the value of pheromones deposited
on the edges between the nodes by other ants. So using the information
stored in pheromone intensity, ants traverse a path in the construction
graph. This path is a solution to the discrete optimization problem. Over a period of time, the path that corresponds to the optimal solution of the optimization problem accumulates a high pheromone deposition, and any ant traversing the construction graph at that point will choose this path. In addition to pheromone intensity, some problem-specific local heuristics are also used to guide the ants through the construction graph.
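A minimal sketch of the two ingredients just described: the probabilistic transition rule, biased by pheromone intensity and a problem-specific local heuristic, and pheromone deposition proportional to solution quality. The weighting exponents alpha and beta, the dictionary data structures and the toy graph are illustrative assumptions, not a specific published ACO variant.

```python
import random

def choose_next(current, candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    """Roulette-wheel selection of the next component, biased by pheromone
    intensity and a problem-specific local heuristic value."""
    weights = [(pheromone[(current, j)] ** alpha) * (heuristic[(current, j)] ** beta)
               for j in candidates]
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return j
    return candidates[-1]

def deposit(pheromone, solution_edges, solution_cost, Q=1.0):
    """Better (cheaper) solutions deposit more pheromone on their edges."""
    for edge in solution_edges:
        pheromone[edge] += Q / solution_cost

# toy usage on a three-node construction graph
tau = {(0, 1): 1.0, (0, 2): 3.0}
eta = {(0, 1): 0.5, (0, 2): 0.5}
print(choose_next(0, [1, 2], tau, eta))   # node 2 is chosen more often
```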
ACO has been successfully applied to a large number of combinatorial optimization problems, including travelling salesman problems, vehicle routing problems and quadratic assignment problems. ACO has also been applied successfully to scheduling problems, such as single machine and flow shop problems, and to graph coloring problems.
(Panigrahi, B.K., Computational Intelligence in Power Engineering, Springer, 2010, pp. 31–32)
Ant Colony Optimization (new-better)
Ant colony optimization (ACO) [36] is one of the most recent techniques for approximate optimization. The inspiring source of ACO algorithms is real ant colonies. More specifically, ACO is inspired by the ants' foraging behavior. At the core of this behavior is the indirect communication between the ants by means of chemical pheromone trails, which enables them to find short paths between their nest and food sources. This characteristic of real ant colonies is exploited in ACO algorithms in order to solve, for example, discrete optimization problems.
Depending
on the point of view, ACO algorithms may belong to different classes of
approximate algorithms. Seen from the artificial intelligence (AI)
perspective, ACO algorithms are one of the most successful strands of
swarm intelligence
[16] and
[17].
The goal of swarm intelligence is the design of intelligent multi-agent
systems by taking inspiration from the collective behavior of social
insects such as
ants, termites, bees, wasps,
and other animal societies such as flocks of birds or fish schools.
Examples of “swarm intelligent” algorithms other than ACO are those for
clustering and data mining inspired by
ants' cemetery building behavior
[55] and
[63], those for dynamic task allocation inspired by the behavior of wasp colonies [22], and particle swarm optimization [58].
Seen from the operations research (OR) perspective, ACO algorithms belong to the class of metaheuristics
[13],
[47] and
[56]. The term
metaheuristic, first introduced in
[46], derives from the composition of two Greek words.
Heuristic derives from the verb
heuriskein (
ευρισκειν) which means “to find”, while the suffix
meta means “beyond, in an upper level”. Before this term was widely adopted, metaheuristics were often called
modern heuristics [81].
In addition to ACO, other algorithms such as evolutionary computation,
iterated local search, simulated annealing, and tabu search, are often
regarded as metaheuristics. For books and surveys on metaheuristics see
[13],
[47],
[56] and
[81].
This review is organized as follows. In Section 2 we outline the origins of ACO algorithms. In particular, we present the foraging behavior of real ant colonies and show how this behavior can be transferred into a technical algorithm for discrete optimization. In Section 3 we provide a description of the ACO metaheuristic in more general terms, outline some of the most successful ACO variants nowadays, and list some representative examples of ACO applications. In Section 4, we discuss some important theoretical results. In Section 5, we show how ACO algorithms can be adapted to continuous optimization. Finally, Section 6 gives examples of a recent successful strand of ACO research, namely the hybridization of ACO algorithms with more classical AI and OR methods. In Section 7 we offer conclusions and an outlook to the future.
The origins of ant colony optimization:
Marco Dorigo and colleagues introduced the first ACO algorithms in the early 1990's
[30],
[34] and
[35]. The development of these algorithms was inspired by the observation of
ant colonies.
Ants are social insects. They live in
colonies and their behavior is governed by the goal of
colony survival rather than being focused on the survival of individuals. The behavior that provided the inspiration for ACO is the
ants' foraging behavior, and in particular, how
ants can find shortest paths between food sources and their nest. When searching for food,
ants initially explore the area surrounding their nest in a random manner. While moving,
ants leave a chemical pheromone trail on the ground.
Ants
can smell pheromone. When choosing their way, they tend to choose, in
probability, paths marked by strong pheromone concentrations. As soon as
an
ant finds a food source, it evaluates the
quantity and the quality of the food and carries some of it back to the
nest. During the return trip, the quantity of pheromone that an
ant leaves on the ground may depend on the quantity and quality of the food. The pheromone trails will guide other
ants to the food source. It has been shown in
[27] that the indirect communication between the
ants via pheromone trails—known as
stigmergy [49]—enables them to find shortest paths between their nest and food sources. This is explained in an idealized setting in
Fig. 1.
Fig. 1. An experimental setting that demonstrates the shortest path finding capability of ant colonies. Between the ants'
nest and the only food source exist two paths of different lengths. In
the four graphics, the pheromone trails are shown as dashed lines whose
thickness indicates the trails' strength.
As a first step towards an algorithm for discrete optimization we present in the following a discretized and simplified model of the phenomenon explained in Fig. 1. After presenting the model we will outline the differences between the model and the behavior of real ants. Our model consists of a graph G = (V, E), where V consists of two nodes, namely v_s (representing the nest of the ants) and v_d (representing the food source). Furthermore, E consists of two links, namely e_1 and e_2, between v_s and v_d. To e_1 we assign a length of l_1, and to e_2 a length of l_2 such that l_2 > l_1. In other words, e_1 represents the short path between v_s and v_d, and e_2 represents the long path. Real ants deposit pheromone on the paths on which they move. Thus, the chemical pheromone trails are modeled as follows. We introduce an artificial pheromone value τ_i for each of the two links e_i, i = 1, 2. Such a value indicates the strength of the pheromone trail on the corresponding path. Finally, we introduce
n_a artificial ants. Each ant behaves as follows: starting from v_s (i.e., the nest), an ant chooses with probability

p_i = τ_i / (τ_1 + τ_2),   i = 1, 2,   (1)

between path e_1 and path e_2 for reaching the food source v_d. Obviously, if τ_1 > τ_2, the probability of choosing e_1 is higher, and vice versa. For returning from v_d to v_s, an ant uses the same path as it chose to reach v_d, and it changes the artificial pheromone value associated with the used edge. More in detail, having chosen edge e_i an ant changes the artificial pheromone value τ_i as follows:

τ_i ← τ_i + Q / l_i,

where the positive constant Q is a parameter of the model. In other words, the amount of artificial pheromone that is added depends on the length of the chosen path: the shorter the path, the higher the amount of added pheromone.
The foraging of an ant colony is in this model iteratively simulated as follows: at each step (or iteration) all the ants are initially placed in node v_s. Then, each ant moves from v_s to v_d as outlined above. As mentioned in the caption of Fig. 1(d), in nature the deposited pheromone is subject to an evaporation over time. We simulate this pheromone evaporation in the artificial model as follows:

τ_i ← (1 − ρ) · τ_i,   i = 1, 2,

where ρ ∈ (0, 1] is a parameter that regulates the pheromone evaporation. Finally, all ants conduct their return trip and reinforce their chosen path as outlined above.
We implemented this system and conducted simulations with the following settings: l_1 = 1, l_2 = 2 and Q = 1. The two pheromone values were initialized to 0.5 each. Note that in our artificial system we cannot start with artificial pheromone values of 0, as this would lead to a division by 0 in Eq. (1). The results of our simulations are shown in Fig. 2. They clearly show that over time the artificial colony of ants converges to the short path, i.e., after some time all ants use the short path. In the case of 10 ants (i.e., n_a = 10, Fig. 2(a)) the random fluctuations are bigger than in the case of 100 ants (Fig. 2(b)). This indicates that the shortest path finding capability of ant colonies results from a cooperation between the ants.
Fig. 2. Results of 100 independent runs (error bars show the standard deviation for each 5th iteration). The x-axis shows the iterations, and the y-axis the percentage of the ants using the short path.
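The idealized double-bridge model above can be simulated in a few lines. The sketch below uses the stated settings (l_1 = 1, l_2 = 2, Q = 1, initial pheromone 0.5) with the choice, deposit and evaporation rules given above; the evaporation rate ρ = 0.05 is a placeholder, since the value used for Fig. 2 is not given in the excerpt.

```python
import random

# Simulation of the idealized double-bridge model: n_a ants repeatedly choose
# between a short path (l1 = 1) and a long path (l2 = 2) with probability
# p_i = tau_i / (tau_1 + tau_2); pheromone evaporates as tau_i <- (1 - rho)*tau_i
# and each returning ant deposits Q / l_i on its chosen path.

def simulate(n_ants=100, n_iter=50, l=(1.0, 2.0), Q=1.0, rho=0.05):
    tau = [0.5, 0.5]                          # initial pheromone (must be > 0)
    history = []
    for _ in range(n_iter):
        # forward trip: every ant picks a path probabilistically
        choices = []
        for _ in range(n_ants):
            p1 = tau[0] / (tau[0] + tau[1])
            choices.append(0 if random.random() < p1 else 1)
        # pheromone evaporation
        tau = [(1.0 - rho) * t for t in tau]
        # return trip: deposit pheromone inversely proportional to path length
        for c in choices:
            tau[c] += Q / l[c]
        history.append(choices.count(0) / n_ants)
    return history

usage = simulate(n_ants=100)
print(f"fraction of ants on the short path after {len(usage)} iterations: {usage[-1]:.2f}")
```

With more ants the fraction using the short path fluctuates less from iteration to iteration, which mirrors the comparison between Fig. 2(a) and Fig. 2(b).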
The main differences between the behavior of the real ants and the behavior of the artificial ants in our model are as follows:
- (1)
While real ants move in their environment in an asynchronous way, the artificial ants are synchronized, i.e., at each iteration of the simulated system, each of the artificial ants moves from the nest to the food source and follows the same path back.
- (2)
While real ants leave pheromone on the ground whenever they move, artificial ants only deposit artificial pheromone on their way back to the nest.
- (3)
The foraging behavior of real ants
is based on an implicit evaluation of a solution (i.e., a path from the
nest to the food source). By implicit solution evaluation we mean the
fact that shorter paths will be completed earlier than longer ones, and
therefore they will receive pheromone reinforcement more quickly. In
contrast, the artificial ants evaluate a
solution with respect to some quality measure which is used to determine
the strength of the pheromone reinforcement that the ants perform during their return trip to the nest.
(Physics of Life Reviews, Volume 2, Issue 4, December 2005, Pages 354–357)