Monday, May 14, 2012

Ürfet Demirkan 503111315 11th Week Unanswered Words


Multiple-Ram Presses (Forging Equipment)

Hollow, flashless forgings that are suitable for use in the manufacture of valve bodies, hydraulic cylinders, seamless tubes, and a variety of pressure vessels can be produced in a hydraulic press with multiple rams. The rams converge on the workpiece in vertical and horizontal planes, alternately or in combination, and fill the die by displacement of metal outward from a central cavity developed by one or more of the punches. Figure 14 illustrates the multiple-ram principle, with central displacement of metal proceeding from the vertical and horizontal planes.


 


Fig. 14


Examples of multiple-ram forgings. Displacement of metal can take place from vertical, horizontal, and combined vertical and horizontal planes. Dimensions given in inches.


Piercing holes in a forging at an angle to the normal direction of forging force can result in considerable material savings, as well as savings in the machining time required to generate such holes.

In addition to having the forging versatility provided by multiple rams, these presses can be used for forward or reverse extrusion. Elimination of flash at the parting line is a major factor in decreasing stress-corrosion cracking in forging alloys susceptible to this type of failure, and the multidirectional hot working that is characteristic of processing in these presses decreases the adverse directional effects on mechanical properties.

(ASM Handbook, Volume 1 Properties and Selection Irons, Steels, and High-Performance Alloys, P 43)


Contour Forging (Forging Processes)

Open-die contour or form forging requiring the use of dedicated dies has been successfully accomplished for carbon, alloy, and stainless steels as well as for superalloys. Contour forging can be advantageous under such circumstances as the following:

· Enhancement of grain flow at specific locations, when demanded by product application
· Reduction of the quantity of starting material; this is especially critical when using expensive materials such as stainless steels and superalloys
· Reduction of machining costs; this is critical when machinability or excessive material removal are factors

Open-die contour forging may be a requirement, as in the case of grain flow, or it may be an option, as in the case of material and machining cost savings. The material and machining cost savings typically outweigh the forging tooling costs.



(ASM Handbook, Volume 1 Properties and Selection Irons, Steels, and High-Performance Alloys, P 132)






Turbine Wheel Forging  (Forging Processes)




Turbine wheels, which are commonly 2.54 m (100 in.) in diameter, are forged by first upsetting a block of steel and then contour forging to provide the thick hub and thin rim sections. This is done using a shaped (contoured) bottom die, which supports the entire workpiece, and a shaped partial top (contoured swing) die. Successive strokes are taken with the top die as it is indexed around the vertical centerline of the press. The partial top die minimizes the force required to deform the metal, yet produces the desired forge envelope.

(ASM Handbook, Volume 1 Properties and Selection Irons, Steels, and High-Performance Alloys, P 132)

Hot Swaging (Forging Processes)
Hot swaging is used for metals that are not ductile enough to be swaged at room temperature or for greater reduction per pass than is possible by cold swaging. The tensile strength of most metals decreases with increasing temperature; the amount of decrease varies widely with different metals and alloys. The tensile strength of carbon steels at 540 °C (1000 °F) is approximately one-half the room-temperature tensile strength; at 760 °C (1400 °F), about one-fourth the room-temperature strength; and at 980 °C (1800 °F), about one-tenth the room-temperature strength. In practice, reductions greater than those indicated in Table 1 are sometimes possible by cold swaging without intentionally heating the work metal, because sufficient heat is generated during swaging to cause a substantial decrease in strength and increase in the ductility of the work metal.

The decrease in strength at elevated temperature does not make possible unlimited reductions at high temperatures. Because of the design and capabilities of swaging machines, the work metal must be strong enough to permit feeding of the workpiece into the machine. When the work metal has lost so much of its strength that it bends rather than feeds in a straight line, chopper dies must be used. This type of die limits the reduction in area to 25% regardless of work metal ductility. The temperature to which a work metal is heated for swaging depends on the material being swaged and on the desired reduction per pass.

(ASM Handbook, Volume 1 Properties and Selection Irons, Steels, and High-Performance Alloys, P 304)
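The approximate strength fractions quoted above for carbon steels can be turned into a rough hot-strength estimate by interpolating between the stated points. The sketch below is only an illustration of the trend, not a design calculation; the room-temperature strength used is an assumed value and the anchor fractions are the rounded figures from the passage.

# Approximate fraction of room-temperature tensile strength retained by carbon
# steel at temperature, using only the anchor points quoted in the passage.
ANCHORS_C = [(20, 1.0), (540, 0.5), (760, 0.25), (980, 0.1)]  # (deg C, fraction)

def strength_fraction(temp_c):
    """Linearly interpolate the retained-strength fraction between anchor points."""
    pts = ANCHORS_C
    if temp_c <= pts[0][0]:
        return pts[0][1]
    if temp_c >= pts[-1][0]:
        return pts[-1][1]
    for (t0, f0), (t1, f1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)

room_temp_uts_mpa = 600.0   # assumed room-temperature tensile strength (illustrative)
for t in (540, 760, 980):
    frac = strength_fraction(t)
    print(f"{t} C: ~{frac * room_temp_uts_mpa:.0f} MPa ({frac:.0%} of room-temperature strength)")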


Sunday, May 13, 2012

Melkan Çelik-503041311-10th week insufficient words


Mean Time Between Failure (Quality management term)
There is no previous definition.

MTBF stands for mean operating time between failures (wrongly mentioned as mean time between failures throughout the literature) and is used as a reliability measure for repairable systems. In British Standard (BS 3527) MTBF is defined as follows:
For a stated period in the life of a functional unit, the mean value of the lengths of time between consecutive failures under stated condition.
MTBF is extremely difficult to predict since it depends on several factors such as operating conditions, maintenance and repair effectiveness etc. In fact, it is very rarely predicted with an acceptable accuracy.
Characteristics of MTBF:
1. The value of MTBF is equal to the mean time to failure (MTTF) if after each repair the system is recovered to as good as new.
2. MTBF = 1/λ for the exponential distribution, where λ is the scale parameter (also the hazard function).
Applications of MTBF:
1. For a repairable system, MTBF is the average time in service between failures. Note that this does not include the time spent by the system at the repair facility.
2. MTBF is used to predict steady-state availability measures such as inherent and operational availability.
(U. Dinesh Kumar et al., Reliability and Six Sigma, pages 95-98)
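A minimal numerical companion to the two characteristics and applications listed above. The operating and repair times are made-up values for the example; the inherent-availability relation MTBF/(MTBF+MTTR) is the standard steady-state formula, used here only as an illustrative sketch.

# Illustrative sketch (not from the cited book): estimating MTBF from logged
# operating times between failures, and relating it to lambda and availability.
uptimes_hours = [420.0, 510.0, 380.0, 465.0, 400.0]   # hypothetical operating times between failures
repair_times_hours = [6.0, 8.0, 5.0, 7.0, 6.0]        # hypothetical times spent at the repair facility

mtbf = sum(uptimes_hours) / len(uptimes_hours)         # mean operating time between failures (repair time excluded)
mttr = sum(repair_times_hours) / len(repair_times_hours)

# Characteristic 2: for an exponential distribution, MTBF = 1/lambda
failure_rate = 1.0 / mtbf

# Application 2: steady-state (inherent) availability from MTBF and MTTR
inherent_availability = mtbf / (mtbf + mttr)

print(f"MTBF = {mtbf:.1f} h, lambda = {failure_rate:.5f} failures/h, "
      f"inherent availability = {inherent_availability:.4f}")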

Saturday, May 12, 2012

503111312 - Selçuk Keser - 10th WEEK'S UNANSWERED WORD


GAS WARMERS


(OLD DEFINITION)

On occasion, the gas in cylinders is withdrawn so fast that the regulator could ice up because of the change in temperature. If this occurs, an electrically heated gas warmer is available to be installed in-line, and this warmer would heat the gas out of the cylinder before it reached the regulator. The rule of thumb is to consider a warmer if the use of gas exceeds 35 acfm. The actual figure should be based on experience with the specific type of gas being used. Ask the supplier what his or her experience has been. Carbon dioxide is a particular problem.
(Facility Piping Systems Handbook, 2nd edition, Michael Frankel, p. 14.79)



(NEW DEFINITION)


Ruttan's Air Warmer is a double box-stove, which heats by radiation, and also by air which is brought from without, warmed by passing between the inner and outer plates, and delivered into the apartment. The inventor, however, was so intent upon a "system of ventilation" which implied the adaptation of the house to it, that he failed to make his stoves readily available for ordinary use.

(The Popular Science Monthly - Publisher: Bonnier Corporation - Nov. 1879-Apr. 1880, P148)


Words without grades!

Friends, if you write the words that are missing grades under this heading in the format the professor specified, I will forward them to the professor by e-mail.

Graduate Student / Week / Undefined word(s) / Undergraduate Student

Thursday, May 10, 2012

Metin Atmaca 030080007 11th week part 2


3. Acceptance quality level (AQL) (Management):

Previous Definition:

The acceptance quality level (AQL) commonly is defined as the level at which there is a 95% acceptance probability for the lot. This percentage indicates to the manufacturer that 5% of the parts in the lot may be rejected by the consumer (producer's risk). Likewise, the consumer knows that 95% of the parts are acceptable (consumer's risk).

(Kalpakjian S., Schmid S.R., Manufacturing engineering and technology, Ed. 5th, p. 1131)

New Definition (Better):

When an acceptance-sampling plan is designed, management specifies a quality standard commonly referred to as the acceptable quality level (AQL). The AQL reflects the consumer’s willingness to accept lots with a small proportion of defective items. The AQL is the fraction of defective items in a lot that is deemed acceptable. For example, the AQL might be two defective items in a lot of 500, or 0.004. The AQL may be determined by management to be the level that is generally acceptable in the marketplace and will not result in a loss of customers. Or, it may be dictated by an individual customer as the quality level it will accept. In other words, the AQL is negotiated.

The probability of rejecting a production lot that has an acceptable quality level is referred to as the producer’s risk, commonly designated by the Greek symbol α. In statistical jargon, α is the probability of committing a type I error.
There will be instances in which the sample will not accurately reflect the quality of a lot and a lot that does not meet the AQL will pass on to the customer. Although the customer expects to receive some of these lots, there is a limit to the number of defective items the customer will accept. This upper limit is known as the lot tolerance percent defective, or LTPD (LTPD is also generally negotiated between the producer and consumer). The probability of accepting a lot in which the fraction of defective items exceeds the LTPD is referred to as the consumer’s risk, designated by the Greek symbol β. In statistical jargon, β is the probability of committing a type II error.

(Taylor, B.W., Russel, R.S, Operations Management, p.149)
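As a numerical companion to the definitions above, the sketch below evaluates a hypothetical single-sampling plan and estimates the producer's risk α at the AQL and the consumer's risk β at the LTPD with the binomial distribution. The sample size n, acceptance number c, and LTPD value are assumed for illustration and are not taken from the cited text; the AQL of 0.004 is the example given above.

from math import comb

def acceptance_probability(n, c, p):
    """Probability of accepting a lot with fraction defective p when n items
    are sampled and at most c defectives are allowed."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical sampling plan and quality levels (illustrative only)
n, c = 100, 2          # sample 100 items, accept the lot if 2 or fewer are defective
aql = 0.004            # e.g. 2 defectives in a lot of 500, as in the example above
ltpd = 0.06            # upper limit the consumer will tolerate (assumed value)

producer_risk = 1 - acceptance_probability(n, c, aql)   # alpha: rejecting a lot at the AQL (type I error)
consumer_risk = acceptance_probability(n, c, ltpd)      # beta: accepting a lot at the LTPD (type II error)

print(f"alpha = {producer_risk:.4f}, beta = {consumer_risk:.4f}")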



4. Design-for-Manufacturability programs (Manufacturing Program):

There is No Previous Definition.

New Definition:

General-purpose DFM programs include modules for assembly, stamping, and other processes as well as machining. Because desirable machining practices vary depending on the volume of production and the machine tools available, it is difficult to write a widely applicable general-purpose design-for-machining module. Some large companies have proprietary in-house codes used to apply design-for-machining rules in a manner tailored to their business operations.

A DFM program typically has an input module and an analysis module. Data input is not as automated as in CAPP programs; rather than reading required geometric information from a CAD file, part features and dimensions must generally be input manually according to some format and classification scheme. This is partly because DFM programs are intended to be applied at an earlier stage of the design process (when no complete CAD model of the part may be available), and partly because additional subjective information, such as the perceived relative machinability of various materials or the relative penalty associated with given undesirable features, is often required.

Once data are input, the analysis module is used to compute a relative machinability score for the design as entered. The algorithm used to compute the score varies from program to program, but in general the score depends on the complexity of the design and the penalties associated with difficult-to-machine materials or features. In DFM workshops, some rough estimate of the machining cost can also be computed (e.g., using a spreadsheet) for the given design. The output of the program is a detailed breakdown of components of the score due to individual features, which often clearly identifies the feature(s) most responsible for complexity or excessive cost.

Unlike CAPP programs, DFM programs are used for comparison rather than formal optimization. Usually several design alternatives are compared to a benchmark design, and based on the DFM score the best design is chosen and refined. For complex parts, the process may be repeated at various stages of the design (e.g., at an early stage and before fabrication of the first prototype). Design for manufacturability programs can be used for parts manufactured on either CNC or dedicated production equipment. They are well suited for designing complex parts for mass production and are currently more widely used than CAPP programs in these applications.

(ASM Handbook Vol. 20 Materials Selection and Design, p. 1797)
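The scoring idea described above (a relative machinability score driven by design complexity plus penalties for difficult-to-machine materials or features) can be illustrated with a toy calculation. The feature penalties and material factors below are invented for the sketch and do not come from the ASM Handbook or any particular DFM program.

# Toy design-for-machining score: lower is better. All penalty values are
# illustrative assumptions, not taken from any real DFM tool.
FEATURE_PENALTY = {
    "plain_hole": 1.0,
    "deep_hole": 4.0,               # e.g. depth-to-diameter ratio > 5
    "thin_wall": 5.0,
    "internal_sharp_corner": 3.0,
    "tight_tolerance_face": 2.5,
}
MATERIAL_FACTOR = {"free_machining_steel": 1.0, "alloy_steel": 1.4, "superalloy": 3.0}

def dfm_score(features, material):
    """Relative machinability score plus a per-feature breakdown of penalties."""
    breakdown = {f: FEATURE_PENALTY[f] * count for f, count in features.items()}
    total = MATERIAL_FACTOR[material] * (1.0 + sum(breakdown.values()))
    return total, breakdown

# Compare two hypothetical design alternatives against each other
design_a = {"plain_hole": 6, "deep_hole": 1, "tight_tolerance_face": 2}
design_b = {"plain_hole": 6, "thin_wall": 2, "tight_tolerance_face": 2}

for name, feats in (("A", design_a), ("B", design_b)):
    score, detail = dfm_score(feats, "alloy_steel")
    print(name, round(score, 1), detail)   # the breakdown flags the costliest features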

Wednesday, May 9, 2012

Mehmet Can ÇAPAR 030070131 11th week definitions (Bonus Week) Part 2



3-Scratch Hardness Test (Group: hardness test)

There is no older definition

(new definition)
One of the oldest hardness testing methods is the hardness scale according to Mohs, which is based on a series of minerals using the principle: "Who scratches whom?" The scale according to Mohs provided comparison values (see Table 1). Thus, the Mohs hardness 5 (apatite) is, for example, harder than the Mohs hardness 4 (fluorite).
The advantages of scratch hardness testing are:
-It is an easy-to-handle method.
Disadvantages include:
-It is applicable to metallic materials to a limited degree only.
-The differentiation of the hardness values is inadequate for metals.

(Konrad Herrmann, Hardness Testing: Principles and Applications, p. 95)




4-Elastoviscoplasticity (material behaviour)
(old definition)
The elastoviscoplastic rheological model consists of a slider and a dashpot in parallel, and this combination is in series with a Hookean spring. For the uniaxial model the behaviour is purely elastic until the stress exceeds the yield stress. Once in the plastic region the viscous component becomes active, and for rapidly applied loads the stress can exceed the plastic limit. If unloading takes place from the yielded state, the strain path followed is different from that of loading and permanent deformation takes place. Elastoviscoplastic behaviour is also loading-path dependent, since it has been demonstrated by experiments that two loading paths may reach the same point on the yield surface by different routes.

(The Finite element method in heat transfer analysis ; Roland Wynne Lewis ; pg.198 , 1996)

(new definition) (better)
The deformation of solid materials is usually purely elastic when the stresses are below a certain critical level, called the yield stress. When the stresses are above this threshold, a combination of elastic and plastic deformation occurs, where the latter type of deformation is recognized by being permanent.

(Hans Petter Langtangen, Computational Partial Differential Equations: Numerical Methods and Diffpack, pg: 522)

Berk Korucu - 030080104 - 11th Week

1) Microhardness Test (Hardness Test)


There is no previous definition.



Current practice in the United States divides hardness testing into two categories: macrohardness and microhardness. Macrohardness refers to testing with applied loads on the indenter of more than 1 kg and covers, for example, the testing of tools, dies, and sheet material in the heavier gages. In microhardness testing, applied loads are 1 kg and below, and material being tested is very thin (down to 0.0125 mm, or 0.0005 in.). Applications include extremely small parts, thin superficially hardened parts, plated surfaces, and individual constituents of materials.


(H. Chandler, Hardness Testing  2nd Edition, p.3)


2) Special Indentation Test (Hardness Test)


There is no previous definition.



Special Indentation Tests: Modifications of this type of test have been developed, and a few have had some commercial acceptance. Perhaps the best example is the Monotron test. This instrument used a 0.75 mm (0.03 in.) hemispherical diamond indenter. The Monotron principle was the reverse of the more conventional indentation testers such as the Brinell and Rockwell. Instead of using a prescribed force and measuring the depth or area, the Monotron indenter was forced into the material being tested to a given depth, and the hardness was determined by the force required to achieve this depth of penetration. This instrument was developed primarily for evaluating the true hardness of nitrided cases, which were, at one time, difficult to evaluate accurately. The Monotron has not been manufactured for many years, and it is doubtful whether any are still in use.


(H. Chandler, Hardness Testing  2nd Edition, p.10)


3) Machining Costs (Accounting)


There is no previous definition.



The total cost of a machining operation includes contributions from some or all of the following components:

· Raw material costs: The cost of unmachined stock, which may be in the form of a standard bar or slab, casting, or forged blank
· Labor costs: The wages for the machine operator, usually measured in units of standard hours
· Setup costs: The cost of special fixtures or tool setups and the wages paid to setup personnel
· Tooling costs: The cost of perishable tooling, including inventory, and any special tooling required for the operation
· Equipment costs: The cost of the machine tools, including required capital expenditures, facilities costs, and machine depreciation
· Scrap and rework costs: The cost of repairing or disposing of finished or partially finished parts of unacceptable quality
· Programming costs: The cost of writing numerical control (NC) programs to generate the required toolpaths
· Engineering costs: Salaries paid to engineers for process design, validation, and other overhead functions


(P. Andersen et al. , ASM Handbook vol 20 Materials Selection And Design, p.1771)
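A minimal sketch of how the cost components listed above might be rolled up into a per-part machining cost. The breakdown categories follow the list, but every figure, the lot size, and the amortization scheme are assumptions made only for illustration.

# Illustrative per-part machining cost roll-up; all numbers are invented.
lot_size = 500

per_part_costs = {
    "raw_material": 12.00,                 # unmachined stock per part
    "labor": 0.25 * 45.00,                 # 0.25 standard hours at an assumed shop rate
    "tooling": 1.10,                       # perishable tooling consumed per part
    "scrap_and_rework": 0.60,              # expected quality cost per part
}
per_lot_costs = {
    "setup": 380.00,                       # fixtures, tool setups, setup labor
    "programming": 600.00,                 # NC programming for the job
    "engineering": 450.00,                 # process design and validation overhead
}
equipment_cost_per_hour = 28.00            # machine depreciation and facilities
machine_hours_per_part = 0.20

total_per_part = (
    sum(per_part_costs.values())
    + sum(per_lot_costs.values()) / lot_size      # amortize lot-level costs over the lot
    + equipment_cost_per_hour * machine_hours_per_part
)
print(f"Estimated machining cost per part: {total_per_part:.2f} (same currency as the inputs)")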


4) Early Cost Estimating (Accounting)


There is no previous definition.



The problem of estimating part and tooling costs before the part has been fully detailed is discussed using machining as an example because this is one of the most common shape-forming processes. Several conventional cost estimating methods for machining are available both in handbook form, such as the Machining Data Handbook (Ref 18) and the AM Cost Estimator (Ref 19), and in software form. However, all of these methods are meant to be applied after the part has been detailed and its production has been planned, and they are not tailored for use by a designer. During the early stages of design, the designer will not wish to specify, for example, all the work-holding devices and tools that might be needed--a detailed design will not yet be available. Indeed, a final decision even on the work material might not have been made.


For early cost estimating an important assumption has to be made. The designer should be able to expect that, when the design is finalized, care will have been taken to avoid unnecessary manufacturing expense at the detail-design stage and that manufacturing will take place under efficient conditions.


To illustrate how such an assumption can help in providing reasonable estimates, consider the effect of the metal-removal rate on grinding costs, as shown in Fig. 6. These cost curves indicate that as the removal rate is increased, the cost of grinding-wheel wear increases in proportion. At the same time, the cost of grinding decreases because the grinding cycle is shortened; in fact, the grinding costs are inversely proportional to the removal rate.


(P. Andersen et al. , ASM Handbook vol 20 Materials Selection And Design, p.1558)
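The relationship described above (wheel-wear cost proportional to the removal rate, grinding-time cost inversely proportional to it) implies a removal rate that minimizes the total cost. The sketch below uses assumed proportionality constants purely to show the trade-off; it is not a grinding cost model from the cited handbook.

# Total grinding cost per part as a function of metal-removal rate r:
#   wheel-wear cost    ~ a * r    (proportional to removal rate)
#   grinding-time cost ~ b / r    (inversely proportional to removal rate)
# The constants a and b are assumed for illustration.
a = 0.08    # wheel-wear cost coefficient (assumed)
b = 45.0    # cycle-time cost coefficient (assumed)

def total_cost(r):
    return a * r + b / r

# The analytic minimum of a*r + b/r lies at r = sqrt(b/a).
r_opt = (b / a) ** 0.5
for r in (5.0, 10.0, r_opt, 40.0, 80.0):
    print(f"removal rate = {r:6.1f}: cost = {total_cost(r):6.2f}")
print(f"Cost-minimizing removal rate (under these assumptions): {r_opt:.1f}")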

Tuesday, May 8, 2012

Project Grades

Friends, I got the project grades from the professor. If you e-mail me, I can send them to you as well.
ufukcivelek@ymail.com

Serdar Yüksel 030070129 11th week words

1. Delayed Fracture (Hydrogen Embrittlement) (new)   (material properties)


          THERE IS NO OLDER DEFINITION


Internal Hydrogen Embrittlement (IHE)
Internal hydrogen embrittlement (IHE) is caused by hydrogen, contained (pre-existing) in the material, that acts in combination with extant stress, residual and applied. When a steel part fractures while sitting in air on a shelf, with no externally applied stress, this process of time-delayed fracture is caused by residual stress in a process that is classically termed internal hydrogen embrittlement (IHE). This behavior is usually associated with steels of relatively high strength levels, such as those used in bolts or landing gears. It is caused by the presence of residual hydrogen and residual stresses from processing. These "causes" initiate microcracking that proceeds eventually to rupture. Applied tensile stresses in combination with residual stresses from processing can also produce time-delayed fracture or IHE, as in electrochemically plated tensile bolts. Commercially, IHE is treated differently than environmental hydrogen embrittlement (EHE), which includes any gaseous or aqueous environment that promotes hydrogen charging of the material. In both processes, the cracking is associated with diffusion and localization of hydrogen near defects and microcracks.

Environmental Hydrogen Embrittlement (EHE) and Stress Corrosion Cracking (SCC)
When cracking occurs in an aqueous solution, a distinction is made between two forms of Environmentally Assisted Cracking (EAC): stress corrosion cracking (SCC) and environmental hydrogen embrittlement (EHE). An impressed cathodic current may often provide protection against SCC, but steel that is "protected" against corrosion by this means may be subject to EHE by cathodic hydrogen absorption/adsorption. For an impressed anodic current, the converse is true. Although this is a simplistic view of SCC, it is sometimes useful. Nevertheless, it would be remiss to fail to note that under anodic polarization, hydrogen production might still result at a crack tip where its presence can be most harmful. Bulk anodic polarization does not ensure that the crack tip is polarized.
Differences and similarities between SCC and EHE are further described by Latanision [19], who also discusses hydrogen-induced phase transformations in the solid, the observation of hydrogen evolution from the tip of a propagating crack, fractography, crystal structure, and the influence of solid-state impurities.
Thompson, Bernstein and Pressouyre [20,52] discuss the significance of a number of metallurgical variables (including chemical composition), microstructural components (precipitates, grain size and shape, crystallographic texture), heat treatment and its effects on these variables, and processing, especially thermomechanical treatments for enhancement/optimization of properties.
Treseder [21] indicates ranges of electrode potentials for SCC in various environments and indicates that EHE may be a factor in complex environments of sulfides, cyanides, carbonates, and ammonia, and he notes that the term SCC is attributed to various literature references where EHE is the appropriate environmental influence.

(Hydrogen Damage, C. G. Interrante, L. Raymond, 2005, page: 325)




2. River Patterns (Cracking) (new)        (cracking mechanism)
 
           THERE IS NO OLDER DEFINITION

As a cleavage crack propagates through a crystal, it is most often broken into a set of parallel cracks by interaction with imperfections and microstructural features. Thus, gross crack propagation may be the net result of the simultaneous propagation of individual crack segments on sets of parallel planes. As the individual segments approach one another (and possibly overlap), the segments join by fracture of the connecting ligament, producing steps in the fracture surface. These steps are generally observed to converge in the direction of local crack propagation, either cancelling or reinforcing each other to produce the familiar river patterns on individual facets.
Comprehensive investigations of the nature of steps observed in cleavage fractures have been carried out by Berry [25] and Low [26]. These investigations established that the steps within a single cleavage facet may be attributed to one or more of the following factors: intersection of screw dislocations with the cleavage plane, secondary cleavage, shear, secondary fracture on a twin-matrix interface or, in the case where considerable overlap of two crack segments occurs, deformation and necking-down of the interconnecting ligament. Since a number of processes may be involved in the formation of cleavage steps, facets exhibiting wide variations in the appearance of steps and river patterns are observed. Examples illustrating the extremes in appearance of steps and river patterns are given in Fig. 7. Steps formed by secondary cleavage, shear, or twin-interface separation may be expected to appear as distinctly resolvable subfacets (Fig. 7a), while those associated with the formation of flaps or extensive local deformation would appear as heavy lines with less resolvable detail (Fig. 7b).
Since steps and river patterns result from the division of a crack into parallel segments, they may be expected to originate at regions of mismatch (grain boundaries or subboundaries). Figure 8 illustrates a case where new river patterns initiated at the point where the crack crossed a boundary. In cases where the crack crosses a low-angle boundary, an increase in step density occurs. Propagation of a cleavage crack across a high-angle grain boundary usually requires the initiation of a new crack in the second grain, resulting in the formation of new sets of rivers or steps.

  
FIG. 8 - River patterns on cleavage surface of ordered Fe-49Co-2V. After Johnston et al [14]. (×30,000)

 (ELECTRON FRACTOGRAPHY, ASTM SPECIAL TECHNICAL PUBLICATION NO. 436, 1968, page: 42,43,46)



3. Arc Initiation (new) (arc properties)


           THERE IS NO OLDER DEFINITION

Arc initiation on metals subjected to a gas discharge is a problem important to controlled thermonuclear research. Previous work has suggested the importance of surface contamination to arc initiation under these conditions. In particular, second-phase particles of high electrical resistivity, present as impurities in metals, were believed to be important arc initiators. By varying the temperature of a refractory metal, the second-phase content may be influenced in various ways. We have made an experimental investigation of arc initiation on molybdenum and other refractory metals as a function of temperature and heat treatment, the results of which are consistent with these ideas. The interpretation suggests that arcing stops when certain second phases dissolve at high temperatures, that there is a critical size of particle for arc initiation which decreases with increasing specimen voltage and increasing ion current density, and that it is possible to deplete the specimen of the arc-initiating impurities by combined electrical treatment and heat treatment. The latter process can be carried to the stage where a temperature-independent non-arcing state is achieved at a specimen voltage of 2 kV and an ion current density of 23 A/cm2 in a pulse of 200 μsec duration.
(Journal of Nuclear Materials, Volume 6, Issue 1, May–June 1962, Page 35)


4. Indentation Test (new)           (test methodology)

         THERE IS NO OLDER DEFINITION

Material characterization using instrumented indentation tests has been extended to new applications as a result of technological advances in microelectronics and nano-technology. Not only the hardness of a material but also other mechanical properties, such as Young's modulus, yield strength and strain hardening exponent, can be deduced from load–displacement indentation curves. Most indentation tests have been intensively conducted at indentation depths from micron down to submicron levels to accommodate the need for material properties of small volumes in the fields of MEMS and NEMS. In many of these applications, material properties are shown to be inconsistent with those provided by the classical plasticity approach, exhibiting a strong size effect.
Gains in strength at such small deformation, comparable to the material length scales, have been reported for many tests on metallic materials. Numerous experiments (micro- and nano-indentation tests (see e.g. [Atkinson, 1995], [Ma and Clarke, 1995], [Nix, 1989] and [Stelmashenko et al., 1993]); twisting of copper wires of micron diameters by Fleck et al. (1994); micro-bend tests by Haque and Saif (2003)) have shown significant size-dependent effects when the material and deformation length scales are of the same order at micron and submicron levels. Finite element simulations employing classical plasticity theories are unable to capture these size-dependent effects. The size effects cannot be simulated via classical plasticity theories as no material length scale is introduced. Fleck et al. (1994) proposed the theory of strain gradient plasticity, requiring additional higher-order stress and consequently leading to significantly greater formulation and computational efforts. [Gao et al., 1999] and [Huang et al., 2000] proposed the mechanism-based strain gradient (MSG) plasticity guided by the Taylor dislocation concept to model the indentation size effect. Huang et al. (2004) further developed the conventional mechanism-based strain gradient (CMSG) plasticity theory, confining the presence of the strain gradient plasticity to the material constitutive equation without involving the higher-order stress components. Adopting this approach, [Swaddiwudhipong et al., 2005] and [Swaddiwudhipong et al., 2006] formulated C0 continuity solid, plane and axisymmetric finite elements incorporating strain gradient plasticity to simulate various indentation tests and other physical problems involving deformation at micron and submicron levels. Alternatively, the strain gradient plasticity may also be determined via the differences in numerical values of the plastic strain at various locations. The formulation was derived based on the classical continuum plasticity framework, taking into consideration the Taylor dislocation model. Higher-order variables and consequently higher-order continuity conditions are not required, and the direct application of conventional plasticity algorithms in finite element modelling is applicable.
Indentation size effect (ISE) has been studied extensively for both sharp and spherical indentation tests. The measured hardness of metallic materials increases with decreasing indentation depth for conical and Berkovich tips ([McElhaney et al., 1998], [Nix and Gao, 1998], [Oliver and Pharr, 1992], [Stelmashenko et al., 1993] and [Tho et al., 2006]) and decreasing indenter radius for spherical indenters ([Lim and Chaudhri, 1999], [Spary et al., 2006] and [Swadener et al., 2002]). Tho et al. (2006) performed experimental and numerical studies on copper and aluminium alloy Al7075 to investigate the size effect of Berkovich indentation tests. Their findings showed that the strength of indented materials increased when the indentation depth was reduced. Another ISE study was conducted by Zong et al. (2006) on fcc single crystals (Ni, Au and Ag). They presented nano- and micro-indentation test results and a theoretical study of indentation size effects for those crystalline materials. In their study, a three-sided pyramidal Berkovich tip was used as the indenter for nano-indentation tests, while a Vickers diamond tip was used for micro-indentation tests. They employed MSG theories proposed by [Gao et al., 1999], [Huang et al., 2000] and [Nix and Gao, 1998] to study the size dependence of the crystals at submicron levels. Strong size effects in the hardness were observed in all specimens.
The ISE has also been studied using spherical indenters ([Swadener et al., 2002], [Qu et al., 2004], [Qu et al., 2005], [Spary et al., 2006] and [Hou et al., 2008]). Lim and his co-workers (1999) have reported that the size effect increases with decreasing indenter radius, as observed in polycrystalline and single-crystal oxygen-free copper. Swadener et al. (2002) have proposed that the size effects observed in conical indentation can be related to those of spherical indentation using the contact radius. They found that the size effect is a function of the indentation depth for sharp indenter tips (e.g. conical and Berkovich) and of the indenter tip radius for a spherical indenter, depending on the expression of the average geometrically necessary dislocation density. Qu et al. (2004) implemented CMSG in order to study the ISE when indentation depths approach the nanometer scale. Qu et al. (2005) reported the size effect in the spherical indentation of iridium. They proposed an analytical spherical indentation model to predict the indentation hardness of indented materials.
In the present study, spherical indentation tests were conducted on copper and aluminium alloy Al7075. Indentation tests were designed for various maximum indentation depths of 1200, 1800 and 2500 nm. The ISE for spherical indentation tests on copper and the aluminium alloy Al7075 reported here was studied in the same framework as adopted for the Berkovich indentation size effect reported earlier by Tho et al. (2006). Another series of experimental studies of size effects was conducted on nickel by using a three-sided pyramidal Berkovich tip for various depths of indentation ranging from 350 to 2500 nm.
The objective of the study is to verify that the CMSG model incorporating the strain gradient effect is able to simulate the indentation size effects observed in the experimental results of pure metals and metallic alloys, especially copper, Al7075 and nickel.

Numerical model:

Two-dimensional axisymmetric finite elements were adopted to model the target materials for simulated spherical indentation tests, and three-dimensional elements for Berkovich indentation tests in the present study. The far-field effect and convergence studies were carried out. The former study showed that a domain size of 100 micron by 100 micron is sufficiently large to simulate indentation tests using a spherical indenter tip with a radius of 5 microns. A domain size of 115.47 micron by 200 micron by 150 micron for lengths AH, HI and AJ respectively, indicated in Fig. 1, is required to safely avoid the boundary effect near the indenter tip. Based on the convergence study, a total of 8328 CAX8 elements were used in the formulation of the spherical indentation with the 5-micron radius tip. On the other hand, to simulate the indentation by a Berkovich indenter possessing a threefold symmetry, only one-sixth of the target material had to be considered in the 3D model. A finite element mesh for the target material comprising 5338 second-order solid elements (C3D20) was adopted in the latter.

Fig. 1. Typical Berkovich indentation model.
In this study, the indenter was modelled as a rigid body and the target as a deformable body. The penalty approach was employed to model the contact problem between the indenter and the target. A constant Poisson's ratio of 0.3 and a friction coefficient of 0.15 between the contact surfaces were adopted for both simulated spherical and Berkovich indentation tests. A finer mesh was used near the contact region, where a high stress gradient was expected, and the element size was gradually coarser elsewhere.
(International Journal of Solids and Structures, Volume 48, Issue 6, 15 March 2011, Pages 972,973)
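The depth dependence of hardness discussed in the passage is often summarized by the Nix–Gao relation cited there, H/H0 = sqrt(1 + h*/h), where H0 is the macroscopic hardness at large depth and h* is a characteristic length. The sketch below evaluates this relation for assumed H0 and h* values (not fitted to the cited experiments) simply to show the trend of increasing hardness at shallower depths.

from math import sqrt

def nix_gao_hardness(depth_nm, h0_gpa, h_star_nm):
    """Nix-Gao indentation size effect: H = H0 * sqrt(1 + h*/h)."""
    return h0_gpa * sqrt(1.0 + h_star_nm / depth_nm)

# Assumed material parameters, for illustration only
h0 = 1.0        # GPa, hardness in the large-depth limit
h_star = 400.0  # nm, characteristic length controlling the size effect

for depth in (350, 1200, 1800, 2500):   # depths similar to those used in the passage
    print(f"h = {depth:5d} nm -> H = {nix_gao_hardness(depth, h0, h_star):.2f} GPa")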


5. Ant Colony Optimization (old) (optimization method)

Ant Colony Optimization (ACO) is one of the population-based meta-heuristic optimization methods for finding approximate solutions to discrete optimization problems. It has been derived from the foraging behavior, or stigmergic communication (a form of indirect communication), of natural ant colonies. ACO is basically a solution-construction heuristic. The procedure for solution construction is based on mutual interactions among elementary agents, called artificial ants. Any discrete optimization problem can be formulated as comprising components derived from the problem domain. A solution to this problem is a certain combination of these components. The presence and absence of a component in a solution can be encoded by using a binary variable, where a value of 1 means that the corresponding component is present in the solution and a value of 0 means that the corresponding component is absent. For example, the components of a minimum spanning tree problem are the edges present in the graph. The solution to the minimum spanning tree problem can be formulated as a string of binary variables corresponding to the edges in the graph. A value of 1 represents the corresponding edge being connected and a value of 0 represents the corresponding edge being disconnected. While solving a discrete optimization problem with ACO, the problem is formulated as a construction graph. The construction graph is a completely connected graph, where nodes in the graph represent the problem components and the edges represent the transitions between the components. Ants move on the construction graph to generate a solution. They lay a chemical substance, called pheromone, on the edges between the nodes of the graph as they move along. The amount of pheromone deposited on the edges is a function of the quality of the solution that is produced. An ant's solution construction consists of transitions from node to node in a step-by-step manner. These transitions are determined by a probabilistic selection rule, based on the value of the pheromones deposited on the edges between the nodes by other ants. So, using the information stored in the pheromone intensity, ants traverse a path in the construction graph. This path is a solution to the discrete optimization problem. Over a period of time, the path that corresponds to the optimal solution for the optimization problem gets a high pheromone deposition. Any ant traversing the construction graph at this point will choose this path. In addition to pheromone intensity, some problem-specific local heuristics are also used to guide the ants through the construction graph.
ACO has been successfully applied to a large number of combinatorial optimization problems, including travelling salesman problems; vehicle routing problems; and quadratic assignment problem. ACO also has been applied successfully to the scheduling problems, such as single machine problems; flow shop problems; and graph coloring problems.
(Panigrahi, B.K., Computational Intelligence in Power Engineering, Springer, 2010, pp. 31-32)

 Ant Colony Optimization (new-better)

Ant colony optimization (ACO) [36] is one of the most recent techniques for approximate optimization. The inspiring source of ACO algorithms are real ant colonies. More specifically, ACO is inspired by the ants' foraging behavior. At the core of this behavior is the indirect communication between the ants by means of chemical pheromone trails, which enables them to find short paths between their nest and food sources. This characteristic of real ant colonies is exploited in ACO algorithms in order to solve, for example, discrete optimization problems.
Depending on the point of view, ACO algorithms may belong to different classes of approximate algorithms. Seen from the artificial intelligence (AI) perspective, ACO algorithms are one of the most successful strands of swarm intelligence [16] and [17]. The goal of swarm intelligence is the design of intelligent multi-agent systems by taking inspiration from the collective behavior of social insects such as ants, termites, bees, wasps, and other animal societies such as flocks of birds or fish schools. Examples of "swarm intelligent" algorithms other than ACO are those for clustering and data mining inspired by ants' cemetery building behavior [55] and [63], those for dynamic task allocation inspired by the behavior of wasp colonies [22], and particle swarm optimization [58].
Seen from the operations research (OR) perspective, ACO algorithms belong to the class of metaheuristics [13], [47] and [56]. The term metaheuristic, first introduced in [46], derives from the composition of two Greek words. Heuristic derives from the verb heuriskein (ευρισκειν) which means "to find", while the suffix meta means "beyond, in an upper level". Before this term was widely adopted, metaheuristics were often called modern heuristics [81]. In addition to ACO, other algorithms such as evolutionary computation, iterated local search, simulated annealing, and tabu search are often regarded as metaheuristics. For books and surveys on metaheuristics see [13], [47], [56] and [81].
This review is organized as follows. In Section 2 we outline the origins of ACO algorithms. In particular, we present the foraging behavior of real ant colonies and show how this behavior can be transferred into a technical algorithm for discrete optimization. In Section 3 we provide a description of the ACO metaheuristic in more general terms, outline some of the most successful ACO variants nowadays, and list some representative examples of ACO applications. In Section 4, we discuss some important theoretical results. In Section 5, we describe how ACO algorithms can be adapted to continuous optimization. Finally, Section 6 will give examples of a recent successful strand of ACO research, namely the hybridization of ACO algorithms with more classical AI and OR methods. In Section 7 we offer conclusions and an outlook to the future.

The origins of ant colony optimization:

Marco Dorigo and colleagues introduced the first ACO algorithms in the early 1990's [30], [34] and [35]. The development of these algorithms was inspired by the observation of ant colonies. Ants are social insects. They live in colonies and their behavior is governed by the goal of colony survival rather than being focused on the survival of individuals. The behavior that provided the inspiration for ACO is the ants' foraging behavior, and in particular, how ants can find shortest paths between food sources and their nest. When searching for food, ants initially explore the area surrounding their nest in a random manner. While moving, ants leave a chemical pheromone trail on the ground. Ants can smell pheromone. When choosing their way, they tend to choose, in probability, paths marked by strong pheromone concentrations. As soon as an ant finds a food source, it evaluates the quantity and the quality of the food and carries some of it back to the nest. During the return trip, the quantity of pheromone that an ant leaves on the ground may depend on the quantity and quality of the food. The pheromone trails will guide other ants to the food source. It has been shown in [27] that the indirect communication between the ants via pheromone trails, known as stigmergy [49], enables them to find shortest paths between their nest and food sources. This is explained in an idealized setting in Fig. 1.
Fig. 1. An experimental setting that demonstrates the shortest path finding capability of ant colonies. Between the ants' nest and the only food source exist two paths of different lengths. In the four graphics, the pheromone trails are shown as dashed lines whose thickness indicates the trails' strength.
As a first step towards an algorithm for discrete optimization we present in the following a discretized and simplified model of the phenomenon explained in Fig. 1. After presenting the model we will outline the differences between the model and the behavior of real ants. Our model consists of a graph G=(V,E), where V consists of two nodes, namely vs (representing the nest of the ants), and vd (representing the food source). Furthermore, E consists of two links, namely e1 and e2, between vs and vd. To e1 we assign a length of l1, and to e2 a length of l2 such that l2>l1. In other words, e1 represents the short path between vs and vd, and e2 represents the long path. Real ants deposit pheromone on the paths on which they move. Thus, the chemical pheromone trails are modeled as follows. We introduce an artificial pheromone value τi for each of the two links ei, i=1,2. Such a value indicates the strength of the pheromone trail on the corresponding path. Finally, we introduce na artificial ants. Each ant behaves as follows: Starting from vs (i.e., the nest), an ant chooses with probability

(1)    pi = τi / (τ1 + τ2),    i = 1, 2

between path e1 and path e2 for reaching the food source vd. Obviously, if τ1>τ2, the probability of choosing e1 is higher, and vice versa. For returning from vd to vs, an ant uses the same path as it chose to reach vd, and it changes the artificial pheromone value associated to the used edge. More in detail, having chosen edge ei an ant changes the artificial pheromone value τi as follows:

(2)    τi ← τi + Q / li

where the positive constant Q is a parameter of the model. In other words, the amount of artificial pheromone that is added depends on the length of the chosen path: the shorter the path, the higher the amount of added pheromone.
The foraging of an ant colony is in this model iteratively simulated as follows: At each step (or iteration) all the ants are initially placed in node vs. Then, each ant moves from vs to vd as outlined above. As mentioned in the caption of Fig. 1(d), in nature the deposited pheromone is subject to an evaporation over time. We simulate this pheromone evaporation in the artificial model as follows:

(3)    τi ← (1 − ρ) · τi,    i = 1, 2

The parameter ρ∈(0,1] is a parameter that regulates the pheromone evaporation. Finally, all ants conduct their return trip and reinforce their chosen path as outlined above.
We implemented this system and conducted simulations with the following settings: l1=1, l2=2, Q=1. The two pheromone values were initialized to 0.5 each. Note that in our artificial system we cannot start with artificial pheromone values of 0. This would lead to a division by 0 in Eq. (1). The results of our simulations are shown in Fig. 2. They clearly show that over time the artificial colony of ants converges to the short path, i.e., after some time all ants use the short path. In the case of 10 ants (i.e., na=10, Fig. 2(a)) the random fluctuations are bigger than in the case of 100 ants (Fig. 2(b)). This indicates that the shortest path finding capability of ant colonies results from a cooperation between the ants.
Fig. 2. Results of 100 independent runs (error bars show the standard deviation for each 5th iteration). The x-axis shows the iterations, and the y-axis the percentage of the ants using the short path.
The main differences between the behavior of the real ants and the behavior of the artificial ants in our model are as follows:
(1) While real ants move in their environment in an asynchronous way, the artificial ants are synchronized, i.e., at each iteration of the simulated system, each of the artificial ants moves from the nest to the food source and follows the same path back.
(2) While real ants leave pheromone on the ground whenever they move, artificial ants only deposit artificial pheromone on their way back to the nest.
(3) The foraging behavior of real ants is based on an implicit evaluation of a solution (i.e., a path from the nest to the food source). By implicit solution evaluation we mean the fact that shorter paths will be completed earlier than longer ones, and therefore they will receive pheromone reinforcement more quickly. In contrast, the artificial ants evaluate a solution with respect to some quality measure which is used to determine the strength of the pheromone reinforcement that the ants perform during their return trip to the nest.
(Physics of Life Reviews, Volume 2, Issue 4, December 2005, Pages 354–357)
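The two-path model described above (Eqs. 1-3) is simple enough to simulate directly. The sketch below reproduces it with the parameter values quoted in the excerpt (l1=1, l2=2, Q=1, pheromones initialized to 0.5 each); the evaporation rate ρ is an assumed value, since the excerpt does not state it, and the output simply reports the fraction of ants choosing the short path over the iterations.

import random

# Two-path ant model from the passage above: Eq. (1) path choice,
# Eq. (2) pheromone reinforcement on the return trip, Eq. (3) evaporation.
l = [1.0, 2.0]          # lengths of e1 (short path) and e2 (long path)
Q = 1.0                 # reinforcement constant, as in the excerpt
rho = 0.1               # evaporation rate (assumption; not given in the excerpt)
n_ants = 100
iterations = 50

random.seed(0)
tau = [0.5, 0.5]        # artificial pheromone on e1 and e2, initialized as in the excerpt

for it in range(1, iterations + 1):
    # Eq. (1): each ant picks a path with probability proportional to its pheromone
    choices = [0 if random.random() < tau[0] / (tau[0] + tau[1]) else 1
               for _ in range(n_ants)]
    # Eq. (3): pheromone evaporation
    tau = [(1.0 - rho) * t for t in tau]
    # Eq. (2): each ant reinforces the path it used, by Q divided by the path length
    for i in choices:
        tau[i] += Q / l[i]
    if it % 10 == 0:
        short_share = choices.count(0) / n_ants
        print(f"iteration {it:3d}: {short_share:.0%} of ants on the short path")

# With these settings the colony quickly concentrates on the short path,
# mirroring the convergence behavior reported for Fig. 2 in the review.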