Archive for the 'Uncategorized' category

An Introduction to Patent Monetization Resources For Corporations and Entrepreneurs

Jul 14 2020 Published under Uncategorized

For corporations and entrepreneurs seeking to monetize their un- or under-utilized IP rights for the first time, it can be difficult to know where to begin. The patent monetization market is not yet mature and, as with other emerging marketplaces, there are no established methodologies and few experts to guide owners through the process. Today, as many as 17 different business models are in use. More will likely spring up as the market continues to evolve, even as some of the current models fall away. With such a range of options, it is not surprising that those seeking to sell their patent rights may be confused about which path to take. This article provides an overview of the ways corporate and individual IP owners can most effectively monetize their rights in today's market. The models discussed here were chosen because they are currently the most common. Significantly, given the great variability in patents and the individual needs of IP owners, the best model for a particular person or organization might actually be one that is not discussed here. Nonetheless, it is hoped that after reading this, a corporation or entrepreneur seeking to sell their rights for the first time will be better able to understand and act on the opportunities and challenges present in the patent monetization market.

Thinking of Selling a Patent Directly to a Corporation Without an Intermediary? Forget About It

Most IP owners assume it is possible to sell their rights directly to a company that plays, or seeks to play, in the product or technology space covered by the patent. This is rarely the case. When I was a senior attorney at a consumer products company, it was corporate policy to reject all unsolicited offers to purchase or license patents. An owner therefore stood no chance of selling their rights to my company. This absolute prohibition on unsolicited ideas is not the policy at all companies but, in truth, few companies today actively seek to acquire products and technology from outside sources (although this is starting to change with the drive toward open innovation). Thus, even if a patent is a perfect fit for a company's offerings, most organizations will prefer to pass on a purchase opportunity because external acquisition is not part of their technology development model. Most patent owners therefore cannot hope to sell their rights directly to a corporation, because corporations are generally not in the business of buying patents, and specifically not from individual owners.

Aggregators: Buyers of Patents if a Patent Owner Can Get a Foot in Their Door

In recent years, companies have emerged whose business models center on buying patents held by others. Well-known aggregators today include Intellectual Ventures, RPX and Allied Security Trust. Each of these companies has a different reason for acquiring patents, but each can serve as a great resource for owners seeking to sell their IP rights in certain technology areas. Nonetheless, there are many more patent owners seeking to sell their rights than there are aggregator buying opportunities. As a result, if an owner receives a "no," how does he know whether it is because his patent is worth nothing to the aggregator, or because he did not get his rights in front of the right person at the aggregator? For most IP owners, especially those participating in the monetization market for the first time, patent aggregators will not be a likely direct purchaser of their rights.

Brokers: Facilitators of Patent Sales, For a Price

Brokers such as ThinkFire, IPotential and IP Transactions Group can assist IP owners in presenting their patent to likely buyers, the most likely of which are patent aggregators, non-practicing entities ("NPEs") and, sometimes, corporations. By leveraging their relationships and reputations, brokers effectively serve as "filters" for potential patent acquirers, streamlining and improving the quality of patent buying opportunities. Put simply, patent buyers trust their brokers to separate the wheat from the chaff, making it easier to identify and act on good buying opportunities. A broker who is trusted by a patent buyer can thus present a buying opportunity that the buyer would not have given a second glance had the same patent been offered outside the broker-buyer relationship. There is a substantial cost to hiring a broker, however: typically about 25% of the total sale price. Patent brokers also require exclusivity. Thus, when a patent owner selects a particular broker to represent him in the sale, he must trust that the broker will find the best deal. I nonetheless believe that the knowledge and expertise of a good broker can yield a final purchase price that more than justifies the fee. In particular, the best brokers maintain a large network of potential purchasers, including aggregators, NPEs and, in some cases, corporations that have expressed an interest in buying third-party IP rights.

I believe such broad networks serve a critical function in improving the efficiency of the monetization market, in part by raising the final purchase price. When a patent is offered through a quality broker, the broker will ensure that each party participating in the process knows who else is being offered the opportunity. Such transparency can increase the final purchase price when one potential purchaser seeks to ensure that another does not acquire the same right. For example, a corporation might increase its offer to prevent an NPE from obtaining a patent that could be asserted against it. This dynamic means that those most interested in acquiring the patent will bring their best offer to the table, which should improve the final price paid. A further benefit of selling through a good broker is that they will typically conduct a market analysis of the rights to set a rationally based asking price. Specifically, the broker will set the price based on what comparable patents have sold for in the past. These figures are normally not public, so a broker with several sales under his belt will likely set a more accurate initial price because he is privy to information others are not. Notably, even an experienced broker might misjudge the likely floor price, but when the patent is offered to many likely buyers, the market will typically reset the price to one more acceptable to potential buyers.

Beware of Finders Who Say They Are Brokers

A significant problem is that many people who hold themselves out as patent brokers are not "brokers" at all. Rather, they are "finders" for aggregators or other buyers of patents such as NPEs (but likely not corporations). Like regular brokers, these finders maintain relationships with likely buyers. When accepting a patent for sale, the finder likely already knows whether its contact will purchase it. In this scenario, the finder does little to earn his 25% fee other than maintain a relationship with the ultimate purchaser. Moreover, many of these finders "double dip," collecting a fee from the purchaser for bringing the opportunity to them rather than to another potential buyer. The finder thus holds divided loyalties: should he maximize the price obtained for a client he might never see again, or keep the price reasonable so as not to sour the relationship with a buyer to whom he might bring several opportunities each year? Clearly, this scenario is rife with questionable ethics, but the reality of the current monetization market is that no license is required to call oneself a "patent broker," and the rule is definitely "buyer beware." As things stand in today's unregulated broker market, the best way to find a quality patent broker is to seek referrals from someone who understands the market and/or has successfully sold patents through a broker in the past.

Patent Auctions: Selling in the Open to the Highest Bidder

The final common vehicle for selling patent rights is the public auction. Today, the most prevalent auction is conducted by Ocean Tomo, which currently holds two auctions each year. Ocean Tomo is very selective about which patents it accepts into each auction, which limits the ability of many patent owners to participate in this model. Ocean Tomo collects a fee from both the seller and the buyer; my understanding is that the net fee paid to the auction house amounts to approximately 25%. While I have not personally been involved in an auction, I have heard mixed reports from people who have participated as both buyers and sellers. My sense is that an auction allows one to sell a patent in a transparent setting where the price is set by competitive bidding. This can be good when a patent is desired by multiple parties who can be influenced by the "heat" of a public auction to increase their bids, resulting in a higher price for the seller. In my view, one downside of the open auction process is that all participants know the price being offered, which can lead to a lower final sale price if a patent does not generate excitement. This view was borne out in the most recent (April 2009) Ocean Tomo auction, which was almost universally considered a failure. Buyers were lacking and, as a result, few patents sold, and the tenor of the auction itself was said to be very quiet and unexcited. This lack of enthusiasm no doubt reduced the overall success of the auction. In contrast, in a private auction, such as that effectively created when a quality broker offers a patent to a large network of potential buyers, the lack of transparency can produce a higher final price, because participants know who has been given the opportunity to purchase but not the amounts offered (if any). A further possible downside to a public auction is that one can only sell to someone who shows up to bid. With a broker-conducted private auction, by contrast, someone who may not be actively seeking to buy a patent at that moment will still be presented with the opportunity. Thus, the pool of potential buyers can be expanded through the use of a broker.

It's as Clear as Mud Now, Right?

As noted at the outset of this article, the IP monetization market is only now emerging as a viable way to obtain value from un- or under-utilized assets. In view of this, most patent owners just starting out will be confused about how to proceed in a manner that maximizes the price obtained. If one owns patent rights and seeks to sell them today, my recommendation is to learn as much as possible about the process. And, as with many business situations, checking references and seeking recommendations from those with experience as patent sellers and counselors to IP owners will be critical to success. Personally, I look forward to the day when more openness exists in the marketplace, so that patent owners can better gauge the quality and qualifications of the participants in the process.


PCM in Textiles

Jul 13 2020 Published under Uncategorized

Phase Change Materials (PCM) in Textiles

In the textile industry, protection from extreme environmental conditions is a crucial requirement. Clothing that protects against water, extreme cold, intense heat, open flame, high voltage, bullets, toxic chemicals, nuclear radiation, biological toxins and the like are all examples.

Such clothing is used as sportswear, defense wear, firefighting wear, bulletproof jackets and other professional wear. Textile products can be made more comfortable when the properties of the textile materials adjust to all types of environments.

At present, Phase Change Materials (PCMs) are one such intelligent material fulfilling this requirement. A PCM absorbs, stores or releases heat in response to changes in temperature and is increasingly applied in the manufacture of smart textiles.

Phase Change Materials

'Phase change' is the process of going from one state to another, e.g. from solid to liquid. Any material that undergoes this process is called a Phase Change Material (PCM).

Such materials store, release or absorb heat as they oscillate between solid and liquid form, releasing heat as they transform to a solid state and absorbing it as they return to a liquid state. There are three basic phases of matter: solid, liquid and gas; others such as crystalline, colloid, glassy, amorphous and plasma phases are also considered to exist.

This fundamental phenomenon was initially developed and used in building space suits for astronauts in the US Space Program. These suits kept the astronauts warm in the black void of space and cool in the solar glare. Phase Change Materials are compounds which melt and solidify at specific temperatures and are correspondingly able to store or release large amounts of energy.

The storage of thermal energy by changing the phase of a material at a constant temperature, e.g. from liquid to solid, is classified as 'latent heat'. When a PCM undergoes a phase change, a large amount of energy is transferred. The most significant characteristic of latent heat is that it involves the transfer of much larger amounts of energy than sensible heat transfer.

Quite a few of these PCMs change phase within a temperature range just above and below human skin temperature. This characteristic is used to make protective all-season outfits and clothing for abruptly changing environments. Fibre, fabric and foam with built-in PCMs store the body's warmth and release it back as the body requires it. Since the phase change process is dynamic, the materials continually shift from solid to liquid and back according to the physical movement of the body and the outside temperature. Furthermore, although Phase Change Materials are used continuously, they never get used up.

The Phase Change Materials used in textiles are typically waxes with the distinctive capacity to absorb and release heat energy without a change in temperature. These waxes include eicosane, octadecane, nonadecane, heptadecane and hexadecane. Each has a different freezing and melting point, and when mixed in a microcapsule they absorb and release heat energy so as to maintain a temperature range of 30-34°C, which is very comfortable for the body.
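As a rough illustration, that selection logic can be sketched in a few lines of Python. The melting points below are approximate literature values for the pure paraffins, not vendor specifications, and the 30-34°C comfort band is the figure quoted above:

```python
# Approximate melting points (deg C) of the paraffin waxes named above.
# Illustrative literature values only, not vendor specifications.
PARAFFIN_MELTING_POINTS_C = {
    "hexadecane": 18,
    "heptadecane": 22,
    "octadecane": 28,
    "nonadecane": 32,
    "eicosane": 37,
}

def candidates_for_band(lo_c, hi_c):
    """Return waxes whose melting points fall inside a target comfort band."""
    return [name for name, mp in PARAFFIN_MELTING_POINTS_C.items()
            if lo_c <= mp <= hi_c]

# Waxes melting inside the 30-34 deg C near-skin comfort band:
print(candidates_for_band(30, 34))  # -> ['nonadecane']
```

In practice, several waxes are blended and microencapsulated together so that the mixture's melting range, rather than a single pure-compound melting point, sits inside the comfort band.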

The amount of heat absorbed by a PCM during an actual phase change can be compared with the amount absorbed in an ordinary heating process by taking water as the PCM. Melting ice into water absorbs a latent heat of nearly 335 J/g. Heating the resulting water absorbs a sensible heat of only about 4 J/g for each degree of temperature rise. Hence, the latent heat absorbed in the phase change from ice into water is roughly 80 to 100 times greater than the sensible heat absorbed per degree of ordinary heating.
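A one-line check of that ratio, using the figures quoted above (a minimal sketch; the 4 J/g figure is the text's rounding of water's specific heat of about 4.18 J/(g·K)):

```python
latent_fusion = 335.0        # J/g, ice -> water at 0 deg C (from the text)
sensible_per_degree = 4.0    # J/(g.K), rounded specific heat of liquid water

ratio = latent_fusion / sensible_per_degree
print(f"Latent heat of fusion ~ {ratio:.0f}x the sensible heat per degree")
# -> ~84x, i.e. roughly two orders of magnitude
```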

How are PCMs incorporated into fabrics?

Microencapsulated PCMs can be combined with woven, non-woven or knitted fabrics.

The capsules can be added to the fabric in various ways such as:

Microcapsules: Microcapsules of various shapes (round, square and triangular) are embedded within fibres at the polymer stage. The PCM microcapsules are permanently fixed within the fibre structure during the wet-spinning process of fibre manufacture. Microencapsulation gives the fabrics a softer hand, greater stretch, more breathability and better air permeability.

Matrix coating during the finishing process: The PCM microcapsules are embedded in a coating compound such as acrylic or polyurethane and applied to the fabric. Many coating methods are available, including knife-over-roll, knife-over-air, pad-dry-cure, gravure, dip coating and transfer coating.

Foam dispersion: Microcapsules are mixed into a water-blown polyurethane foam mix and these foams are applied to a fabric in a lamination procedure, where the water is removed from the system by the drying process.

Body and clothing systems

The required thermal insulation of clothing systems depends mainly on physical activity and on surrounding conditions such as temperature and relative humidity. The amount of heat produced by a human depends strongly on physical activity and can range from about 100 W at rest to over 1,000 W during maximum physical exertion.
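To make the scale of that range concrete, here is a minimal steady-state sketch using the conventional clo unit of clothing insulation (1 clo = 0.155 m²·K/W). The 33°C mean skin temperature and 1.8 m² body surface area are assumed textbook values, not figures from this article, and the model ignores evaporative and respiratory heat loss:

```python
CLO_IN_SI = 0.155  # m^2.K/W, the definition of one clo of insulation

def required_insulation_clo(ambient_c, heat_output_w,
                            skin_c=33.0, body_area_m2=1.8):
    """Insulation needed so dry heat loss balances metabolic heat output."""
    heat_flux = heat_output_w / body_area_m2              # W/m^2
    return (skin_c - ambient_c) / (CLO_IN_SI * heat_flux)  # clo

print(required_insulation_clo(0, 100))   # at rest, 0 deg C    -> ~3.8 clo
print(required_insulation_clo(0, 1000))  # peak effort, 0 deg C -> ~0.4 clo
```

The ten-fold drop in required insulation between rest and peak effort is exactly why a fixed-insulation garment cannot stay comfortable across a whole winter sports session.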

Especially during the cooler seasons (around 0°C), the recommended thermal insulation is defined so that the body stays adequately warm when resting. At extreme activity, which is often the case in winter sports, the body temperature rises with the increased heat production. To keep this increase within a certain limit, the body perspires in order to shed energy by evaporative cooling. If the thermal insulation of the clothing is reduced during physical activity, part of the generated heat can be removed by convection, so the body does not need to perspire as much.

The quality of a garment's insulation against heat and cold is largely governed by the thickness and density of its component fabrics: high thickness and low density make for better insulation. In many cases, thermal insulation is also provided by air gaps between the garment layers.

However, the external temperature also influences the effectiveness of the insulation. The more extreme the temperature, be it very high or very low, the less effective the insulation becomes. Thus, a garment designed for its capability to protect against heat or cold is chosen by its wearer on the expectation of the climate in which the garment is to be worn.

However, a garment made from a thick fabric is heavier and restricts the wearer's freedom of movement. Clearly, then, a garment made from an intelligent fabric whose nature changes according to the external temperature can offer superior protection. Such a garment must, however, remain comfortable for the wearer.

Temperature change effect of PCMs

PCM microcapsules can create small, transitory heating and cooling effects in garment layers when the temperature of the layers reaches the PCM transition temperature. The effect of phase change materials on the thermal comfort of protective clothing systems is likely to be greatest when the wearer repeatedly goes through temperature transients (i.e., back and forth between a warm and a cold environment) or occasionally touches or handles cold objects. The temperature of the PCM garment layers must vary frequently for the buffering effect to continue.

The most obvious example is water, which changes into ice at 0°C and steam at 100°C. Many products that change phase at or near body temperature are now being integrated into fibres, laminates and coating substrates, supporting the equilibrium of body temperature and keeping it more constant. This benefits athletes in extreme conditions and people engaged in extreme sports such as mountaineering and trekking, and it is also finding industrial applications where people are very mobile, for example moving in and out of cool rooms.

Effects on fabrics

When the solidified PCM is heated to its melting point, it absorbs heat energy as it moves from a solid to a liquid state. This phase change produces a short-term cooling effect in the clothing layers. The heat energy may come from the body or from a warm environment. Once the PCM has fully melted, the storage of heat stops.

If the PCM garment is worn in a cold environment where the temperature is below the PCM's freezing point, and the fabric temperature drops below the transition temperature, the microencapsulated liquid PCM returns to a solid state, releasing heat energy and producing a momentary warming effect. The developers assert that this heat exchange creates a buffering effect in clothing, minimizing changes in skin temperature and prolonging the wearer's thermal comfort.

The clothing layer(s) containing PCMs must pass through the transition temperature range before the PCMs change phase and either release or absorb heat. Therefore, the wearer has to make some effort for the temperature of the PCM fabric to change. PCM effects are transient phenomena; they have no effect in a steady-state thermal environment.

Active microclimate cooling systems need batteries, pumps, circulating fluids and modern control devices to give satisfactory body cooling, but their performance can be adjusted and sustained for long periods of time. They are, however, costly and complicated. Present passive microclimate devices use a latent phase change: liquid-to-gas evaporation of water (Hydroweave), a solid-to-liquid phase shift in a cornstarch/water gel, or a paraffin contained in plastic bladders.

The liquid-evaporation garment is cheaper, but gives only minimal or short-term cooling in the highly humid environment found inside protective clothing, and must be re-wetted to revitalize the garment for reuse. The water/starch gel-type cooling garment is presently preferred by the military and can offer satisfactory, long-lasting cooling near 32°F (0°C), but it can feel very cold against the skin and needs a very cold freezer (5°F) to fully recharge. When fully charged, its gel PCMs are somewhat rigid blocks, and the garment has limited breathability.

Paraffin PCM garments are comparatively cheap, but their plastic bladders can split, leaking their contents and creating a serious fire hazard. In addition, their paraffin PCM melts at about 65°F (18°C) and must be recharged at temperatures below 50°F (10°C) in a refrigerator or ice chest. Their rate of cooling also declines over time, because paraffin blocks are thermal insulators and limit the heat that can be transmitted into or out of them. The plastic bladders also severely limit airflow and breathability, reducing comfort.

Uses of PCM

Automotive textiles

The scientific principle of temperature control by PCMs has been deployed in various ways in textile manufacturing. In summer, the temperature inside the passenger compartment of a car can rise significantly when it is parked outside. Many cars are equipped with air conditioning to regulate the interior temperature while driving, but providing adequate cooling capacity requires a great deal of energy. Applying Phase Change Material technology in the automotive interior could therefore offer energy savings as well as improved thermal comfort.

Apparel: active wear

Active wear is expected to provide a thermal equilibrium between the heat produced by the body while performing a sport and the heat released into the environment. Normal active wear garments do not always satisfy these needs. The heat produced by the body in strenuous activity is often not discharged into the environment in the required amount, resulting in thermal stress. On the other hand, in rest periods between activities the body produces less heat; if heat release stays the same, hypothermia can occur. Applying PCMs in clothing helps regulate these thermal shocks, and thus the thermal stress on the wearer, and helps increase his or her working efficiency under high stress.

Lifestyle apparel – elegant fleece vests, men’s and women’s hats, gloves and rainwear.

Outdoor sports – apparel jackets and jacket linings, boots, golf shoes, running shoes, socks and ski and snowboard gloves.

From their original uses in space suits and gloves, phase change materials have also moved into consumer products.

Aerospace textiles

Phase Change Materials used in current consumer products were primarily developed for space suits and gloves, to protect astronauts from extreme temperature fluctuations while performing extra-vehicular activities in space.

The usefulness of the insulation stems from microencapsulated Phase Change Materials (micro-PCMs) originally created to warm the gloved hands of spacewalking astronauts. The materials proved ideal as a glove liner, providing support during the temperature extremes of the space environment.

Medical textiles

Textiles containing Phase Change Materials (PCMs) could soon find uses in the medical sector: raising the thermo-physical comfort of surgical clothing such as gowns, caps and gloves, and of bedding products such as mattress covers, sheets and blankets. Such products help keep the patient warm enough during an operation by providing insulation tailored to the body's temperature.

Other uses of PCM

Phase Change Materials are currently being used in textiles for the extremities: gloves, boots, hats, etc. Different PCMs can be selected for different uses. For example, the temperature of the skin near the torso is about 33°C (91°F), while the skin temperature of the feet is nearly 30-31°C. These PCM materials can be useful down to 16°C, enough to ensure the comfort of someone wearing a ski boot in the snow. They are increasingly applied in body-core protection and will shift into blankets, sleeping bags, mattresses and mattress pads.

PCM Types

Standard phase change materials of this kind are generally a polymer/carrier filled with thermally conductive filler, which changes from a solid to a high-viscosity liquid (or semi-solid) state at a certain transition temperature. These materials conform well to irregular surfaces and possess wetting properties like thermal greases, which considerably decrease the contact resistance at the respective interfaces. Because of this composite structure, phase change materials can withstand the mechanical forces of shock and vibration, safeguarding the die or component from mechanical damage. Moreover, the semi-solid state of these materials at high temperature mitigates issues linked to "pump-out" under thermo-mechanical flexure.

When heated to the targeted transition temperature, the material softens considerably to a near liquid-like state and expands slightly in volume. This volumetric growth causes the thermally conductive material to flow into and replace the microscopic air gaps between the heat sink and the electronic component. With the air gaps between the thermal surfaces filled, the high degree of wetting of the two surfaces lessens the contact resistance.
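The size of that effect can be illustrated with the one-dimensional conduction formula R = t / (kA). The 50-micron gap, 30 mm square footprint and conductivities used here are assumed ballpark figures (air is about 0.026 W/m·K; filled phase change compounds are often quoted at a few W/m·K), not values from this article:

```python
def layer_resistance(thickness_m, k_w_per_mk, area_m2):
    """Thermal resistance R = t / (k * A) of a uniform conductive layer, in K/W."""
    return thickness_m / (k_w_per_mk * area_m2)

area = 0.03 * 0.03   # 30 mm x 30 mm component footprint (assumed)
gap = 50e-6          # 50-micron interface gap (assumed)

r_air = layer_resistance(gap, 0.026, area)  # gap filled with trapped air
r_pcm = layer_resistance(gap, 3.0, area)    # gap filled with a conductive PCM

print(f"air gap: {r_air:.2f} K/W, PCM-filled: {r_pcm:.3f} K/W")
# -> roughly 2.1 K/W vs 0.02 K/W: replacing trapped air dominates the improvement
```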

In general, there are two types of phase change materials:

– Thermally conductive and electrically insulating.

– Electrically conductive.

The main difference between the thermally conductive and the electrically conductive materials is the film or carrier onto which the phase change polymer is coated. With the electrically insulating material, a minimum level of voltage isolation can be achieved.

Analysis of the thermal barrier function of Phase Change Materials in textiles

Producers can now use PCMs to provide thermal comfort in a huge range of garments. But to know how much and what kind of PCM to apply, and how the textile must be modified to make a garment fit for its purpose, it is essential to quantify the effect of the active thermal barrier offered by these materials.

The total thermal capacity of the PCM in a product depends on its specific thermal capacity and its quantity. The required quantity can be estimated by considering the application conditions, the desired thermal effect and its duration, and the thermal capacity of the specific PCM. The structure of the carrier system and the end-use product also affect the thermal efficiency of the PCM, which has to be assessed with respect to material selection and product design.

Prospects for PCMs

The main challenge in developing textile PCM structures is the method of their incorporation. Encapsulating PCMs in a polymeric shell is an obvious choice, but it adds inactive weight to the material. Efficient encapsulation, the core-to-wall ratio, encapsulation yield, stability during application and the incorporation of capsules onto the fabric structure are some of the technological aspects being evaluated.

Though PCMs are being promoted in various types of apparel and related products, the applications in which they can really work are limited. As better test methods are developed for PCMs, makers of PCM materials and garments will have to target more carefully the markets in which their products work well.

Conclusion

Since a huge amount has been invested in research and development in these areas in the developed countries, it is expected that all-season outfits will soon be mass-produced. In Britain, for example, scientists have designed an acrylic fibre incorporating microcapsules containing Phase Change Materials. These fibres have been used to produce lightweight all-season blankets.

Many garment companies in the USA now produce such garments: thermal underwear and socks for the inner layer, knit shirts or coated fleece for the insulating layer, and jackets with PCM interlinings for the outer layer, besides helmets, other headgear and gloves. Such clothing can maintain warm, comfortable temperatures in both weather extremes. There is no doubt that textiles which incorporate PCMs will find their way into many more uses in the near future.


Fertilizer Industry in India Contributes 25 Percent to GDP

Jul 13 2020 Published under Uncategorized

India is fundamentally an agricultural country whose economy depends largely on its agrarian produce. The agricultural sector contributes about 25% of the country's GDP. As a result, the Indian fertilizer industry, one of agriculture's allied industries, has tremendous scope both inside and outside the country.

Today, the Indian fertilizer industry is developing technologically. Indian manufacturers are adopting advanced manufacturing processes to prepare innovative new products for Indian agriculture. India ranks as the third largest producer and exporter of nitrogenous fertilizer.

Growth of Fertilizer Industry in India

The fertilizer industry in India has been meeting the requirements of agriculture since its inception in 1906, when the first fertilizer manufacturing plant was set up in Ranipet, near Chennai. The first two large-sized fertilizer plants followed: the Fertilizers & Chemicals Travancore of India Ltd. (FACT) in Cochin, Kerala, and the Fertilizers Corporation of India (FCI) in Sindri, Bihar. These two were established as foundational fertilizer units to achieve self-sufficiency in foodgrain production. The industry then gained impetus from the Green Revolution in the late sixties, and through the seventies and eighties it witnessed a remarkable boom in fertilizer production.

The tremendous demand for fertilizers has led the country to invest heavily in the public, co-operative and private sectors. At present, India has more than 57 large fertilizer plants manufacturing a wide assortment of fertilizers, including nitrogenous and phosphatic fertilizers, Ammonium Sulphate (AS), Calcium Ammonium Nitrate (CAN), urea, DAP and complex fertilizers. In addition, 64 other small and medium-scale Indian manufacturers produce fertilizers.

Here is a list of some public sector Indian fertilizer companies:

– Madras Fertilizers Limited

– National Fertilizers Limited

– Hindustan Fertilizer Corporation Limited

– Steel Authority Of India Limited

– Fertilizers & Chemicals Travancore Limited

– Rashtriya Chemicals & Fertilizers Limited

– Paradeep Phosphates Limited

– Pyrites, Phosphates & Chemicals Limited

– Neyveli Lignite Corporation Limited

Some of the major private sector fertilizer companies in India are:

– Balaji Fertilizers Private Limited

– Ajay Farm-Chem Private Limited

– Chambal Fertilizers & Chemicals Limited

– Bharat Fertilizer Industries Limited

– Gujarat Narmada Valley Fertilizer Co. Limited

– Southern PetroChemical Industries Corporation Limited

– Godavari Fertilizers & Chemical Limited

– Shri Amba Fertilizers (I) Private Limited

– Gujarat State Fertilizers & Chemicals Limited

– Maharashtra Agro Industrial Development Corporation

– Mangalore Chemicals & Fertilizers Limited

The speedy growth in fertilizer production is encouraging Indian manufacturers to become exporters as well, helping them create a lasting impression on global consumers.


Halogen-Free Cables – Green, Safe, and Healthy

Jul 13 2020 Published under Uncategorized

Perhaps more so today than ever before, the issues of health, safety, and environmental impact are top priorities for manufacturers of all types. The wire and cable industry is making significant advances in products and standards in order to satisfy these important needs. Halogen-free cables are an example of this type of industry advancement.

Halogen-free cables are manufactured without the reactive elements of the halogen family: chlorine, fluorine, bromine, iodine, and astatine. Halogenated compounds are effective flame retardants, so they have traditionally been used in insulation materials. However, insulation can still catch fire, and when it does the results can be catastrophic: while very stable in their natural state, halogens create highly toxic and corrosive fumes when burned.

The gases produced by burning halogens form an acid when mixed with even small amounts of water, like the moisture found in lungs, eyes, and throats. These chemical reactions can disorient and injure people who are trying to escape a blaze. Clearly, this creates a hazardous situation wherever an accidental fire can occur. In addition, halogen fumes from even minor fires can result in thousands or sometimes millions of dollars in corrosion damage to computer equipment and circuits.

The smoke and fumes are so harmful that governments and municipalities are moving to introduce stricter halogen regulations. Many European and other countries have already banned the use of halogenated cables in construction. Due to these increasing regulations, more and more manufacturers are switching to low-toxicity, halogen-free options.

Halogen-free cables offer the added benefit of being more environmentally friendly. They emit considerably lower levels of carbon monoxide (CO) when burned; some manufacturers claim several-fold reductions in carbon output overall. Switching to these cables will help minimize your company's carbon footprint and its effect on global climate change. Additionally, halogen-free cables are low-smoke products because they produce far fewer airborne particles.

Halogen-free cables are ideal for applications that require high performance and reliability along with outstanding safety, such as public transportation or busy locations like airports and shopping malls. The blend of low pollution, toxicity, and corrosion levels with outstanding product quality makes halogen-free cables an option that anyone purchasing wire and cable products should consider.


The City of Dneprodzerzhinsk, Dnepropetrovsk, Ukraine

Jul 13 2020 Published under Uncategorized

The territory where the city of Dneprodzerzhinsk stands today contains five sites in Ukraine that were occupied by people during the Paleolithic epoch (100-40 millennia BC). During the Kievan Rus period, the territory of the future Dneprodzerzhinsk lay on the important trade route "from the Varangians to the Greeks." According to legend, the Ukrainian Cossacks played an important role in the city's formation: the villages of Romankovo and Kamenskoye, on whose site Dneprodzerzhinsk now stands, were founded by the Zaporozhye Cossacks. The first written mention of the village of Kamenskoye is dated 1750. During the New Sich period (1734-1775), Kamenskoye was part of the Kodak district of the Zaporozhian Host.

The construction (1887-1889) of the Dneprovsky metal works by Polish, Belgian and French shareholders, on village land purchased from the rural association, led to fast growth of Kamenskoye. At the end of the 19th and beginning of the 20th century, settlements for factory employees and workers, the Upper and Lower colonies, grew up. In 1896 there were 18 thousand inhabitants in Kamenskoye, and by 1913 the village had grown to 40,407. In June 1917 the Provisional Government gave the village of Kamenskoye the status of a city. On February 1, 1936, Kamenskoye was renamed Dneprodzerzhinsk, and in 1938 the villages of Romankovo and Trituznoe were incorporated into it. During the industrialisation of 1930-1950, boiler-welding, nitrogen-fertilizer, cement and concrete factories, a garment factory, a car-building plant and a number of other enterprises were constructed in Dneprodzerzhinsk.

Before the Great Patriotic War (World War II), Dneprodzerzhinsk had approximately 148,000 inhabitants. The war became a heartrending experience for the city. About 18 thousand citizens went to fight at the front, and about 11 thousand served in the front lines. During the German occupation of the city, which lasted 26 months, the occupiers shot 1,069 citizens, and 2,999 people were deported to Germany for forced labour. On October 25, 1943, the city was liberated by the Soviet armies. Only 26 days after the liberation, the first melt was produced at the Dneprovsky metallurgical complex. The city's recovery was completed in 1950.

In the post-war period the city's industrial complex was replenished with new factories, and the Dneprodzerzhinsk hydroelectric power station was put into operation. From 1950 to 1980 the modern architectural shape of the city was formed, with new buildings erected, especially on the left bank of the Dnieper River. In 1970 the city was awarded the Order of the Red Labour Banner. Under the new Constitution of Ukraine, Vasily Jakovlevich Shvets was elected in 1996 as the city's first mayor.

Dneprodzerzhinsk is the third largest city in the region, after Dnepropetrovsk and Krivoi Rog. In its geography, history of economic development and industrial profile it has much in common with Dnepropetrovsk. Railway and automobile roads run between the two cities and along the rivers connecting them, linking the settlements whose inhabitants largely work at the enterprises of both cities.

The main industries of Dneprodzerzhinsk are:

1. Metallurgy – the Dneprovsky metallurgical complex named after F. E. Dzerzhinsky, an open joint-stock company that is one of the largest full-cycle metallurgical enterprises in Ukraine, with output of 5,600 thousand tons of agglomerate, 4,350 thousand tons of pig iron, 3,850 thousand tons of steel and 3,829 thousand tons of rolled products. The company is Ukraine's only supplier of rolled axle blanks for railway transport, Larsen-type sheet piles, contact rails for underground railways, steel grinding balls and tube billets.

2. Machine building – Dneprovagonmash (named after the newspaper Pravda), one of the leading enterprises in Ukraine and the CIS for the design and manufacture of freight cars for main-line railways and various industries.

3. Chemicals and coke chemistry – a chemical industrial complex, two coke-chemistry plants, and DneproAzot.

4. Building materials – a cement works (Dneprotsement) and a precast concrete factory.

5. A number of food-processing enterprises.

6. A port on the Dnieper River, a railway junction, and road transport services.

Dneprodzerzhinsk has 47 large industrial enterprises and 1,188 small and medium-sized businesses. The structure of the city's industrial production is dominated by metallurgy and metal processing (67%), followed by chemicals (18%), coke production (5%), mechanical engineering (2%), and the building materials, electric power, food, light and other industries. The major products are pig iron, steel, rolled metal, cement, coke, mineral fertilizers, electric power, and main-line and industrial railcars. In the past few years, bus production has also been introduced.

There are also five design and research organizations. One of the most important is the Ukrainian State Research and Design Institute of the Nitrogen Industry and Products of Organic Synthesis, which carries out modernization of operations and the design of new manufacturing processes in the chemical and allied industries. Over 100 units have been constructed under the institute's projects in the CIS and abroad, 31 of them in Ukraine. The State design institute "Dniprodzerginsk Civil Project", with 55 years of experience producing design documentation for city construction, also operates here.

Among the city's educational institutions are the Dneprodzerzhinsk State Technical University; industrial, metallurgical, power, chemical-technological, and trade and economic technical schools; and medical and musical schools. The Dneprodzerzhinsk State Technical University was founded on April 25, 1920, by decision of the Ekaterinoslavsky Provincial Department of Vocational Training in the city of Kamenskoye (now Dneprodzerzhinsk), home to one of the largest metal works in the south of the country. The university has passed through stages of formation, development and flourishing. In 1920 the Dneprodzerzhinsk Technical School gained the right to graduate engineers of metallurgical specialties.

By decision of the Supreme Council of the National Economy of the USSR on May 24, 1930, the Evening Metallurgical Institute was founded. In the early 1930s the Kamenskoye Evening Metallurgical Institute became a distinctive educational-industrial complex of all-Union importance, training highly skilled technical personnel for the iron and steel industry of Ukraine and the Union. The Great Patriotic War interrupted the peaceful work of the university. The most valuable equipment was evacuated to Magnitogorsk and other cities in the Ural Mountains, along with many of the university's teachers and employees, while others left to defend the Motherland. The losses endured by the institute were very large. After the city's liberation, major work of renewal and revival began; gradually the metallurgical institute was restored, and it has continued its fruitful activity ever since.

In 1960 the factory-technical college and the Dneprodzerzhinsk Evening Metallurgical Institute named after M. I. Arsenichev were reorganized. The factory-technical college system reflected the changes occurring in the industrial development of Dneprodzerzhinsk. In particular, the development of chemical enterprises prompted enrolment in new specialties: chemical technology of solid fuel, and automation and complex mechanization of chemical industry enterprises. The specializations offered by the factory-technical college system continued to change, but the metallurgical profile still prevailed. Further development of the institution brought new construction: in 1967 a new teaching and laboratory building was erected, in 1968 a student hostel was put into operation, and the institution was reorganized. New specialties appeared: metallurgy and technology of welding production; technology of inorganic substances and chemical fertilizers; electric drive and automation of production; technology of mechanical engineering, metal-cutting machine tools and tooling; and others. The institution thus effectively lost its purely metallurgical profile.

In May 1967 the Dneprodzerzhinsk factory-technical college was reorganized into the Industrial Institute named after M. I. Arsenichev. About 5,000 students studied in its five faculties: metallurgical, technological, chemical-technological, evening and technical. Engineering training at the institute was conducted in four directions: metallurgy, chemistry, mechanical engineering and power engineering. The 1960s were characterized by growth in the institute's material base and the emergence of new areas of specialist training and scientific research. The institute's main line of development was its growth as a higher educational institution and a center of science. New management arrived: Vladimir Ivanovich Loginov became rector, and during his almost 25 years in the post, from 1963 to 1988, the institution grew almost threefold. New modern educational buildings (Nos. 13, 15, 16 and 17) were built.

The institution was transformed from a small regional factory-technical college into a large, modern industrial institute of republican and all-Union significance. Entering each new decade, the Dneprodzerzhinsk Industrial Institute overcame new challenges through the efforts of its faculty, students, and employees. In 1970 Dneprodzerzhinsk was awarded the Order of the Red Labour Banner. The Industrial Institute named after M. I. Arsenichev by then had six faculties in its structure, including the metallurgical, chemical-technological, evening and technical faculties. Teaching, educational and scientific work was carried out by 30 departments, staffed by 4 Doctors of Science (professors), 110 Candidates of Science (senior lecturers), and more than 140 teachers without scientific degrees. In 1968 V. I. Loginov founded a large museum of the institution's history, and in the early eighties an Accounting Department was established.

The institute's development continued until the mid-1980s. The foundations laid in the 1960s and 70s for its financial base and for expanding the types of specialist training made it possible, within a few years, to open new specialties and put a new complex into operation. In these years the industrial institute became a leader among the republic's metallurgical higher education institutions: a large, highly skilled faculty was formed; the main directions of teaching, methodological, research and educational work were clearly defined; core departments and faculties took shape; and the whole collective developed its own traditions and customs.

In 1988 Ogurtsov became rector of the university. His comprehensive approach to the institution's pressing problems made it possible to raise the university gradually to a qualitatively new level. On October 27, 1993, the collegium of the Ministry of Education of Ukraine granted the Dneprodzerzhinsk Industrial Institute the status of a State Technical University. The institution then began to open new humanitarian and technical specialties: applied mathematics, jurisprudence (industrial law), machines and apparatus for food production, metallurgy and chemistry of rare and trace metals, and others. In September 1994 an economics faculty was created. Over its history, the technical university has accumulated extensive experience in training highly skilled specialists. Outstanding specialists, teachers and scientists have always worked at the institution, and do so today. For example, the world-famous metallurgist Bardin, academician and at one time vice-president of the USSR Academy of Sciences, worked fruitfully at the institute, as did Professor Andreev (steelmaking), Associate Professor Brilliantov (blast-furnace operation), Associate Professor Poletaev (heat engineering), Professor Tsukanov (head of the power engineering department) and Rubanov (head of the mathematics department).

Nine faculties function within the university: the Metallurgical faculty, the Chemical-technological faculty, the Faculty of Economics and Management, the Faculty of Sociology and Philology, the Faculty of Electronics and Computer Engineering, the Mechanical faculty, the Power faculty, the Extra-mural faculty and the Faculty of Postgraduate Education.


Stethoscope Cover Joins Fight Against MRSA and Spread of Infection

Jul 13 2020 Published under Uncategorized

If you were to name one object associated with doctors, what would it be? Chances are that most of us would name the stethoscope: that trusty instrument that is always wrapped around their necks, a constant companion of doctors and nurses.

It is unusual to go to the doctor's office and not be examined with a stethoscope, which is used to listen to the heart, lungs and blood flow.

However, have you stopped to think about where else that stethoscope has been? Whom else it has touched? Has your doctor cleaned the stethoscope after seeing his or her last patient? And are you (or your doctor) at risk if it has not been cleaned?

Stethoscope – Some “Dirty” Facts

Well, as it turns out, these are not trivial questions. It is also a bit strange, considering that for pretty much every other activity, doctors and nurses take precautions to protect themselves and their patients. They wear fresh disposable examination gloves for each patient and discard them after use. They use fresh paper liners for the examination table, put new disposable tips on their digital thermometers when they take temperatures, and use fresh tips on otoscopes when they examine your ears. All for good reason: they do not want to contaminate the next patient with viruses or microbes from the previous patient.

So, what about the stethoscope? Should similar precautions be taken? The answer is an emphatic yes. In fact, a number of studies have shown that the majority of stethoscopes do carry disease-causing microbes, including the deadly drug-resistant bacterium MRSA. For example, one study showed that 90% of physicians' stethoscopes were contaminated with microbes, while another showed that only a third of healthcare workers cleaned their stethoscopes regularly.

MRSA – A Deadly and Expensive Healthcare Problem

MRSA, short for methicillin-resistant Staphylococcus aureus, is a kind of Staphylococcus aureus ("staph") bacterium that is resistant to certain antibiotics, among them a family of penicillin-related antibiotics that includes methicillin and oxacillin. Almost a third of the population carries staph bacteria on their skin or in their noses, most of the time without any ill effects. In sick patients whose immune systems are already compromised, however, these bacteria can wreak havoc, causing serious illness and even death.

Today, hospitals are a major source of the spread of infections. After all, this is a high-risk environment, with large numbers of sick, immunocompromised patients concentrated in one area, where infections can spread rapidly. Around June 2007, it was estimated that some 2.4% of all hospital patients (or 880,000 patients) had MRSA infections, a staggering number. Considering that MRSA infections have been rising exponentially (from only 2,000 cases in 1993, to some 368,000 cases in 2005, to 880,000 cases in 2007), today this number could well exceed a million cases.

In addition to the risk of serious illness and death for patients (and healthcare providers), this is one of the most expensive problems in today's healthcare. At roughly $15,000 per infection, it costs the US healthcare system roughly $30 billion to treat the 2 million patient infections acquired during hospital stays.
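The cost figure follows directly from the two numbers quoted; a quick sanity check:

```python
infections = 2_000_000       # hospital-acquired infections per year (from the text)
cost_per_infection = 15_000  # US dollars per infection (from the text)

total = infections * cost_per_infection
print(f"${total / 1e9:.0f} billion")  # -> $30 billion
```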

Stethoscope Safety Awareness Growing

It has long been known that stethoscopes can carry disease-causing microbes. In fact, there is significant research evidence going back to 1972 linking stethoscopes to the transmission of infection, and it has been repeatedly emphasized that stethoscopes should be cleaned with alcohol to prevent infection.

However, this is hard to put into practice. Compliance among healthcare workers is poor and, frankly, in busy environments such as trauma centers and crowded emergency rooms, there may be little time to clean them.

Of late, some solutions have appeared in the marketplace. One is a disposable stethoscope cap: a step in the right direction, but it does not go far enough. It covers the chest piece of the stethoscope but leaves roughly 80-90% of the stethoscope's surface, including the tubing, exposed. And when providers wear the stethoscope around their necks all day, that does not provide adequate protection.

Disposable stethoscopes have also been around for some time, specifically for high-risk areas. But at a cost of around $3 they are a somewhat expensive solution, and for that cost their quality (particularly acoustic quality, the key to stethoscope performance) is quite poor.

Disposable Stethoscope Sleeve – A New Weapon for Stethoscope Safety

Now, however, Avossi Medical, a NY-based medical products manufacturer, has come up with a simple and unique solution to this problem: a disposable sleeve or cover for not just the chest piece but the whole stethoscope. The full-coverage stethoscope sleeve, sold under the trade name StethoMitt, is the first of its kind on the market and is made of the familiar non-woven polypropylene used in surgical gowns and masks, which provides a good barrier against fluids and microbes. The device is inexpensive enough to be used as a single-use disposable sleeve and takes only seconds to put on and take off. Best of all, being a full-coverage device, it protects both the healthcare provider and the patient while allowing providers to use their favorite stethoscope without compromising acoustic quality. Avossi is making the product available through an online store and through its marketing representatives.

Stethoscope Sleeves – A Must for Some Areas

Although every healthcare worker using a stethoscope should take the precaution of cleaning it or using a more convenient alternative such as the StethoMitt, such sleeves should be a required item in any trauma center, infectious disease center or emergency room, and for emergency workers.

Perhaps it will not be long before such sleeves are as commonplace as examination gloves!


Monitoring Instruments

Jul 13 2020 Published under Uncategorized

Monitoring is a concept incorporated into machines to extend the physical capabilities of man. It involves automatic control: monitoring equipment adapts to changing circumstances and performs as specified, varying with performance requirements and applications. Many types have been developed, such as air quality monitoring equipment, condition monitoring instruments and environment monitoring equipment, and the list is endless.

Condition monitoring instruments range from offline to online instruments, functioning as data collectors, analyzers, balancers, etc., according to the given function and the machine condition parameters. Usually, online condition monitoring instruments provide entry-level data and allied solutions for production-critical standard machines. Whether the task is condition monitoring or predictive maintenance involving measurement of vibration, bearing condition, signal analysis, balancing, sound intensity, machine diagnosis, trending, RPM, FFT, spectrum or sound power, condition monitoring instruments apply. Industries where they are used include cement, chemicals, fertilizers, petrochemicals, power generation, refining, rubber, turbines, iron and steel, marine, paper and pulp, and many more.

Air quality monitoring holds immense importance in industrial facilities, workplaces, manufacturing units, warehouses, food processing centers, and environmental applications. The air quality monitoring program initiated by India's Central Pollution Control Board (CPCB) in 1984 illustrates how broadly such monitoring can be implemented across a country. In the process, a number of monitoring stations have sprung up nationwide, engaged in assorted activities such as selection of pollutants, measurement methods, and sampling, using specialized air quality monitoring equipment.

Manufacturers across the world have built environment monitoring instruments finely tuned to measure flow, particulates, and other environmental factors. Environment monitoring equipment facilitates consistent surveillance of the environment and detection of any deterioration in it. Detecting humidity, high temperature, corrosive climatic conditions, and more enables power stations and other industries to take corrective action against damage that might otherwise occur to sensitive computers, instrumentation systems, electrical equipment, and the like. Combustion analysers, indoor air quality monitoring instruments, industrial hygiene instruments, lab & healthcare instruments, and ventilation test instruments form a partial list of the many environment monitoring instruments used the world over.

Even if your environment monitoring equipment performs as specified in the warranty sheet supplied with it, you may still not be convinced that it will satisfy your requirements. Minimal time spent on maintenance and calibration, along with portability, durability, and high performance, can reasonably be expected if both the brand and the supplier are chosen with care. Brand reputation and market expertise are two key determinants of quality and performance levels.

Procuring environment monitoring instruments and allied products is not the end of the matter; on-site maintenance and support from your supplier also count. Further, condition monitoring instruments that run for long periods without breakdowns will save you time. Buyers should keep a few factors in mind before purchasing these products: quality, precision, construction, measurement accuracy, warranty, and innovative features, among others.


Ten Important Lessons From the History of Mergers & Acquisitions

Jul 13 2020 Published by under Uncategorized

The history of mergers and acquisitions in the United States comprises a series of five distinct waves of activity. Each wave occurred at a different time, and each exhibited unique characteristics related to the nature of the activity, the sources of funding for it, and, to some extent, differing levels of success from wave to wave. When the volume, nature, mechanisms, and outcomes of these transactions are viewed in an objective historical context, important lessons emerge.


The First Wave

The first substantial wave of merger and acquisition activity in the United States occurred between 1898 and 1904. The normal level of about 70 mergers per year leaped to 303 in 1898, crested at 1,208 in 1899, and remained above 300 every year until 1903, when it dropped to 142; in 1904 it fell back to 79, within what had been the normal range for the period. Industries accounting for the bulk of the activity during this first wave included primary metals, fabricated metal products, transportation equipment, machinery, petroleum products, bituminous coal, chemicals, and food products. By far the greatest motivation for these actions was expansion of the business into adjacent markets: in fact, 78% of the mergers and acquisitions occurring during this period resulted in horizontal expansion, and another 9.7% involved both horizontal and vertical integration.


During this era in American history, the business environment surrounding mergers and acquisitions was much less regulated and much more dynamic than it is today. There was very little in the way of antitrust impediment, with few laws and even less enforcement.


The Second Wave

The second wave of merger and acquisition activity in American business occurred between 1916 and 1929. Having grown concerned about the rampant merger and acquisition activity of the first wave, the United States Congress was much warier of such activity by the time the second wave rolled around. The business monopolies resulting from the first wave had produced market abuses and a set of business practices the American public viewed as unfair. The Sherman Act had proved relatively ineffective as a deterrent to monopolistic practices, and so in 1914 Congress passed the Clayton Act to reinforce it. The Clayton Act was somewhat more effective, and proved particularly useful to the Federal Government in the late 1900s. In the years spanning 1926 to 1930 alone, a total of 4,600 mergers and acquisitions occurred, with the greatest concentrations in primary metals, petroleum products, chemicals, transportation equipment, and food products. The upshot of all these consolidations was that 12,000 companies disappeared and more than $13 billion in assets changed hands (17.5% of the country's total manufacturing assets).


The nature of the businesses formed was somewhat different in the second wave: there was a higher incidence of mergers and acquisitions undertaken to achieve vertical integration, and a much higher percentage of the deals produced conglomerates combining previously unrelated businesses. The second wave of acquisition and merger activity in the United States ended with the stock market crash on October 29, 1929, which altered, perhaps forever, the perspective of investment bankers on funding these transactions. Companies that grew to prominence through the second wave, and that still operate in this country today, include General Motors, IBM, John Deere (now Deere & Company), and Union Carbide.

The Third Wave

The American economy during the last half of the 1960s (1965 through 1970) was booming, and the growth of corporate mergers and acquisitions, especially those related to conglomeration, was unprecedented. That economic boom painted the backdrop for the third wave of mergers and acquisitions in American history. A peculiar feature of this period was the relatively common practice of companies targeting acquisitions larger than themselves. The period is sometimes referred to as the conglomerate merger period, owing in large measure to the dramatic spike in acquisitions of companies with over $100 million in assets. In the years preceding the third wave, mergers and acquisitions of companies that size were far less frequent: between 1948 and 1960 they averaged 1.3 per year, whereas between 1967 and 1969 there were 75 of them, an average of 25 per year. During the third wave, the FTC reports, 80% of the mergers that occurred were conglomerate transactions.


Although the most recognized conglomerate names from this period were huge corporations such as Litton Industries, ITT, and LTV, many small and medium-sized companies also pursued diversification. The diversification involved not only product lines but also the industries in which these companies chose to participate. As a result, most of the companies involved moved substantially outside what had been regarded as their core businesses, very often with deleterious results.


It is important to understand the difference between a diversified company, which has some subsidiaries in other industries but a majority of its production or services within one industry category, and a conglomerate, which conducts business in multiple industries without any real adherence to a single primary industry base. Boeing, which primarily produces aircraft and missiles, has diversified by moving into areas such as Exostar, an online exchange for aerospace & defense companies. ITT, by contrast, has conglomerated, with industry leadership positions in electronic components, defense electronics & services, fluid technology, and motion & flow control. While the companies merged or acquired in the long string of activity resulting in the current Boeing Company were almost all aerospace & defense companies, the acquisitions of ITT were far more diverse. In fact, just since becoming an independent company in 1995, ITT has acquired Goulds Pumps, Kaman Sciences, Stanford Telecom, and C&K Components, among other companies.


Since the rise of the third wave of mergers and acquisitions in the 1960s, there has been a great deal of pressure from stockholders for company growth. With conglomeration the only comparatively easy path to that growth, a lot of companies pursued it. That pursuit was funded differently in the third wave, however: it was not financed by the investment bankers who had sponsored the two previous waves. With the economy in expansion, interest rates were comparatively high and the criteria for obtaining credit had become more demanding, so this wave of merger and acquisition activity was executed through the issuance of stock. Financing the activity with stock avoided tax liability in some cases, and the resulting acquisition pushed up earnings per share even though the acquiring company was paying a premium for the stock of the acquired firm, using its own stock as the currency.

The use of this mechanism to boost EPS, however, becomes unsustainable as larger and larger companies are involved, because it rests on the assumption that the P/E ratio of the (larger) acquiring company will transfer to the entire stock base of the newly combined enterprise. Larger acquisitions represent larger percentages of the combined enterprise, and the market is generally less willing to give the new enterprise the benefit of that doubt. Eventually, when a large number of deals founded on this mechanism have occurred, the pool of suitable acquisition candidates is depleted and the activity declines. That decline is largely responsible for the end of the third wave of merger and acquisition activity.
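
To make the arithmetic behind this mechanism concrete, here is a minimal sketch in Python. All of the figures (companies, earnings, multiples, premium) are invented for illustration and do not come from the historical record:

```python
# EPS "bootstrap" via a stock-financed acquisition -- hypothetical numbers.
acq_earnings = 1_000_000    # acquirer's annual earnings ($)
acq_shares = 1_000_000      # acquirer's shares outstanding
acq_pe = 25                 # acquirer's price/earnings multiple

tgt_earnings = 1_000_000    # target's annual earnings ($)
tgt_pe = 10                 # target's (lower) P/E multiple
premium = 0.20              # premium paid over the target's market value

acq_price = acq_pe * acq_earnings / acq_shares      # $25.00 per share
deal_value = tgt_pe * tgt_earnings * (1 + premium)  # $12,000,000 paid in stock
new_shares = deal_value / acq_price                 # 480,000 new shares issued

eps_before = acq_earnings / acq_shares
eps_after = (acq_earnings + tgt_earnings) / (acq_shares + new_shares)
print(f"EPS before: ${eps_before:.2f}")   # $1.00
print(f"EPS after:  ${eps_after:.2f}")    # $1.35

# If the market keeps applying the acquirer's P/E of 25 to the combined
# earnings, the share price rises by the same ~35% with no operating
# improvement at all. The trick stops working once targets are large enough
# that the market refuses to transfer the acquirer's multiple to the whole
# stock base.
```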

One other mechanism used in a similar way, and with a similar result, in the third wave of merger and acquisition activity was the issuance of convertible debentures (debt securities that are convertible into common stock) in order to gather in the earnings of the acquired firm without being required to reflect an increase in the number of shares of common stock outstanding. The resulting bump in visible EPS was known as the bootstrap effect. Over the course of my own career, I have often heard similar tactics referred to as "creative accounting".
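
The debenture variant can be sketched the same way. Again, the numbers below are purely hypothetical (and taxes are ignored); the point is that reported earnings grow while the share count does not, at least until conversion:

```python
# Bootstrap effect via convertible debentures -- hypothetical numbers,
# taxes ignored for simplicity.
acq_earnings = 1_000_000     # acquirer's annual earnings ($)
acq_shares = 1_000_000       # shares outstanding (unchanged by the deal)

tgt_earnings = 1_000_000     # target's annual earnings ($)
deal_value = 12_000_000      # purchase price, paid in convertible debentures
coupon = 0.05                # assumed interest rate on the debentures

interest = deal_value * coupon                      # $600,000 per year
combined_earnings = acq_earnings + tgt_earnings - interest

print(f"EPS before: ${acq_earnings / acq_shares:.2f}")       # $1.00
print(f"EPS after:  ${combined_earnings / acq_shares:.2f}")  # $1.40

# Because no new common shares exist until the debentures convert, reported
# EPS jumps even though substantial dilution is waiting in the wings.
```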


Almost certainly, the most conclusive evidence that the bulk of conglomeration activity achieved through mergers and acquisitions is harmful to overall company value is the fact that so many of them are later sold or divested. For example, more than 60% of the cross-industry acquisitions that occurred between 1970 and 1982 had been sold or otherwise divested by 1989. The widespread failure of conglomerations has certainly been partly the result of overpaying for acquired companies, but overpaying is the unfortunate practice of many companies, conglomerate or not. In one recent interview I conducted with an extremely successful CEO in the healthcare industry, I asked him what actions he would most strongly recommend that others avoid when entering into a merger or acquisition. His response was immediate and emphatic: "Don't become enamored with the acquisition target," he replied. "Otherwise you will overpay. The acquisition has to make sense on several levels, including price."


The failure of conglomeration, then, springs largely from another root cause. Based on my own experience and the research I have conducted, I am reasonably certain that the most fundamental cause is the nature of conglomerate management. Implicit in the management of conglomerates is the notion that management can be done well in the absence of specialized industry knowledge, and that just isn't usually the case. Whatever the "professional management" business curricula offered by many institutions of higher learning these days may suggest, in most cases there is simply no substitute for industry-specific experience.


The Fourth Wave

The first indications that a fourth wave of merger and acquisition activity was imminent appeared in 1981, with a near doubling of the value of these transactions from the prior year. The surge then receded a bit before regaining serious momentum in 1984. According to Mergerstat Review (2001), just over $44 billion was paid in merger and acquisition transactions in 1980 (representing 1,889 transactions), compared with more than $82 billion (representing 2,395 transactions) in 1981. While activity fell back to between $50 billion and $75 billion over the ensuing two years, 1984 saw more than $122 billion and 2,543 transactions. The number of transactions peaked in 1986 at 3,336, and the dollar volume peaked in 1988 at more than $246 billion. The entire wave, then, is regarded by analysts as having occurred between 1981 and 1990.


A number of aspects of this fourth wave distinguish it from prior activity. The first is the advent of the hostile takeover. While hostile takeovers had been around since the early 1900s, they truly proliferated (more in terms of dollars than in percentage of transactions) during this fourth wave. In 1989, for example, the dollar volume transacted through contested tender offers was more than three times that of uncontested offers. This phenomenon was closely tied to another characteristic of the fourth wave: the sheer size and industry prominence of the acquisition targets. Referring again to Mergerstat Review's numbers published in 2001, the average purchase price paid in merger and acquisition transactions was $9.8 million in 1970, $13.9 million by 1975, and $49.8 million by 1980. At its peak in 1988, the average purchase price was $215.1 million. Exacerbating the situation was the volume of large transactions: the number of transactions valued at more than $100 million increased more than 23-fold between 1974 and 1986, a stark contrast to the small and medium-sized company deals typical of the 1960s.


Another factor that shaped this fourth wave of merger and acquisition activity in the United States was deregulation. Industries such as banking and petroleum were directly affected, as was the airline industry. Between 1981 and 1989, five of the ten largest acquisitions involved a company in the petroleum industry, as acquirer, target, or both. These included the 1984 acquisition of Gulf Oil by Chevron ($13.3 billion), the acquisition in that same year of Getty Oil by Texaco ($10.1 billion), the acquisition of Standard Oil of Ohio by British Petroleum in 1987 ($7.8 billion), and the acquisition of Marathon Oil by US Steel in 1981 ($6.6 billion). In the airline industry, deregulation exposed air fares to competitive pricing, and the increased competition severely eroded the financial performance of some carriers.


An additional look at the makeup of the ten largest acquisitions between 1981 and 1989 shows that relatively few of them extended the acquiring company's business into industries beyond its core. For example, among the five oil-related acquisitions, only two (DuPont's acquisition of Conoco and US Steel's acquisition of Marathon Oil) were out-of-industry expansions, and even in these cases one might argue they were "adjacent industry" expansions. Other acquisitions among the top ten were Bristol-Myers' $12.5 billion acquisition of Squibb (same industry: pharmaceuticals) and Campeau's $6.5 billion acquisition of Federated Department Stores (same industry: retail).


The final noteworthy aspect of the "top 10" list from the fourth wave of acquisitions is exemplified by the actions of Kohlberg Kravis, which performed two of these ten acquisitions: RJR Nabisco, the largest of them all at roughly $25 billion, and Beatrice, at $6.2 billion. Kohlberg Kravis was representative of what came to be known during the fourth wave as the "corporate raider". Corporate raiders such as Paul Bilzerian, who eventually acquired the Singer Corporation in 1988 after participating in numerous previous "raids", made fortunes for themselves by attempting corporate takeovers. Oddly, a takeover did not have to ultimately succeed for the raider to profit from it; the attempt merely had to drive up the price of the shares acquired along the way. In many cases, the raiders were actually paid off with corporate assets, in a practice called "greenmail", in exchange for the stock they had acquired in the attempted takeover.


Another term that entered the lexicon of the business community during this fourth wave of activity is the leveraged buy-out, or LBO. Kohlberg Kravis helped develop and popularize the LBO concept by creating a series of limited partnerships to acquire corporations that it deemed to be underperforming. In most cases, Kohlberg Kravis financed up to ten percent of the acquisition price with its own capital and borrowed the remainder through bank loans and the issuance of high-yield bonds. Usually, the target company's management was allowed to retain an equity interest, providing a financial incentive for them to approve the takeover.


The bank loans and bonds used the tangible and intangible assets of the target company as collateral. Because the bondholders only received their interest and principal payments after the banks were repaid, these bonds were riskier than investment grade bonds in the event of default or bankruptcy. As a result, these instruments became known as “junk bonds.” Investment banks such as Drexel Burnham Lambert, led by Michael Milken, helped raise money for leveraged buyouts. Following the acquisition, Kohlberg Kravis would help restructure the company, sell off underperforming assets, and implement cost-cutting measures. After achieving these efficiencies, the company was usually then resold at a significant profit.
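
A stripped-down numeric sketch shows why this structure was so attractive to sponsors. The figures below are invented for illustration (interest payments, fees, and taxes are ignored), and nothing here reflects an actual Kohlberg Kravis transaction; only the roughly ten-percent equity pattern follows the description above:

```python
# Leveraged buy-out economics -- hypothetical numbers; interest, fees, and
# taxes ignored. The ~10% equity share follows the pattern described above.
purchase_price = 100_000_000          # price paid for the target ($)
equity = 0.10 * purchase_price        # sponsor's own capital: $10M
debt = purchase_price - equity        # borrowed: $90M (bank loans + bonds)

# Suppose restructuring and asset sales retire a third of the debt, and the
# business is later resold at a modest 20% uplift to the purchase price.
resale_price = 1.20 * purchase_price  # $120M
debt_remaining = debt * (2 / 3)       # $60M still owed at exit

equity_at_exit = resale_price - debt_remaining   # $60M to the sponsor
print(f"Equity multiple: {equity_at_exit / equity:.1f}x")   # 6.0x

# Leverage turns a 20% gain in asset value into a 6x equity return -- and
# just as sharply magnifies losses if cash flow cannot service the debt,
# which is why the subordinated bonds were regarded as "junk".
```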


As one reviews the waves of acquisition and merger activity that have occurred in the United States, this much seems increasingly clear: while it is possible to profit from the creative use of financial instruments and from the clever buying and selling of companies managed as an investment portfolio, the real and sustainable growth in company value available through acquisitions and mergers comes from improving the newly formed enterprise's overall operating efficiency. Sustainable growth results from leveraging enterprise-wide assets after the merger or acquisition has occurred. That improvement in asset efficiency and leverage is most frequently achieved when management has a fundamental commitment to the ultimate success of the business and is not motivated purely by a quick, temporary escalation in stock price. This relates, in my view, to the earlier observation that industry-specific knowledge improves the likelihood of success when a new business is acquired. People who are committed to the long-term success of a company tend to pay more attention to the details of their business, and to the broader scope of technologies and trends within their industry.


A few other characteristics of the fourth wave of merger and acquisition activity should be mentioned before moving on. First, the fourth wave saw the first significant effort by investment bankers and management consultants of various types to advise acquisition and merger candidates in order to earn professional fees. In the case of the investment bankers, there was an additional opportunity in financing these transactions, which gave rise, in large measure, to the junk bond market that raised capital for acquisitions and raids. Second, the nature of the acquisition, and especially of the takeover, became more intricate and strategic. Both the takeover mechanisms and the defensive, anti-takeover methods and tools (e.g., the "poison pill") became increasingly sophisticated during the fourth wave.


The third characteristic in this category of "other unique characteristics" of the fourth wave was the increased reliance of acquiring companies on debt, and perhaps even more importantly, on large amounts of debt, to finance acquisitions. A significant rise in management teams acquiring their own firms with comparatively large quantities of debt helped fix the term leveraged buy-out, or LBO, in the lexicon of the Wall Street analyst.


The fourth characteristic was the advent of the international acquisition. The acquisition of Standard Oil of Ohio by British Petroleum for $7.8 billion in 1987 marked a change in the American business landscape, signaling a widening of the merger and acquisition arena to encompass foreign buyers and foreign acquisition targets. The deal is significant not only because it placed what had been considered a bedrock American company under foreign ownership, but also because of the sheer dollar volume involved. A number of factors contributed to this development, such as the fall of the US dollar against foreign currencies (making US investments more attractive) and the evolution of a global marketplace in which goods and services had become increasingly multinational in scope.


The Fifth Wave

The fifth wave of acquisition and merger activity began immediately following the American economic recession of 1991 and 1992. Some observers view the fifth wave as still ongoing, with an obvious interruption surrounding the tragic events of September 11, 2001, and the recovery period immediately following them. Others would say that it ended there and that, after the ensuing couple of years, we are now seeing the imminent rise of a sixth wave. Having no strong bias toward either view, for purposes of this discussion I will adopt the first position. Based on the value of transactions announced over the course of the respective calendar years, the dollar volume of total mergers and acquisitions in the US was $347.7 billion in 1993 (up from $216.9 billion in 1992), grew steadily to $734.6 billion in 1995, and expanded still further to $2,073.2 billion by 2000.


This group of deals differed from the previous waves in several respects, but arguably the most important difference was that the acquisitions and mergers of the 1990s were more thoughtfully orchestrated than in any previous foray. They were more strategic in nature and better aligned with what appeared to be relatively sophisticated strategic planning on the part of the acquiring company. This characteristic seems to have solidified into a primary feature of major merger and acquisition activity, at least in the US, which is encouraging for shareholders looking for sustainable growth rather than a quick but temporary bump in share price.


A second characteristic of the fifth wave of acquisitions and mergers is that the deals were typically funded more with equity than with debt. In many cases this worked out well, because less reliance on leverage requiring near-term repayment enabled the new enterprise to be more careful and deliberate about selling off assets to service debt created by the acquisition.


Even where both of these features were prominent aspects of the deal, however, not all such transactions have been successful. In fact, some of the biggest acquisitions of recent years have been the biggest disappointments. For example, just before the announcement of the acquisition of Time Warner by AOL, a share of AOL common stock traded for about $94. By the spring of 2003, the average share price was closer to $11.50, and in January 2005 a share was worth about $17.50. The AOL Time Warner merger was financed with AOL stock, and when the expected synergies did not materialize, market capitalization and shareholder value both collapsed; what had not been foreseen was the devaluation of the AOL shares used to finance the purchase. As analyst Frank Pellegrini reported in Time's online edition on April 25, 2002: "Sticking out of AOL Time Warner's rather humdrum earnings report Wednesday was a very gaudy number: A one-time loss of $54 billion. It's the largest spill of red ink, dollar for dollar, in U.S. corporate history and nearly two-thirds of the company's current stock-market value."

The fifth wave has also become known as the wave of the "roll-up". A roll-up consolidates a fragmented industry through a series of acquisitions by comparatively large companies (typically already within that industry) called consolidators. While the most widely recognized roll-ups occurred in the funeral industry, office products retailing, and floral products, roll-ups of significant magnitude also occurred in other industries, such as discrete segments of the aerospace & defense community.


Finally, the fifth wave of acquisitions and mergers was the first in which a very large percentage of total global activity occurred outside the United States. In 1990, the volume of transactions in the US was $301.3 billion, while the UK had $99.3 billion, Canada $25.3 billion, and Japan $14.2 billion. By the year 2000, the tide was shifting: the US still led with $2,073 billion, but the UK had escalated to $473.7 billion, Canada had grown to $230.2 billion, and Japan had reached $108.8 billion. By 2005, it was clear that global merger and acquisition activity was anyone's turf. According to barternews.com: "There was incredible growth globally in the M&A arena last year, with record-setting volume of $474.3 billion coming from the Asian-Pacific region, up 46% from $324.5 billion in 2004. In the U.S., M&A volume rose 30% from $886.2 billion in 2004. In Europe the figure was 49% higher than the $729.5 billion in 2004. Activity in Eastern Europe nearly doubled to a record $117.4 billion."


The Lessons of History

Many studies have focused on historical mergers and acquisitions, and a great deal has been published on the topic. Most of these studies concentrate on more contemporary transactions, probably owing to factors such as the availability of detailed information and the presumed greater relevance of recent activity. However, before sifting through the collective wisdom of that legion of contemporary studies, I think it is important to look at least briefly at the patterns of history reflected earlier in this article.


Casting a view backward over this long history of mergers and acquisitions, observing the relative successes and failures and the distinctive characteristics of each wave of activity, what lessons can be learned that could improve the chances of success in future M&A activity? Here are ten of my own observations:

  1. Silver bullets and statistics. The successes and failures reviewed over the course of this article reveal that virtually any type of merger or acquisition is subject to incompetent execution, and to ultimate failure. No combination of market segments, management approaches, financial backing, or environmental factors can guarantee success. But while there is no "silver bullet", there are approaches, tools, and circumstances that heighten or diminish the statistical probability of achieving sustainable long-term growth through an acquisition or merger.
  2. The ACL Life Cycle is fundamental. The companies that achieve sustainable growth using acquisitions and mergers as a mainstay of their business strategy are those that move deliberately through the Acquisition / Commonization / Leverage (ACL) Life Cycle. We saw evidence of that activity in the case of US Steel, Allied Chemical, and others over the course of this review.
  3. Integration failure often spells disaster. Failure to achieve enterprise-wide leverage through the commonization of fundamental business processes and their supporting systems can leave even the largest and most established companies vulnerable to defeat in the marketplace over time. We saw a number of examples of this situation, with the American Sugar Refining Company perhaps the most representative of the group.
  4. Environmental factors are critical. As we saw in our review of the first wave, factors such as the emergence of a robust transportation system and strong, resilient manufacturing processes enabled the success of many industrial mergers and acquisitions. So it has been more recently with the advent of information systems and the Internet. Effective strategic planning in general, and effective due diligence in particular, should always include a thorough understanding of the business environment and market trends. Too often, acquiring executives become enamored with the acquisition target (as mentioned in our review of third wave activity) and ignore contextual issues as well as fundamental business issues that should serve as warning signs.
  5. Conglomeration is challenging. There were repeated examples of the challenges associated with conglomeration in our review of the history of mergers and acquisitions in the United States. While it is possible to survive – and even thrive – as a conglomerate, the odds are substantially against it. The acquisitions and mergers that most often achieve sustainable long-term growth are those involving management with significant industry-specific and process-specific expertise. Remember the observation, made during our review of third wave activity, that "the most conclusive evidence that the bulk of conglomeration activity achieved through mergers and acquisitions is harmful to overall company value is the fact that so many of them are later sold or divested."
  6. Commonality holds value. Achieving significant commonality in fundamental business processes and the information systems that support them offers an opportunity for genuine synergy, and erects a substantive barrier against competitive forces in the marketplace. We saw this a number of times; Allied Chemical is especially illustrative. 
  7. Objectivity is important. As we saw in our review of second wave activity, when investment bankers exerted influence by vetoing questionable deals, there is considerable value in the counsel of objective outsiders. A well-suited advisor will not only bring a clear head and fresh eyes to the table, but will often introduce important evaluative expertise as a result of experience with other similar transactions, both inside and outside of the industry involved.
  8. Clarity is critical. We saw the importance of clarity around the expected impacts of business decisions in our review of the application of the DuPont Model and similar tools that enabled the ascension of General Motors. Applying similar methods and tools can provide valuable insights about what financial results may be expected as the result of proposed acquisition or merger transactions.
  9. Creative accounting is a mirage. The kind of creative accounting described by another author as "finance gimmickry" in our review of third wave activity does not generate sustainable value in the enterprise and, in fact, can prove devastating to companies that use it as the basis for their merger or acquisition activity.
  10. Prudence is important when selecting financial instruments to fund M&A transactions. We observed a number of cases where inflated stock values, high-interest debt instruments, and other questionable choices resulted in tremendous devaluation in the resulting enterprise. Perhaps the most illustrative example was the recent AOL Time Warner merger described in the review of fifth wave activity.

Many of these lessons from history are closely related, and tend to reinforce one another. Together, they provide an important framework of understanding about what types of acquisitions and mergers are most likely to succeed, what methods and tools are likely to be most useful, and what actions are most likely to diminish the company’s capability for sustainable growth following the M&A transaction.
