How to Prolong Battery Life

Setting Battery Performance Standards

A battery is a corrosive device that begins to fade the moment it comes off the assembly line. The stubborn behavior of batteries has left many users in awkward situations. The British Army could have lost the Falklands War in 1982 on account of uncooperative batteries. The officers assumed that a battery would always follow the rigid dictate of the military. Not so. When a key order was given to launch the British missiles, nothing happened. No missiles flew that day. Such battery-induced letdowns are common; some are simply a nuisance and others have serious consequences.
Even with the best of care, a battery only lives for a defined number of years. There is no distinct life span, and the health of a battery rests on its genetic makeup, environmental conditions and user patterns.
Lead acid reaches the end of life when the active material has been consumed on the positive grids; nickel-based batteries lose performance as a result of corrosion; and lithium-ion fades when the transfer of ions slows down for degenerative reasons. Only the supercapacitor achieves a virtually unlimited number of cycles, if this device can be called a battery, but it also has a defined life span.
Battery manufacturers are aware of performance loss over time, but there is a disconnect when educating buyers about the fading effect. Runtimes are always estimated with a perfect battery delivering 100 percent capacity, a condition that only applies when the battery is new. While a dropped phone call on a consumer product because of a weak battery may only inconvenience the cellular user, an unexpected power loss on a medical, military or emergency device can be more devastating.
Consumers have learned to take the advertised battery runtimes in stride. The information means little and there is no enforcement. Perhaps no other specification is as loosely given as that of battery performance. The manufacturers know this and get away with minimal accountability. Very seldom does a user challenge the battery manufacturer for failing to deliver the specified battery performance, even when human lives are at stake. Less critical failures have been debated in court and punished harshly.
The battery is an elusive scapegoat; it’s as if it holds special immunity. Should the battery quit during a critical mission, then this is a situation that was beyond control and could not be prevented. It was an act of God and the fingers point in other directions to assign the blame. Even auditors of quality-control systems shy away from the battery and consider only the physical appearance; state-of-function appears less important to them.

How to Rate Battery Runtime

In the past, the battery industry got away with soft standards specifying battery runtimes. Each manufacturer developed their own method, using the lightest load patterns possible to achieve good figures. This resulted in specifications that bore little resemblance to reality. Under pressure from consumer associations, manufacturers finally agreed to standardized testing procedures. 
The Camera and Imaging Products Association (CIPA) succeeded in developing a standardized battery-life test for digital cameras. Under the test scheme, the camera takes a photo every 30 seconds, half with flash and half without. The test zooms the lens in and out all the way before every shot and leaves the screen on. After every 10 shots, the camera is turned off for a while and the cycle is repeated. CIPA ratings replicate a realistic way a consumer would use a camera. Most new cameras adopt the CIPA protocol to rate the runtime.
The runtime of a laptop is more complex to estimate than that of a digital camera, as programs, type of activity, wireless features and screen brightness all affect the load. To take these conditions into account, the computer industry developed a standard called MobileMark 2007. Not everyone agrees with this norm, and opponents say that the convention trims the applications down and ignores real-world habits. The setting of brightness is one example. The monitor is one of the most power-hungry components of a modern laptop, and at full brightness the screen delivers 250 to 300 nits. MobileMark uses a setting that is less than half of this. Nor does MobileMark include Wi-Fi and Bluetooth; it leaves these peripherals up to the manufacturers to investigate. BAPCO (Business Applications Performance Corporation), the inventor of MobileMark 2007, is led by Intel and includes laptop and chip manufacturers, such as Advanced Micro Devices.
Cell phone manufacturers face similar challenges when estimating runtimes. Standby and talk time are field-strength dependent and the closer you are to a repeater tower, the lower the transmit power will get and the longer the battery will last. CDMA (Code Division Multiple Access) takes slightly more power than GSM (Global System for Mobile Communications); however, the more critical power guzzlers are large color displays, touch screens, video, web surfing, GPS, camera, voice dialing and Bluetooth. These peripherals drastically shorten the advertised runtime specifications if used frequently.
The insatiable appetite for information and entertainment on the go is devouring the excess energy enjoyed during the past 10 years when we used our cell phones for voice only. Although modern handsets draw considerably less power than older models and the battery capacity has doubled in 12 years, these improvements do not compensate for the modern peripherals, and a new energy crisis is in the making. Figure 1 illustrates the lack of energy with analog cell phones during the 1990s, the sudden excess with the digital phones, and the looming energy shortage when making full use of modern features. These power needs are superimposed on a continuously improving battery.


Figure 1: Power needs of the past, present and future
The capacity of Li-ion has doubled in 12 years and the circuits draw less power; however, these improvements do not compensate for the power demand of the new features, and a new energy crisis is in the making.
Courtesy of Cadex
Manufacturers of analog two-way radios test the runtime with schemes called 5-5-90 and 10-10-80. The first number represents the transmit time at high current; the second denotes the receiving mode at a more moderate current; and the third refers to the long standby times between calls at low current. While 5-5-90 simulates the equivalent of a 5-second talk, 5-second receive and 90-second standby, the 10-10-80 schedule puts the intervals at a 10-second talk, 10-second receive and 80-second standby. The runtimes of digital two-way radios are measured in a similar way, with the added complexity of tower distance and digital loading requirements that are reminiscent of a cellular phone.
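The duty-cycle arithmetic behind these schemes is simple enough to sketch. The currents below are illustrative assumptions for a handheld radio, not figures from any datasheet:

```python
def avg_current(duty, currents):
    """Weighted average current draw for a transmit/receive/standby duty cycle.

    duty: fractions of time spent in each mode (must sum to 1.0)
    currents: corresponding current draw of each mode, in amperes
    """
    assert abs(sum(duty) - 1.0) < 1e-9, "duty fractions must sum to 1"
    return sum(d, i)[0] if False else sum(d * i for d, i in zip(duty, currents))

# 5-5-90 scheme with assumed currents: 1.5 A transmit, 0.2 A receive,
# 0.02 A standby (illustrative values only)
i_avg = avg_current([0.05, 0.05, 0.90], [1.5, 0.2, 0.02])

# Estimated runtime of a 2.0 Ah pack at this average draw, in hours
runtime_h = 2.0 / i_avg
```

With these assumed currents the average draw works out to about 0.1 A, which is why a modest pack can last a full shift under a 5-5-90 profile.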

How to Define Battery Life

Most new batteries go through a formatting process during which the capacity gradually increases and reaches optimal performance at 100–200 cycles. After this mid-life point, the capacity gradually begins decreasing and the depth of discharge, operating temperatures and charging method govern the speed of capacity loss. The deeper the batteries are discharged and the warmer the ambient temperature is, the shorter the service life. The effect of temperature on the battery can be compared with a jug of milk, which stays fresh longer when refrigerated.
Most portable batteries deliver between 300 and 500 full discharge/charge cycles. Fleet batteries in portable devices normally work well during the first year; however, the confidence in the portable equipment begins to fade after the second and third year, when some batteries begin to lose capacity. New packs are added and in time the battery fleet becomes a jumble of good and failing batteries. That’s when the headaches begin. Unless date stamps or other quality controls are in place, the user has no way of knowing the history of the battery, much less the performance.
The green light on the charger does not reveal the performance of a battery. The charger simply fills the available space to store energy, and “ready” indicates that the battery is full. With age, the available space gradually decreases and the charge time becomes shorter. This can be compared to filling a jug with water. An empty jug takes longer to fill because it accepts more water than one filled with rocks. Figure 1 shows the “ready” light that often lies.
Figure 1: The “ready” light lies
The “ready” light on a charger only reveals that the battery is fully charged; there is no relationship to performance. A faded battery charges faster than a good one. Bad batteries gravitate to the top.
Courtesy of Cadex
Many battery users are unaware that weak batteries charge faster than good ones. Low performers gravitate to the top and become available by going to “ready” first. They form a disguised trap when unsuspecting users require a fully charged battery in a hurry. This plays havoc in emergency situations when freshly charged batteries are needed. The operators naturally grab batteries that show ready, presuming they carry the full capacity. Poor battery management is the common cause of system failure, especially during emergencies.
Failures are a part of life, and to reduce breakdowns, regulatory authorities have introduced strict maintenance and calibration guidelines for important machinery and instruments. Although the battery can be an integral part of such equipment, it often escapes scrutiny. The battery as a power source is seen as a black box, and for some inspectors the correct size, weight and color satisfy the requirements. For the users, however, state-of-function stands above regulatory discipline, and arguments arise over what’s more important: performance or satisfying a dogmatic mandate.
Ignoring the performance criteria of a battery nullifies the very reason why quality control is put in place. In defense of the quality auditor, batteries are difficult to check, and to this day there are only a few reliable devices that can check batteries with certainty.

How to Know End-of-Battery-Life

A critical concern among battery users is knowing “readiness,” or how much energy a battery has at its disposal at any given moment. While installing a fuel gauge on a diesel engine is simple, estimating the energy reserve of a battery is more complex — we still struggle to read state-of-charge (SoC) with reasonable accuracy. Even if SoC were precise, this information alone has limited benefits without knowing the capacity, the storage capability of a battery. Battery readiness, or state-of-function (SoF), must also include internal resistance, or the “size of pipe” for energy delivery. Figure 1 illustrates the bond between capacity and internal resistance using a fluid-filled container that erodes as part of aging; the tap symbolizes energy delivery.
Figure 1: Relationship of CCA and capacity of a starter battery
The liquid represents capacity, the leading health indicator; the tap symbolizes energy delivery or CCA. While the energy delivery remains strong, the capacity diminishes with age.
Courtesy Cadex
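The point of the fluid-container analogy — that readiness depends on capacity and internal resistance as well as state-of-charge — can be put into a toy calculation. The thresholds and pass/fail logic below are illustrative assumptions, not an actual SoF algorithm:

```python
def state_of_function(soc, capacity_pct, resistance_mohm,
                      min_energy_pct=30.0, max_resistance_mohm=300.0):
    """Crude readiness check combining SoC, capacity and resistance.

    soc: state-of-charge, 0-100 percent
    capacity_pct: remaining capacity relative to the rating, 0-100 percent
    resistance_mohm: internal resistance in milliohms ("size of pipe")
    Returns (available_energy_pct, ready). Available energy is the liquid
    in the container: SoC applied to the shrunken capacity.
    Thresholds are illustrative, not from any standard.
    """
    available = soc / 100.0 * capacity_pct  # percent of rated energy on hand
    ready = available >= min_energy_pct and resistance_mohm <= max_resistance_mohm
    return available, ready

# A full charge on a pack faded to 60 percent still yields only
# 60 percent of the rated energy
energy, ok = state_of_function(soc=100, capacity_pct=60, resistance_mohm=150)
```

The same half-charged pack with a swollen resistance would fail the check even with energy left in it, which is the “restricted tap” case the figure describes.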
Most batteries for critical missions feature a monitoring system, and stationary batteries were among the first to receive supervision in the form of voltage checks of individual cells. Some systems also include cell temperature and current measurement. Knowing the voltage drop of each cell at a given load provides the cell resistance. Elevated resistance hints at cell failure caused by plate separation, corrosion and other malfunctions. Battery management systems (BMS) are also used in medical equipment, military devices and electric vehicles.
Although the BMS serves an important role in supervising batteries, such systems often fall short of expectations, and here is why. The BMS device is matched to a new battery and does not adjust well to aging. As the battery gets older, the accuracy goes down and in extreme cases the data becomes meaningless. Most BMS also lack bandwidth in that they only reveal anomalies once the battery performance has dropped to 70 percent. The all-important 70–100 percent operating range is difficult to gauge, and the BMS gives the battery a good bill of health. This prevents end-of-life prediction in that the operator must wait for the battery to show signs of wear before making a judgment. These shortcomings are not an oversight by the manufacturers, and engineers are trying to overcome them. The problem boils down to technology, or the lack thereof. Over-expectation is common, and the user is stunned when stranded with a dead battery. Let’s look at how current systems work and examine new technologies.
The most simplistic method to determine end-of-battery-life is by applying a date stamp or observing cycle count. While this may work for military and medical instruments, such a routine is ill suited for commercial applications. A battery with less use has lower wear-and-tear than one in daily operation, and to assure reliability of all batteries, the authorities may mandate that all batteries be replaced sooner. A one-size-fits-all system causes good batteries to be discarded too soon, leading to increased operational costs and environmental concerns.
Laptops and other portable devices use coulomb counting for SoC readout. The theory goes back 250 years when Charles-Augustin de Coulomb first established the “Coulomb Rule.” Coulomb counting works on the principle of measuring in- and out-flowing current of a battery. If, for example, a battery is charged for one hour at one ampere, the same energy should be available on discharge, but this is not the case. Internal losses and inaccuracies in capturing current flow add to an unwanted tracking error that must be corrected with periodic calibrations.
Calibration occurs naturally when running the equipment down. A full discharge sets the discharge flag, and the subsequent recharge establishes the charge flag (Figure 2). These two markers allow the calculation of state-of-charge by estimating the distance between the flags.

Figure 2: Discharge and charge flags
Calibration occurs by applying a full charge, discharge and charge. This can be done in the equipment or externally with a battery analyzer as part of battery maintenance.
Courtesy Cadex
Coulomb counting should be self-calibrating, but in real life a battery does not always get a full discharge at a steady current. The discharge may come in the form of a sharp pulse that is difficult to capture. The battery may then be partially recharged and stored at high temperature, causing elevated self-discharge that cannot be tracked. To correct the tracking error, a “smart battery” in use should be calibrated once every three months or after 40 partial discharge cycles. This can be done by a deliberate discharge of the equipment or externally with a battery analyzer. Avoid too many intentional deep discharges, as this stresses the battery.
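A minimal coulomb counter with the two calibration flags described above might look like the sketch below. Real fuel gauges also compensate for charge losses, temperature and self-discharge, all omitted here:

```python
class CoulombCounter:
    """Toy coulomb counter with full-discharge/full-charge calibration flags.

    Positive current means charging, negative means discharging (amperes).
    Illustrative sketch only; not a production fuel-gauge algorithm.
    """

    def __init__(self, rated_capacity_ah):
        self.capacity_ah = rated_capacity_ah  # learned between the two flags
        self.charge_ah = rated_capacity_ah    # assume full at start
        self._accumulated = 0.0               # Ah charged since discharge flag

    def step(self, current_a, dt_h):
        """Integrate current over one time step of dt_h hours."""
        delta = current_a * dt_h
        self.charge_ah = min(max(self.charge_ah + delta, 0.0), self.capacity_ah)
        self._accumulated += max(delta, 0.0)

    def discharge_flag(self):
        """Equipment ran flat: set the lower marker."""
        self.charge_ah = 0.0
        self._accumulated = 0.0

    def charge_flag(self):
        """Charger reports full: the Ah counted in between is the usable capacity."""
        self.capacity_ah = self._accumulated
        self.charge_ah = self.capacity_ah

    @property
    def soc(self):
        """State-of-charge in percent of the learned capacity."""
        return 100.0 * self.charge_ah / self.capacity_ah

# Calibrate a faded pack rated at 2.0 Ah: after a full discharge,
# only 1.6 Ah flows back in before the charger reports full.
gauge = CoulombCounter(rated_capacity_ah=2.0)
gauge.discharge_flag()
for _ in range(16):                  # charge at 1 A for 1.6 h in 0.1 h steps
    gauge.step(current_a=1.0, dt_h=0.1)
gauge.charge_flag()
```

After calibration the gauge reports the learned 1.6 Ah rather than the 2.0 Ah rating, which is exactly the “distance between the flags” the text describes.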
Fifty years ago, the Volkswagen Beetle had few battery problems. The only battery management was ensuring that the battery was being charged while driving. Onboard electronics for safety, convenience, comfort and pleasure have added to the demands of the battery in modern cars. For the accessories to function reliably, the battery state-of-charge must be known at all times. This is especially critical with start-stop technologies, a future requirement in European cars to improve fuel economy.
When the engine of a start-stop vehicle turns off at a stoplight, the battery continues to draw 25–50 amperes to feed the lights, ventilators, windshield wipers and other accessories. The battery must have enough charge to crank the engine when the traffic light changes; cranking requires a brief 350A. To reduce engine loading during acceleration, the BMS delays charging for about 10 seconds.
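The figures in this paragraph allow a back-of-envelope charge budget for a single stop. The 60-second stop, one-second crank and 70 Ah battery are assumptions for illustration; the 25–50 A accessory draw and 350 A cranking current come from the text:

```python
# Rough charge budget for one stop-start event.
# Assumed: 60 s stop, 40 A accessory draw (mid-range of the 25-50 A cited),
# 350 A cranking for 1 s, and an assumed 70 Ah starter battery.
stop_s, accessory_a = 60.0, 40.0
crank_s, crank_a = 1.0, 350.0
capacity_ah = 70.0

# Amp-hours drawn during the stop plus the crank (3600 s per hour)
drawn_ah = (accessory_a * stop_s + crank_a * crank_s) / 3600.0

# Resulting dip in state-of-charge, in percent of capacity
soc_drop_pct = 100.0 * drawn_ah / capacity_ah
```

Each stop costs only about one percent of capacity under these assumptions, but a commute with dozens of stops, followed by the 10-second charging delay on each acceleration, adds up — which is why start-stop duty is so hard on the battery.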
Modern cars are equipped with a battery sensor that measures voltage, current and temperature. Packaged in a small housing and embedded into the positive battery clamp, the electronic battery monitor (EBM) provides a SoC accuracy of about ±15 percent on a new battery. As the battery ages, the EBM begins to drift and the accuracy drops to 20–30 percent. This can result in false warning messages, and some garage mechanics disconnect the EBM on an aging battery to stop the annoyance. Disabling the control system responsible for the start-stop function immobilizes engine stop and undermines the vehicle’s legal clean-air requirement.
Voltage, current and temperature readings are insufficient to assess battery SoF; the all-important capacity is missing. Until capacity can be measured with confidence on board a vehicle, the EBM will not offer reliable battery information. Capacity is the leading health indicator that in most cases determines the end-of-battery-life. Imagine measuring the liquid in a container that is continuously shrinking in size. State-of-charge alone has limited benefit if the storage has shrunk from 100 to 20 percent and this change cannot be measured. Capacity fade may not affect engine cranking, and the CCA can remain at a vigorous 70 percent to the end of battery life. Because of reduced energy storage, a low-capacity battery charges quickly and has normal vital signs, but failure is imminent. A biannual capacity check as part of service can identify low-capacity batteries. Battery testers that read capacity are becoming available at garages.
A typical start-stop vehicle goes through about 2,000 micro cycles per year. Test data obtained from automakers and the Cadex laboratories indicate that the battery capacity drops to approximately 60 percent in two years when in a start-stop configuration. The standard flooded lead acid is not robust enough for start-stop, and carmakers use a modified AGM (Absorbent Glass Mat) to attain longer life.
Automakers want to make sure that no driver gets stuck in traffic with a dead battery. To conserve energy when SoC is low, the BMS automatically turns unnecessary accessories off and the motor stays running at a stoplight. Even with this preventive measure, SoC can remain low when commuting in gridlock. Motor idling does not provide much charge and with essential accessories engaged, such as lights and windshield wipers, the net effect could be a small discharge.
Battery monitoring is also important in hybrid vehicles to optimize charge levels. The BMS prevents stressful overcharge above 80 percent and avoids deep discharges below 30 percent SoC. At low charge level, the internal combustion engine engages earlier and is left running for additional charge.
The driver of an electric vehicle (EV) expects similar accuracies on the energy reserve as is possible with a gasoline-powered car. Current technologies do not allow this and some EV drivers might get stuck with an empty battery when the fuel gauge still indicates reserve. Furthermore, the EV driver anticipates that a fully charged battery will travel the same distance, year after year. This is not possible and the range will decrease as the battery fades with age. Distances between charges will also be shorter than normal when driving in cold temperatures because of reduced battery performance.
Some lithium-ion batteries have a very flat discharge curve, and the voltage method does not work well to provide SoC in the mid-range. An innovative new technology is being developed that measures battery SoC by magnetic susceptibility. Quantum magnetism (Q-Mag™) detects magnetic changes in the electrolyte and plates that correspond to state-of-charge. This provides accurate SoC detection in the critical 40–70 percent mid-section. More importantly, Q-Mag™ allows measuring SoC while the battery is being charged and is under load.
The lithium iron phosphate battery in Figure 3 shows a clear decrease in relative magnetic field units while discharging and an increase while charging, which relates to SoC. We see no rubber-band effect that is typical with the voltage method, in which the weight of discharge lowers the terminal voltage and the charge lifts it up. Q-Mag™ also permits improved full-charge detection; however, the system only works with cells in plastic, foil or aluminum enclosures. Ferrous metals inhibit the magnetic field.

Figure 3: Magnetic field measurements of a lithium iron phosphate during charge and discharge
Relative magnetic field units provide accurate state-of-charge of lithium- and lead-based batteries.
Courtesy of Cadex (2011)
Q-Mag™ also works with lead acid. This opens the door to monitor starter batteries in vehicles. Figure 4 illustrates the Q-Mag™ sensor installed in close proximity to the negative plate. Knowing the precise state-of-charge at any given moment optimizes charge methods and identifies battery deficiencies, including the end-of-battery-life with on-board capacity estimations.

Figure 4: Q-Mag™ sensor installed on the side of a starter battery  
The sensor measures the SoC of a battery by magnetic susceptibility. When discharging a lead acid battery, the negative plate changes from lead to lead sulfate. Lead sulfate has a different magnetic susceptibility than lead, which a magnetic sensor can measure.
Courtesy of Cadex (2009)
Q-Mag™ is also a candidate to monitor stationary batteries. The sensing mechanism does not need to touch the electrical poles for voltage measurements and this poses an advantage for high-voltage batteries. Furthermore, Q-Mag™ can assist EVs by providing SoF accuracies not possible with conventional BMS. Q-Mag™ may one day assist in the consumer market to test batteries by magnetism. It is conceivable that one day an iPhone or iPad can be placed on a test mat, similar to a charging mat, and read battery SoC and performance.

Summary

Just as a medical test at the doctor’s office or a weather forecast broadcast through the media requires multiple data points to derive a result, no single instrument can fully assess a battery. Several methods are needed. Voltage as a medium to estimate SoC works best if the battery type is known and the pack has rested for at least four hours. A charge or discharge falsifies the voltage, and the battery needs several hours to neutralize. Temperature also alters the voltage; it is lower when hot and higher when cold. Coulomb counting offers better SoC accuracy but requires periodic calibration. The coulomb method is not immune to battery aging, and the accuracy decreases with time. Measuring the internal resistance gives vital battery information, but it presents only a snapshot and cannot predict end-of-battery-life. The resistance of most batteries stays low while the capacity fades as part of aging. Measuring battery state-of-charge with magnetic susceptibility is promising, but the technology is still in research laboratories. In time, new technologies will evolve that promise clearer insight into the mystery of a battery. This moment cannot come quickly enough.

Battery Failure, Real or Perceived

Battery manufacturers use capacity to specify battery performance, and a new battery should have 100 percent. This means that a 2Ah battery should deliver two amperes for one hour. If the battery quits after 30 minutes, then the capacity is only 50 percent. Manufacturers use capacity to specify warranty obligations. Depending on chemistry and application, the warranty threshold is set between 70 and 80 percent of the specified full capacity.
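The capacity arithmetic in the example above can be expressed directly. The function names are ours, and the 80 percent default reflects the 70–80 percent warranty range mentioned in the text:

```python
def capacity_percent(rated_ah, delivered_a, runtime_h):
    """Measured capacity relative to rating, from a constant-current discharge."""
    return 100.0 * (delivered_a * runtime_h) / rated_ah

def warranty_claim(rated_ah, delivered_a, runtime_h, threshold_pct=80.0):
    """True if the pack has fallen below the warranty threshold.

    The 80 percent default is illustrative; actual thresholds vary
    between 70 and 80 percent by chemistry and application.
    """
    return capacity_percent(rated_ah, delivered_a, runtime_h) < threshold_pct

# The example from the text: a 2 Ah battery that quits after 30 minutes at 2 A
pct = capacity_percent(rated_ah=2.0, delivered_a=2.0, runtime_h=0.5)
```

With half the runtime gone, the measured capacity is 50 percent — well inside warranty territory under either threshold.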
How does the user know when to claim warranty failure on a battery, or when to replace a pack that no longer performs as expected? Battery replacement has been an ongoing problem and the lack of easy-to-use testing procedures is in part to blame. On one hand, an aging battery may be kept too long until it begins affecting operation, while on the other hand perfectly good batteries are being replaced because of equipment problems or operator misapprehension. This commonly occurs with consumer products under warranty, especially cell phones. If the charge on a cell phone does not hold, the user naturally blames the battery when in many cases the fault lies in the device.
Cell phone manufacturers say that 90 percent of batteries returned under warranty have no problem, and tests conducted in the Cadex laboratories confirm this finding. Many storefronts replace the batteries on the faintest complaint, and this frivolous battery return policy costs the manufacturers millions of dollars per year. Unrealistic expectations, perceived performance loss and lack of practical testing equipment contribute to this wasteful battery exchange behavior.
Generous battery replacement policies are not limited to portable equipment alone: one German manufacturer of luxury cars points out that out of 400 starter batteries returned under warranty, 200 are working well and have no problem. Low charge and acid stratification are the most common causes of the apparent failure. This problem is more frequent with large luxury cars featuring power-hungry accessories than with the more basic models. A genuine factory defect is seldom the cause, and a leading European manufacturer of starter batteries says that factory defects cause less than seven percent of the returned warranty batteries. Similar to the cell phone industry, the manufacturer of the starter battery must take responsibility for a problem that may be customer-induced.
Battery failure is the largest complaint among new car owners in Japan. Motorists there drive an average of 13 km (8 miles) per day in congested cities. With this stop-and-go pattern, the battery has little chance to get fully charged and sulfation occurs. North America may be shielded from such battery problems in part because of long-distance driving. Read more about Sulfation.

Capacity Loss

The energy storage of a battery can be divided into three imaginary segments known as the available energy, the empty zone that can be refilled, and the unusable part, or rock content, that has become inactive. Figure 1 illustrates these three sections.

Figure 1: Aging battery
Batteries begin fading from the day they are manufactured. A new battery should deliver 100 percent capacity; most packs in use operate at less.
Courtesy of Cadex
The manufacturer bases the runtime of a device on a battery that performs at 100 percent; most packs in the field operate at less capacity. As time goes on, the performance declines further and the battery gets smaller in terms of energy storage. Most users are unaware of capacity fade and continue to use the battery. A pack should be replaced when the capacity drops to 80 percent; however, the end-of-life threshold can vary according to application, user preference and company policy.
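The three-segment picture can be put into a small model. The numbers in the example are illustrative, and the 80 percent replacement threshold is the one cited above:

```python
def battery_zones(rated_ah, capacity_pct, soc_pct):
    """Split rated energy into the three imaginary segments of the figure.

    capacity_pct: usable fraction remaining; the rest is 'rock content'
    soc_pct: state-of-charge of the usable part
    Returns (available_ah, empty_ah, rock_ah); the three always sum
    to the rating.
    """
    usable = rated_ah * capacity_pct / 100.0
    rock = rated_ah - usable            # inactive material, cannot be refilled
    available = usable * soc_pct / 100.0
    empty = usable - available          # refillable by the charger
    return available, empty, rock

# An aged 2 Ah pack at 75 percent capacity, half charged
avail, empty, rock = battery_zones(rated_ah=2.0, capacity_pct=75, soc_pct=50)
```

Note that a charger only sees the first two segments; the rock content is invisible to it, which is why a “full” light says nothing about capacity.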
Besides age-related losses, sulfation and grid corrosion are the main killers of lead acid batteries. Sulfation is a thin layer of lead sulfate that forms on the negative cell plates if the battery is allowed to dwell in a low state-of-charge. If caught in time, an equalizing charge can reverse the condition. Read about Sulfation. Grid corrosion can be reduced with careful charging and optimization of the float charge. With nickel-based batteries, the so-called rock content is often the result of crystalline formation, also known as “memory,” and a full discharge can sometimes restore the battery. The aging process of lithium-ion is cell oxidation, a process that occurs naturally as part of usage and aging and cannot be reversed.

Rising Internal Resistance

High battery capacity is of limited use if the pack cannot deliver the stored energy effectively. To supply power, the battery needs low internal resistance. Measured in milliohms (mΩ), resistance is the gatekeeper of the battery; the lower the resistance, the less restriction the pack encounters. This is especially important on heavy loads such as power tools and electric powertrains. High resistance causes the voltage to collapse on a load, triggering an early shutdown. Figure 1 illustrates low and high resistance batteries in the form of free-flowing and restricted taps.

Figure 1: Effects of internal battery resistance
A battery with low internal resistance delivers high current on demand. High resistance causes the battery voltage to collapse. The equipment cuts off, leaving energy behind.
Courtesy of Cadex
Lead acid has a very low internal resistance, and the battery responds well to high-current bursts that last for a few seconds. Due to inherent sluggishness, however, lead acid does not perform well on a sustained high-current discharge, and the battery needs a rest to recover. Sulfation and grid corrosion are the main contributors to the rise of internal resistance. Temperature also affects the resistance; heat lowers it and cold raises it. Heating the battery will momentarily lower the internal resistance to provide extra runtime.
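The voltage collapse described in this section follows from Ohm’s law: the terminal voltage under load is the open-circuit voltage minus the drop across the internal resistance. The 12V figures and the 10.5V cutoff below are illustrative assumptions:

```python
def loaded_voltage(ocv, resistance_ohm, load_a):
    """Terminal voltage under load: OCV minus the internal-resistance drop."""
    return ocv - load_a * resistance_ohm

def cuts_off(ocv, resistance_ohm, load_a, cutoff_v):
    """True if the equipment's low-voltage cutoff trips under this load."""
    return loaded_voltage(ocv, resistance_ohm, load_a) < cutoff_v

# Illustrative 12 V pack under a 50 A burst: a fresh battery at an assumed
# 20 mOhm holds up, while an aged one at 120 mOhm collapses below an
# assumed 10.5 V equipment cutoff, leaving stored energy behind.
fresh = loaded_voltage(12.6, 0.020, 50)   # about 11.6 V
aged = loaded_voltage(12.6, 0.120, 50)    # about 6.6 V
```

The aged pack may still hold plenty of charge, but the load pulls its voltage below the cutoff, which is the “energy left behind” case in the figure.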
Alkaline, carbon-zinc and other primary batteries have a relatively high internal resistance, and this limits their use to low-current applications such as flashlights, remote controls, portable entertainment devices and kitchen clocks. As these batteries discharge, the resistance increases further. This explains the relatively short runtime when using alkaline cells in digital cameras.

Elevating Self-discharge

All batteries are affected by self-discharge. Self-discharge is not a manufacturing defect but a battery characteristic, although poor manufacturing practices and improper handling can increase the problem. Figure 1 illustrates self-discharge in the form of leaking fluid.
Figure 1: Effects of high self-discharge
Self-discharge increases with age, cycling and elevated temperature. Discard a battery if the self-discharge reaches 30 percent in 24 hours.
Courtesy of Cadex
The amount of electrical self-discharge varies with battery type and chemistry. Primary cells such as lithium and alkaline retain the stored energy best and can be kept in storage for several years. Among rechargeable batteries, lead acid has the lowest self-discharge and loses only about five percent per month. With age and usage, however, the flooded lead acid builds up sludge in the sediment trap, which causes a soft short when this semi-conductive substance reaches the plates.
Nickel-based rechargeable batteries leak the most and need recharging before use when placing them on the shelf for a few weeks. High-performance nickel-based batteries have a higher self-discharge than the standard versions. Furthermore, self-discharge increases with use and age, and the contributing factors are crystalline formation (memory), permitting the battery to “cook” in the charger or exposing it to repeated harsh deep discharge cycles.
Lithium-ion self-discharges about five percent in the first 24 hours and then loses 1 to 2 percent per month; the protection circuit adds another three percent per month. A faulty separator can lead to high self-discharge and, if critical, the electrical current can generate enough heat to cause, in extreme cases, a thermal breakdown. Table 2 shows the typical self-discharge of battery systems.
Battery System          Estimated Self-discharge
Primary lithium-metal   10% in 5 years
Alkaline                2–3% per year (7–10 years shelf life)
Lead-acid               5% per month
Nickel-based            10–15% in 24h, then 10–15% per month
Lithium-ion             5% in 24h, then 1–2% per month (plus 3% for safety circuit)

Table 2: Percentage of self-discharge in years and months. Primary batteries have considerably less self-discharge than secondary (rechargeable) batteries.
The energy loss is asymptotic, meaning that the self-discharge is highest right after charge and then tapers off. Nickel-based batteries lose 10 to 15 percent of their capacity in the first 24 hours after charge, then 10 to 15 percent per month. Figure 3 shows the typical loss of a nickel-based battery while in storage.
Figure 3: Self-discharge as a function of time
The discharge is highest right after charge and tapers off. The graph shows self-discharge of a nickel-based battery. Lead- and lithium-based systems have a lower self-discharge.
Courtesy of Cadex
The self-discharge on all battery chemistries increases at higher temperature and the rate typically doubles with every 10°C (18°F). A noticeable energy loss occurs if a battery is left in a hot vehicle. High cycle count and aging also increase self-discharge. Nickel-metal-hydride is good for 300-400 cycles, whereas the standard nickel-cadmium lasts for over 1,000 cycles before elevated self-discharge starts interfering with performance. The self-discharge on an older nickel-based battery can get so high that the pack loses its energy through leakage rather than normal use.
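The “rate doubles with every 10°C” rule of thumb above can be expressed as a simple scaling factor. The sketch below applies it to the lead acid figure from Table 2; it is an approximation, and actual rates vary by chemistry and condition.

```python
# Sketch: the rule of thumb that self-discharge roughly doubles
# with every 10°C rise in temperature.

def rate_at_temp(base_rate_pct, base_temp_c, temp_c):
    """Scale a self-discharge rate by 2 ** (temperature delta / 10)."""
    return base_rate_pct * 2 ** ((temp_c - base_temp_c) / 10)

# Lead acid: 5% per month at 25°C becomes ~20% per month at 45°C,
# such as in a hot vehicle.
print(rate_at_temp(5, 25, 45))  # 20.0
```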
Under normal circumstances the self-discharge of Li-ion is reasonably steady throughout its service life; however, a full state-of-charge and elevated temperature increase the self-discharge. These very same factors also affect longevity, a phenomenon that applies to most batteries. Table 4 shows the self-discharge per month of Li-ion at various temperatures and states-of-charge. The high self-discharge at full state-of-charge may come as a surprise to many. This explains in part the asymptotic self-discharge characteristic when removing a battery from the charger.
| Charge condition | 0°C (32°F) | 25°C (77°F) | 60°C (140°F) |
| --- | --- | --- | --- |
| Full charge | 6% | 20% | 35% |
| 40–60% charge | 2% | 4% | 15% |
Table 4: Self-discharge per month of Li-ion at various temperatures and state-of-charge
Self-discharge increases with rising temperature and higher SoC. 
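The combined effect of temperature and state-of-charge in Table 4 can be made concrete with a small lookup, useful when comparing storage conditions. The values are taken directly from the table; the function and its names are illustrative.

```python
# Sketch: Table 4 as a lookup to compare Li-ion storage conditions.
# Keys are (state of charge, temperature in °C); values are % loss per month.
SELF_DISCHARGE = {
    ("full", 0): 6, ("full", 25): 20, ("full", 60): 35,
    ("40-60%", 0): 2, ("40-60%", 25): 4, ("40-60%", 60): 15,
}

def monthly_loss(soc, temp_c):
    """Return the per-month self-discharge from Table 4."""
    return SELF_DISCHARGE[(soc, temp_c)]

# Partial charge at room temperature versus full charge in the heat:
print(monthly_loss("40-60%", 25), "vs", monthly_loss("full", 60))  # 4 vs 35
```

The comparison shows why a partial charge in cool storage is the preferred way to keep Li-ion on the shelf.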

Premature Voltage Cut-off

Not all stored battery energy can or should be used on discharge, and some reserve is almost always left behind on purpose after the equipment cuts off. There are several reasons for this.
Most cell phones, laptops and other portable devices turn off when the lithium-ion battery reaches 3V/cell on discharge, and at this point the battery has about five percent capacity left. Manufacturers choose this voltage threshold to leave a grace period before recharging becomes necessary. This period can last several months, until self-discharge lowers the voltage to about 2.5V/cell, at which point the protection circuit opens. Most packs become unserviceable when this occurs. See How to Awaken Sleeping Li-ion.
A battery on a hybrid car is seldom fully charged or discharged; most operate between 20 and 80 percent state-of-charge. This is the most effective working bandwidth of a battery; it also delivers the longest service life. A deep discharge causes undue stress, and the charge acceptance above 80 percent diminishes. The emphasis on an electric powertrain and industrial applications is to maximize service life rather than optimize runtime, as is the case with consumer products.
Power tools and medical devices with high current draw tend to push the battery voltage to an early cut-off. This is especially true if one of the cells has a high internal resistance, or if operating at cold temperature. These batteries may still have ample capacity left after the “cut-off;” discharging them with a battery analyzer at a moderate load will often give a residual capacity of 30 percent. Figure 1 illustrates the cut-off voltage graphically.
Figure 1: Illustration of equipment with high cut-off voltage
Portable devices do not utilize all available battery power and leave some energy behind.
Courtesy of Cadex 
Alkaline batteries are not suitable for high load applications because of elevated internal resistance. Cold temperature and a partially depleted cell cause the internal resistance to rise further. This advances the cut-off and much of the energy is left behind. See Function of Primary Batteries.
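The mechanism behind the advanced cut-off is the voltage drop across the internal resistance: the terminal voltage sags by I × R under load. The sketch below uses hypothetical figures for an alkaline cell (the cut-off, open-circuit voltage and resistance values are assumptions for illustration, not measured data).

```python
# Sketch: why high internal resistance advances the equipment cut-off.
# Terminal voltage sags by I * R under load; once it drops below the
# cut-off the device shuts down even though capacity remains.
# All numeric values below are illustrative assumptions.

def terminal_voltage(ocv_v, current_a, internal_resistance_ohm):
    """Open-circuit voltage minus the I*R drop under load."""
    return ocv_v - current_a * internal_resistance_ohm

CUTOFF_V = 0.9  # hypothetical per-cell cut-off of the equipment
ocv = 1.2       # assumed open-circuit voltage of a partly depleted cell

# Warm cell (~0.2 ohm) versus cold, depleted cell (~0.6 ohm) at 1 A load:
print(terminal_voltage(ocv, 1.0, 0.2))  # 1.0 V, above the cut-off
print(terminal_voltage(ocv, 1.0, 0.6))  # 0.6 V, below it; device stops
```

Lowering the load current (or warming the cell) reduces the sag, which is why a battery analyzer at a moderate load can still extract the remaining energy.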
