Macronutrient Balance

Unlike the micronutrients, the macronutrients (proteins, fats and carbohydrates) all contribute to dietary energy intake. Alcohol can also contribute to dietary energy. The effect of alcohol on health outcomes has been reviewed elsewhere and will not be revisited here, except to say that alcohol intakes below about 5% of dietary energy are recommended (NHMRC 1999, 2003). For a given energy intake, an increase in the proportion of one macronutrient necessarily involves a decrease in the proportion of one or more of the others. Thus, for example, a high fat diet is usually relatively low in carbohydrate and vice versa, and a high protein diet is relatively low in carbohydrate and/or fat.
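As a rough illustration of this trade-off, the energy shares can be computed from gram intakes using approximate energy conversion factors (the kJ/g factors and the example intakes below are illustrative assumptions, not figures from any survey discussed here):

```python
# Sketch: per cent of dietary energy contributed by each macronutrient,
# using approximate energy conversion factors (kJ per gram).
KJ_PER_G = {"protein": 17, "carbohydrate": 17, "fat": 37, "alcohol": 29}

def energy_shares(grams):
    """Return each macronutrient's share of total energy (per cent)."""
    kj = {name: g * KJ_PER_G[name] for name, g in grams.items()}
    total = sum(kj.values())
    return {name: round(100 * e / total, 1) for name, e in kj.items()}

# An illustrative day's intake of 100 g protein, 300 g carbohydrate, 80 g fat:
shares = energy_shares({"protein": 100, "carbohydrate": 300, "fat": 80})
# Because the shares must sum to 100%, raising the proportion of fat at a
# fixed energy intake necessarily lowers the proportion of the others.
```

The point of the sketch is simply that the percentages are not independent: at fixed total energy, any increase in one macronutrient's share must be offset elsewhere.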

There is a growing body of evidence that a major imbalance in the relative proportions of macronutrients can increase risk of chronic disease and may adversely affect micronutrient intake. However, the form of fat (eg saturated, polyunsaturated or monounsaturated or specific fatty acids) or carbohydrate (eg starches or sugars; high or low glycaemic) is also a major consideration in determining the optimal balance in terms of chronic disease risk. This has not always been given enough consideration in study design or interpretation.

There appears to be quite a wide range of relative intakes of proteins, carbohydrates and fats that are acceptable in terms of chronic disease risk. The risk of chronic disease (as well as the risk of inadequate micronutrient intake) may increase outside these ranges, but data in free-living populations are often limited at these extremes of intake. The Food and Nutrition Board of the Institute of Medicine, in constructing the US:Canadian Dietary Reference Intakes (FNB:IOM 2002), called this range the Acceptable Macronutrient Distribution Range (AMDR). In their document, they extensively reviewed the current evidence for adults in terms of outcomes such as body weight maintenance, obesity, CHD and LDL oxidation, stroke, Type 2 diabetes, hyperinsulinaemia and glucose tolerance, metabolic syndrome, cancer, osteoporosis, renal failure, renal stones, inflammatory disorders and risk of nutrient inadequacy, as well as some of these outcomes, plus birth weight and growth, for children. Much of the evidence is based on epidemiological studies with clinical endpoints, but these studies generally show associations rather than causality and are often confounded by other factors that can affect chronic disease outcomes.

Randomised controlled trials, which provide the most conclusive evidence of causality, are often lacking in relation to optimising macronutrient profile. Studies of individual macronutrients are particularly prone to confounding by the other changes to the diet that they necessitate (ie either the energy content and/or the proportion of the other macronutrients must also change relative to the control diet). For example, in assessing the effects of a high carbohydrate diet on a specific endpoint, the test diet must be relatively low in fat and/or protein, or must vary in its energy content. If a benefit or adverse effect is seen, it is not immediately clear which change is responsible for the observed outcome.

Given these limitations, an expert review of the evidence base described in the US:Canadian DRI review, together with consideration of papers published since the review, and dietary modelling to assess the effects of changes in macronutrients on micronutrients, was used to develop AMDRs for use with adults in Australia and New Zealand. It is important to remember that these recommendations are recommendations for otherwise healthy people and it is assumed that usual dietary intake will be at a level to maintain current body weight (ie these are not necessarily recommendations for optimal weight loss diets or for treatment or management of existing chronic disease conditions).

Dietary modelling involved two approaches. Firstly, an assessment was undertaken of 2-day adjusted daily diets reported in the 1995 National Nutrition Survey for Australia (ABS 1998) in relation to macronutrient profile, energy intake and EARs (or a proportion of the AIs) for all nutrients except sodium, fluoride, biotin, selenium, choline, chromium, iodine and molybdenum, for which reliable analytical food data were not available. For modelling purposes, vitamin D was also excluded as much of this can be accessed through the action of sunlight on skin. For those nutrients where an AI was set, a value of 83% of the AI was used in modelling as this gave a rough equivalence to the relativity between the EAR and RDI (ie it is 2CV below the AI, assuming a CV of 10% for the EAR, as used to derive RDIs where the variability in requirements is unknown). It is recognised that the National Nutrition Survey data were based on 24-hour recall and as such do not assess usual dietary intake in individuals. In this instance, however, the data were being used only as examples of one-day intakes actually consumed by individuals in the community, although they may not be typical of the individual's usual intake (ie examples of real as opposed to simulated or designed daily intakes).
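The arithmetic behind the 83% figure follows directly from the way RDIs are derived from EARs, and can be sketched as:

```python
# Sketch of the arithmetic behind the 83% AI cut-off used in the modelling.
# Where variability in requirements is unknown, RDIs are derived from EARs
# as RDI = EAR x (1 + 2 x CV), with an assumed CV of 10%.
CV = 0.10
rdi_to_ear = 1 + 2 * CV          # RDI = 1.2 x EAR
ear_fraction = 1 / rdi_to_ear    # so EAR is about 0.833 x RDI

# Applying the same relativity to nutrients that have only an AI gives
# the modelling cut-off of roughly 83% of the AI.
modelling_cutoff_pct = round(100 * ear_fraction)  # 83
```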

The second approach used linear programming to assess whether it was possible to design diets that conformed to the EARs and AIs as outlined above, for varying macronutrient and total energy intake profiles.

Where an RDI or AI had been set for one of the macronutrients (eg for protein or selected fatty acids), this has generally been used as the bottom end of the AMDR range for that nutrient, unless dietary modelling showed this to be problematic.

Protein
Low intakes of protein have been investigated in relation to impaired immune function and growth, as well as to low birth weight. Although protein malnutrition is uncommon in Australia and New Zealand, world wide, in conjunction with energy deficiency, it is responsible for more than half the deaths of young children (Pelletier et al 1995). In individuals with protein-energy malnutrition (PEM) immune responses are impaired (Keusch 1990), low intakes in pregnancy are correlated with a higher incidence of low birth weight (King 2000) and low intakes in early childhood result in stunting and wasting (Waterlow 1976).

In the US:Canadian Dietary Reference Intake review, the lower level of the AMDR was set at the level of the RDI (or Recommended Dietary Allowance in the US and Canada). This equates to about 10–11% of energy from protein. However, dietary modelling using linear programming with commonly consumed foods has shown that it is not possible to design diets based on commonly eaten foods at 10% energy from protein that reach the EARs for the micronutrients at energy intakes below about 15,000 kJ/day. Assessment of the one-day diets from the 1995 National Nutrition Survey of Australia (ABS 1998) confirmed this finding. Of the 10,852 adults in this survey, only six subjects with diets in the range of 10–11% energy from protein on the day of the survey conformed to their age/gender EARs. All were men and all had energy intakes in excess of 15,000 kJ/day. All but two had saturated fat intakes at 13% of energy or above, the other two having added sugar intakes of 26% and 43% of energy, which effectively diluted the per cent energy from protein. Both the analysis of the National Nutrition Survey and the linear programming of diets indicated that protein intakes of at least 15% of energy were required for most people to attain the EARs for micronutrients, especially at energy intakes below 15,000 kJ/day.

High protein intakes have been assessed in relation to a number of chronic diseases including cancer, renal disease, obesity, coronary artery disease and osteoporosis; however, the evidence is not convincing. In relation to cancer, no clear role for protein has emerged. For breast cancer, some studies have shown an effect (Hislop et al 1986, Lubin et al 1981, 1986, Toniolo et al 1994) while others have either shown none (Miller et al 1978, Phillips 1975) or a slight inverse effect (Decarli et al 1997). For other cancers such as lung (Lei et al 1996), oral and pharynx (Franceschi et al 1999), oesophageal (Gao et al 1994), and non-Hodgkin lymphoma (Chiu et al 1996, Ward et al 1994), no relationship was found. Indeed, Barbone et al (1993), Franceschi et al (1999) and Gao et al (1994) showed an inverse effect. High protein intake has, however, been shown to relate to upper digestive tract cancer (de Stephani et al 1999) and kidney cancer (Chow et al 1994).

Despite a clearly documented effect of protein on urinary calcium loss under controlled conditions, the evidence is inconsistent that within populations, individuals consuming self-selected diets with higher protein content have lower bone mass and/or increased fracture risk. This is hardly surprising since protein intake is only one of many factors, both dietary and non-dietary, that influence bone metabolism. Moreover, the assessment of many of these factors, including long-term dietary intake, in free-living individuals is not only difficult but also imprecise. Studies that address the influence of protein on bone status are included in the Appendix section. The general conclusion to be reached from these studies is that both low and high protein intakes may be detrimental to bone health and that diets containing moderate levels of protein (1.0–1.5 g/kg) are probably optimal for bone health (Kerstetter et al 2003).

Heaney (1998) suggested that one reason why protein intake does not always adversely affect bone is because in self-selected diets, increased protein intake is often associated with increased calcium intake. In consequence, it is likely to be more informative to evaluate diets not on their protein content alone but on their calcium to protein ratio. On the basis of the 1997 US calcium recommendations for middle-aged women, Heaney proposed a ratio of calcium to protein (mg to g) of 20 to 1. In a review of data on protein intake and BMD and/or fracture risk in elderly women (Bell & Whiting 2002), however, mean calcium to protein ratios of 15–17:1 (mg:g) were associated with both increased and decreased fracture risk. A measure of net acid excretion, such as the dietary protein to potassium ratio (Frassetto et al 1998), is likely to be a better predictor of urinary acid excretion than protein intake per se. Whiting et al (2002) have also observed that not only protein, but also potassium and phosphorus were significant predictors of BMD in men with adequate calcium intakes. In 1995, except for children ages 2–8 years, 10% or less of the Australian population consumed diets with calcium to protein ratios of >15.0 and it is likely that the same is true for the population of New Zealand.
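Heaney's ratio is straightforward to apply. As a sketch (the intake figures below are purely illustrative, not drawn from the surveys or studies cited):

```python
# Sketch: calcium-to-protein ratio expressed as mg calcium per g protein,
# as proposed by Heaney (20:1) and discussed by Bell & Whiting (15-17:1).
def ca_protein_ratio(calcium_mg, protein_g):
    return calcium_mg / protein_g

# Illustrative daily intakes only:
# 1200 mg calcium with 60 g protein meets the proposed 20:1 ratio,
# whereas 800 mg calcium with 80 g protein falls well below it.
high = ca_protein_ratio(1200, 60)  # 20.0
low = ca_protein_ratio(800, 80)    # 10.0
```

On this measure, a higher protein intake need not be detrimental provided calcium intake rises in proportion, which is the substance of Heaney's argument.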

High protein intakes have also been investigated in relation to adverse renal outcomes. Elite Australian male athletes are known to have a daily protein intake over 1.5 g/kg (Burke et al 1991). In healthy male athletes who consumed long-term daily protein intakes of up to 2.8 g protein/kg body weight, no negative effects on renal function were found, as indicated by glomerular filtration rate and by albumin and calcium excretion rates (Poortmans & Dellalieux 2000). In this Belgian study, the two groups of athletes investigated were body-builders and other well trained athletes with high and medium protein intake, respectively. The athletes underwent a 7-day nutrition record analysis as well as blood sample and urine collection to determine the potential renal consequences of a high protein diet. The body builders, who included protein supplements in their diet, on average consumed 16,335 ± 1,153 kJ/day and 169 ± 13 g of protein/day or 1.92 ± 0.13 g protein/kg/day. This group of trained athletes who consumed a high protein diet showed no evidence of short term renal stress. There is no published evidence that a diet containing up to 2.8 g protein/kg/day produces adverse effects on kidney metabolism in athletes. In addition, no association of protein intake with progressive renal insufficiency has been demonstrated (Brandle et al 1996).

Although high protein diets have been shown to cause hyperlipidaemia and arteriosclerosis in animal models, there is no evidence of this in man. Indeed, in the Nurses' Health Study cohort, protein intake was found to be inversely related to risk of CVD. The range of actual protein intake was, however, limited (Hu et al 1999) and a moderate relative intake (in terms of per cent energy) appeared to be almost as beneficial as a high intake (above 25% energy) when compared to intakes below 15% of energy. A number of studies have shown protein to be more satiating than fat or carbohydrate, but some have shown a positive correlation between protein intake and body fatness, body mass index or skinfold thickness (Buemann et al 1995, Rolland-Cachera et al 1995). On the other hand, a 6-month randomised trial demonstrated that replacing carbohydrate with protein improved weight loss as part of a fat-reduced diet (Skov et al 1999).

In the US:Canadian DRI review, in the light of the lack of consistent data on the effect of protein on chronic disease, the upper level of the AMDR for protein was simply set “to complement the AMDR for fat and carbohydrate”, giving an upper limit of 35% energy from protein. However, there is very limited information about the longer-term effects of diets in which protein provides >25% energy. Average usual intakes within the range 25–35% energy from protein are not reported in western populations, even in athletes. Reports of diets in which the per cent energy from protein is within this range tend to come from populations in Arctic regions, from pastoralists and hunter-gatherer groups, most frequently in circumstances under which energy intake is restricted (Speth 1989), rather than at times of ad libitum food intake.

In the laboratory study by McClellan et al (1930a,b) in which two men lived on a meat diet for a year without apparent ill effects (although calcium balance was negative), the per cent energy from protein ranged between 15 and 24%, except during a brief period when one of the men was asked to consume only lean meat (44% energy from protein). Within two days, this diet led to gastrointestinal disturbances, which resolved on resumption of the former diet. Similar symptoms are characteristic of the initial stages of ‘rabbit poisoning’ and were also seen briefly in two out of six subjects in whom nitrogen intake from a liquid formula diet was increased from 12 g to 36 g/day while energy intake remained constant but per cent energy from protein increased from about 10 to 30% (Oddoye & Margen 1979). Whether these symptoms would persist over the longer term is not known.

An analysis of the National Nutrition Survey of Australia showed that on the day of the survey, only 1.4% of subjects (n=152) had intakes at, or greater than, 30% protein and only 4.4% (n=480) were above 25% protein. Of those above 30% protein, none conformed to the EARs. Of those with protein intakes between 25 and 30% of energy, there were nine males who conformed to the EARs, with energy intakes ranging from 9,000–24,000 kJ/day (median 17,000 kJ/day) but all except one (at 15,000 kJ energy intake) also had saturated fat intakes well above 10% energy.

Linear modelling showed that it is possible to design diets of varying energy levels that conform to the EARs at protein intakes of 25–30% energy. However, given the lack of data about long term health effects of higher protein diets in largely sedentary western societies such as Australia and New Zealand, it would seem prudent to suggest an upper limit of 25% energy from protein for the general population, whilst recognising that for some highly active communities or certain individuals, higher intakes may be consistent with good health.

In conclusion, whilst diets as low as 10% of energy from protein will provide the protein required for maintenance and replacement of body tissues and for the necessary functional and structural proteins required by the body, intakes at or above 15% of energy from protein appear to be required to ensure that the EARs for micronutrients are met, particularly for people with energy requirements below about 15,000 kJ/day. It should be remembered, however, that the EARs are average requirements that, by definition, will be more than is physiologically required by half the individuals in the population. Similarly, whilst some highly active, apparently healthy, populations living in Arctic regions or living as pastoralists or hunter-gatherers appear to have diets in the region of 30% protein or more, this population level of intake is not seen in any western, largely sedentary, societies such as Australia and New Zealand, so potential long-term adverse effects in this lifestyle environment are unknown. A Working Party convened by the FAO in 1997 recommended that protein intakes be limited to no more than 2 g/kg/day for the general population (Durnin et al 1999). This would equate to about 150 g/day of protein for the standard man and about 120 g/day for the standard woman, or about 22–25% of energy using median population energy intakes. Until more is known about the long term effects of high protein diets in the context of the dominant lifestyles of western societies, a prudent upper level may therefore be 25% of energy from protein, which is also equivalent to the current 95th centile of intake in Australia and New Zealand.
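The conversion from the FAO Working Party's g/kg limit to grams per day and per cent of energy can be sketched as follows (the reference body weights of about 75 kg and 60 kg, the median energy intake of 11,000 kJ/day and the 17 kJ/g protein factor are illustrative assumptions, consistent with the approximate figures above):

```python
# Sketch: converting the FAO limit of 2 g protein/kg/day into grams per day
# and per cent of energy. Reference weights and the median energy intake
# are illustrative assumptions, not values stated by the Working Party.
PROTEIN_KJ_PER_G = 17
LIMIT_G_PER_KG = 2.0

man_kg, woman_kg = 75, 60             # assumed reference body weights
man_g = LIMIT_G_PER_KG * man_kg       # about 150 g/day
woman_g = LIMIT_G_PER_KG * woman_kg   # about 120 g/day

median_energy_kj = 11000              # assumed median population intake
pct_energy = 100 * man_g * PROTEIN_KJ_PER_G / median_energy_kj  # about 23%
```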

Fat and carbohydrate
The recommendations for total fat and total carbohydrates in relation to their contribution to total dietary energy are intimately related, as it is generally the balance of fat and carbohydrates in diets that has been studied in relation to chronic disease outcomes.

The FNB:IOM (2002) review concluded that the optimal range for total fat was from 20–35% energy. At this level, the risk for obesity, CHD and diabetes could be minimised whilst allowing for sufficient intake of essential nutrients and keeping saturated fats at moderate levels. In making their assessment, the FNB:IOM (2002) looked not only at total amounts of fats but also at the various types of fats.

In assessing the role of total fat in relation to maintenance of body weight, Sonko et al (1994) concluded that 15% of energy from fat was too low to maintain body weight in women, whilst Jequier (1999) showed that 18% fat is adequate, even with high physical activity. Some apparently healthy Asian communities have been reported to consume diets as low as 10% fat (Weisburger 1988), but they also have short stature, which may result from this low level of fat intake. For diets that are very low in total fat, the intake of essential fatty acids and fat-soluble vitamins (vitamins A, D, E and K) may also be compromised. Because of the types of foods that are often limited in very low fat diets (eg certain meats and dairy products), intakes of micronutrients such as zinc and iron as well as riboflavin, calcium and vitamin B12 may also be affected.

In the Australian National Nutrition Survey, only 7% of subjects had intakes on the day of the survey below 20% of energy from total fat, with only 2% being below 15% energy from fat. There were three men and one woman who had fat intakes from 19–21% who conformed to all of the EARs assessed. Three had energy intakes in the order of 8,000–9,000 kJ and one had an intake just above 15,000 kJ. In these subjects, protein intakes ranged from 17–22% of energy. Their saturated fat and added sugar intakes were also less than 10% energy. Dietary modelling also showed it was possible to design diets at 20% energy from total fat that would meet all other nutritional requirements. Below this level of energy from total fat it was more difficult to do so unless total energy intake was high. Considering all the above, a lower intake limit of 20% energy as fat seems prudent.

Epidemiological studies give mixed results in relation to whether high fat diets predispose to overweight or obesity and promote weight gain. However, intervention studies have shown that when fat intakes are relatively high, many individuals consume additional energy and gain weight, although this is often as much associated with changes in energy density in the diets as with fat per se (Glueck et al 1982, Lawton et al 1993, Lissner et al 1987, Poppitt & Swann 1998, Poppitt et al 1998, Prosperi et al 1997, Stubbs et al 1995b, Thomas et al 1992, Tremblay et al 1989, 1991). Inappropriate weight gain can worsen the metabolic consequences of obesity, particularly the risk of CHD. High fat diets are often, although not always (eg Mediterranean diet), accompanied by high saturated fat intake and through this mechanism can raise plasma LDL and further increase CHD risk. A meta-analysis of intervention studies by Yu-Poth et al (1999) showed that reduction in plasma cholesterol and LDL cholesterol was significantly correlated with reductions in per cent total fat, but these interventions also included a decrease in per cent saturated fat. Some case-control studies have shown an association between total fat and CHD risk, but it is difficult to disentangle the effects of the saturated fat. Consumption of diets high in fat (42 or 50% of energy) has also been shown to increase blood concentration of the prothrombin markers, blood coagulation factor VII and activated factor VII (Bladbjerg et al 1994, Larsen et al 1997), which are related to increased risk of CHD.

Dietary modelling with commonly consumed foods shows that if all fat consumed is low in saturated fat (ie saturated fatty acids provide 20% of energy from fat), a 35% fat diet would provide about 7% of total energy as saturated fat. Consuming a variety of fats will increase this level of saturated fatty acids. Thus if total fat exceeds about 35% of energy, it will be difficult for most people to avoid high intakes of saturated fat.
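The underlying arithmetic is a simple product of the two proportions:

```python
# Sketch: saturated fat as per cent of total energy, when saturated fatty
# acids make up 20% of fat energy on a diet providing 35% of energy as fat.
total_fat_pct_energy = 35
saturated_pct_of_fat = 20

saturated_pct_energy = total_fat_pct_energy * saturated_pct_of_fat / 100  # 7.0
```

Any higher proportion of saturated fatty acids within the fat consumed, or any higher total fat intake, raises this figure proportionally.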

Several studies have reported associations between higher fat intakes and increased insulin resistance as indicated by high fasting insulin concentrations, impaired glucose tolerance or impaired insulin sensitivity (Lovejoy & DiGirolamo 1992, Marshall et al 1991, Mayer et al 1993) as well as the development of Type 2 diabetes (West & Kalbfleisch 1971). However, other studies have not shown these associations (Coulston et al 1983, Liu et al 1983, Salmeron et al 2001). It is possible that the association seen in some studies was confounded by factors such as obesity and glycaemic index.

Epidemiological studies show inconsistent links between per cent energy from fat and cancer risk. One meta-analysis of 23 studies of breast cancer and fat gave RR values of 1.01 and 1.21 from cohort and case-control studies, respectively, for people with higher fat intakes. Howe et al (1997) could show no association between fat intake and colorectal cancer from a combined analysis of 13 case-control studies, and Smith-Warner et al (2002) could show no associations between intakes of total or specific types of fat and lung cancer risk among never, past, or current smokers. However, a meta-analysis by Huncharek & Kopelnick (2001) showed that high total fat intake was associated with a 24% increased risk of development of ovarian cancer across eight observational studies. With these conflicting results, it is difficult to use cancer outcome as a determinant for the UL.

Thus, in relation to its potential influence on body weight and its cardiovascular complications, and in agreement with the US:Canadian DRI review, a UL of 35% energy as fat is recommended for the general population. This is approximately equivalent to the 60th centile of intakes reported in the latest Australian and New Zealand National Surveys for adults (ie at least 60% of subjects currently have intakes at or below 35% fat as energy).

In the Australian National Nutrition Survey, there were 40 people on the day of the survey who had fat intakes in the range of 34–36% energy who conformed to all of the EARs assessed. About 80% of these subjects were men with energy intakes ranging from 9,000–46,000 kJ/day on the day of survey. Only 8 subjects had energy intakes of less than 13,000 kJ/day and 12 had intakes over 19,000 kJ/day. Most of these subjects had saturated fats above 10% energy and protein intakes between 13% and 22% of energy. Added sugars were generally low. Dietary modelling showed it was possible to design diets that conformed to all the EARs within this range of per cent energy as fat but which also had acceptable levels of saturated fats. It is possible that a UL of 30% fat might bring additional benefits to some people, but the data delineating the benefits of 30% compared to 35% energy as fat are limited.

Saturated and trans fatty acids

Whilst the main focus of this section relates to the relative contribution of total fat to energy intake, it is widely acknowledged that the type of fat consumed is equally important in certain chronic disease conditions, notably heart disease.

There have been hundreds of studies of saturated fat intake in relation to serum cholesterol levels, including both total cholesterol and LDL cholesterol. Regression analyses have shown that for each 1% increase in energy from saturated fats, serum LDL cholesterol will increase by between 0.033 mmol/L and 0.045 mmol/L (Clarke et al 1997, Hegsted et al 1993, Mensink & Katan 1992). There is, in turn, a positive linear relationship between serum total and LDL cholesterol concentration and risk of CHD (Jousilahti et al 1998, Neaton & Wentworth 1992, Sorkin et al 1992, Stamler et al 1986, Weijenberger et al 1996). It has been estimated that a 10% reduction in serum cholesterol concentrations would reduce CHD mortality by 20% (Jousilahti et al 1998), although the studies on which these estimates were based used pharmaceutical, not dietary, interventions. Whether dietary intervention would bring about equivalent lowering of CHD mortality is unknown.
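The regression slope range can be used to sketch the predicted effect of a given dietary change (the 13% to 10% reduction below is an illustrative example, not a recommendation from the cited regressions):

```python
# Sketch: predicted change in serum LDL cholesterol from a change in
# saturated fat intake, using the cited regression slope range of
# roughly 0.033-0.045 mmol/L per 1% of energy from saturated fat.
def ldl_change(delta_sfa_pct_energy, slope_mmol_per_pct):
    return delta_sfa_pct_energy * slope_mmol_per_pct

# Illustration: lowering saturated fat from 13% to 10% of energy (a 3
# percentage-point reduction) would be predicted to lower LDL by about
# 0.10-0.14 mmol/L.
low_estimate = round(ldl_change(3, 0.033), 3)
high_estimate = round(ldl_change(3, 0.045), 3)
```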

Trans fatty acids (TFAs) are unsaturated fatty acids that have at least one double bond in the trans configuration. A trans double bond occurs between two carbon atoms that have changed geometry relative to the cis double bonds found most commonly in nature. The presence of a trans, relative to a cis, double bond results in acyl chains that can pack together more tightly, producing a fat with a higher melting point. TFAs are produced by partial hydrogenation of unsaturated oils during the manufacture of margarine and shortening but also occur naturally, in small amounts, in some ruminant animal foods. They have been shown to elevate LDL cholesterol and lower the beneficial HDL cholesterol (Aro et al 1997, Ascherio et al 1999, Judd et al 1994, 1998, Louheranta et al 1999, Muller et al 1998, Nestel et al 1992, Noakes & Clifton 1998, Seppanen-Laakso et al 1993, Sundram et al 1997). In a 20-year follow-up of a large cohort of women, trans fat intake was associated with an elevated risk of CHD (RR = 1.33, 95% CI: 1.07, 1.66; p(trend) = 0.01). The association between trans fat intake and CHD risk was most evident among women younger than 65 years (Oh et al 2005).

There is good evidence that on a weight for weight basis, TFAs have a more adverse effect on CVD risk compared to saturated fatty acids (Ascherio et al 1999). However, quantitatively, dietary intake of TFA is substantially less than saturated fatty acid intake. The adipose tissue level of TFAs predicts heart disease even after adjustment for total cholesterol. It has been proposed that TFAs may adversely affect endothelial function as intake was positively related to concentrations of inflammatory markers (Lopez-Garcia et al 2005). The WHO in its report on diet, nutrition and chronic disease (WHO 2003) recommended that TFAs comprise no more than 1% of total dietary energy.

Whilst any increase in saturated and trans fats is associated with detrimental effects on markers of CHD risk, it would be impossible to consume a diet with no saturated fats that would provide all the other nutrient needs. Taking into account the nature of the food supply and the needs for fat in the diet, a combined limit of 8–10% of energy from saturated and trans fats together would be prudent.
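For context, the 8–10% of energy limit can be translated into grams per day (the 8,700 kJ/day adult intake is an illustrative assumption; fat is taken at 37 kJ/g):

```python
# Sketch: grams per day of saturated plus trans fat corresponding to the
# suggested 8-10% of energy limit, at an assumed adult energy intake.
FAT_KJ_PER_G = 37
energy_kj = 8700  # illustrative daily energy intake, not a reference value

grams_at_8pct = round(0.08 * energy_kj / FAT_KJ_PER_G, 1)   # about 19 g
grams_at_10pct = round(0.10 * energy_kj / FAT_KJ_PER_G, 1)  # about 24 g
```

On these assumptions, the limit corresponds to roughly 19–24 g of combined saturated and trans fat per day; the gram figure scales with individual energy intake.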

n-3 and n-6 fatty acids

Some fatty acids are essential in the diet and also have potential effects on the aetiology of chronic disease. These include some of the polyunsaturated n-6 and n-3 fatty acids, such as linoleic acid (LA), α-linolenic acid (ALA) and the long chain omega-3s (DHA, EPA and DPA).

Recent findings in large prospective cohort studies appear to confirm the earlier controlled intervention trials carried out in hospital-based populations (Dayton & Pearce 1969, Turpeinen et al 1979) that polyunsaturated fatty acids, predominantly LA, are associated with reduced incidence and mortality from CHD. A 15-year follow-up of Finnish men found energy-adjusted consumption of LA to be linked to reduced cardiovascular mortality (RR = 0.39) (Laaksonen et al 2005).

In the 20-year follow-up of the Nurses' Health Study that included a total of 5,672 women and 1,766 cases of clinical CHD, the RR attributable to polyunsaturated fat consumption was 0.75 (highest versus lowest quintile of intakes; p < 0.001) (Oh et al 2005). In this same cohort, the RR was even lower in overweight younger women (<65 years) and in those women who developed Type 2 diabetes or had Type 2 diabetes initially; the dietary polyunsaturated to saturated ratio was associated with significantly lower cardiovascular mortality over 18 years (Tanasescu et al 2004). These data are supported by evidence that plasma LA concentrations are inversely correlated with clinical CHD (Kris-Etherton et al 2004).

The lower end of the range of recommended intake for these fatty acids is set at the AI for each fatty acid type. The upper bound of recommended intake was set for linoleic acid and for alpha-linolenic acid at the current 90th centile of intake in the community expressed as per cent energy, as human data about additional benefits in relation to chronic disease outcome are currently limited for levels much in excess of these limits, and these levels of intake do not appear to cause harm. For n-6 fatty acids there is also some evidence from human studies showing that enrichment of lipoproteins and cell membranes with n-6 PUFA contributes to an adverse pro-oxidant state (Abbey et al 1993, Berry et al 1991, Bonanome et al 1992, Louheranta et al 1996, Reavan et al 1991, 1993, 1994), suggesting caution in recommending levels above 10% of dietary energy.

For LC n-3 fatty acids, an SDT was set at the 90th centile of intake. In the last decade, there has been an exponential rise in publications on the health benefits of omega-3 PUFAs, particularly the longer chain omega-3s, EPA, DPA and DHA. Various expert groups have made consensus recommendations for consumption of ALA and/or the very long chain omega-3s, based on estimates of dietary requirement. Even though they may take account of the same body of published evidence, there is considerable variation between expert interpretations, consequent recommendations and their adoption by health authorities (Bahri et al 2002, BNF Task Force 1992, de-Deckere et al 1998, Department of Health 1994, FNB:IOM 2002, Health and Welfare Canada 1990, Health Council of the Netherlands 2001, Kris-Etherton et al 2002, Ministry of Health Labor and Welfare 1999, National Heart Foundation 1999, Nettleton 2003, NHMRC 1992, Nordic Council of Ministers 1996, Scientific Advisory Committee on Nutrition 2002, Simopoulos et al 1999, US FDA 2000, WHO 2003). It is apparent from the scientific literature that raising omega-3 intakes above current median levels (and thus above the AI) may afford a wide range of health benefits. The evidence is strongest for reduction of CVD risk by EPA and DHA (WHO 2003).

The US Food and Drug Administration (US FDA 2000), when considering whether to allow an omega-3 health claim related to CHD, undertook a thorough evaluation of existing evidence for cardiovascular benefits of increased EPA and DHA consumption in humans. The evidence comprised epidemiological studies of fish consumption and intervention trials with EPA- or DHA-rich fish oil supplements. While the former were typically representative of a normal population, the latter were undertaken in subjects with pre-existing CVD. Hence, although there was strong overall evidence of benefit, the FDA originally ruled that cardiovascular benefits of EPA and DHA had not been proven in a normal population. This limitation was expressed in the resultant health claim, which attributed decreased risk of CVD to consumption of fish but not specifically to its omega-3 content. Following recent revision, the claim now refers to omega-3 intake (US FDA 2003).

There is a lack of dose-response data relating EPA and DHA consumption to chronic disease benefit. However, it is becoming increasingly common to relate the outcomes of epidemiological studies to estimates of EPA and DHA intakes, or to plasma or erythrocyte EPA and DHA levels, in each sector of the population, rather than to fish intakes. The Nurses' Health Study followed about 80,000 healthy women for up to 14 years and found that those in the highest quintile of EPA and DHA intake (about 480 mg/day) had a significantly lower risk of both CHD and thrombotic stroke (Hu et al 2002, Iso et al 2001). There was also a significantly lower risk of CHD at the highest (1.4 g/day) versus lowest (0.7 g/day) ALA intake (Hu et al 1999). This is consistent with the earlier MRFIT trial (about 13,000 men followed for 10.5 years) in which the risks of both CHD and total CVD were significantly lower at high ALA (1.6 g/day) and EPA and DHA (660 mg/day) intakes (Dolecek 1992).

The US Physicians’ Health Study reported a reduction in sudden death in men consuming fish at least once weekly (90–160 mg EPA + DHA/day) (Albert et al 1998). Subsequent evaluation confirmed a tight inverse relationship between sudden death and blood EPA and DHA levels (Albert et al 2002). In contrast, the Health Professionals' Follow-Up Study reported no effect of EPA and DHA on CHD risk in men (Ascherio et al 1995). Re-analysis of this study, however, showed significant reduction of ischaemic stroke with increasing consumption of fish (He et al 2002).

Fish consumption has also been shown to reduce cardiovascular mortality in a healthy ageing population, with benefit confined to quintiles consuming at least 267 mg/day of EPA and DHA, whereas eating fish low in EPA and DHA gave no benefit (Mozaffarian et al 2003). The benefit correlated with increased plasma phospholipid EPA and DHA. The recent observation that heart rate is inversely correlated with both fish intake and erythrocyte DHA levels in about 10,000 healthy men (Dallongeville et al 2003) is consistent with an earlier study relating fish consumption and platelet DHA to heart rate variability (Christensen et al 1997) and with a case-control study equating increased fish intake (an extra 0.2 g omega-3/day) with increased erythrocyte EPA and DHA levels and a 50% reduction in risk of primary cardiac arrest (Siscovick et al 1995).

The major epidemiological trials are supported by a rapidly increasing number of intervention trials reporting benefits of increased EPA and DHA consumption on both hard end-points and surrogate biomarkers for a variety of health conditions ranging from CVD to inflammatory disease, behavioural disorders and cancer. The most significant of these have been intervention trials post-myocardial infarction (MI) such as GISSI-P (GISSI-Prevenzione Investigators 1999) and DART (Burr et al 1989) showing reductions of CHD and particularly sudden death with fish oil supplementation. In the DART study, however, longer-term follow-up showed that the early reduction in all-cause mortality observed in those given fish oil advice was followed by an increased risk over the next 3 years, leading to the conclusion that the advice had no clear effect on coronary or all-cause mortality. The risk of stroke death was also increased in the fish oil advice group – the overall unadjusted hazard was 2.03 (Ness et al 2002).

Although one might expect that the dose needed to demonstrate significant benefit in a clinical trial would exceed the threshold intake for long-term efficacy, a substantial reduction of sudden death was achieved in the GISSI-P trial with only 850 mg EPA + DHA/day. This dose also reduced plasma triglycerides, the most consistent index of CV response to EPA and DHA. A subsequent post-MI intervention trial using a 4-fold higher dose of the same supplement failed to show a benefit (Nilsen et al 2001). However, this may have been due to the high habitual EPA and DHA consumption of the Norwegian subjects. Fish oil supplementation has also been shown to regress coronary artery disease (von Schacky et al 1999) and to stabilise atherosclerotic plaques (Thies et al 2003), but attempts to demonstrate prevention of restenosis following angioplasty have been inconclusive.

There is increasing awareness of the role of inflammatory mechanisms in the development of arterial disease (Osterud & Bjorklid 2003). While there is substantive evidence that omega-3 supplementation can counteract chronic inflammatory disorders such as rheumatoid arthritis, intervention trials have indicated the need for intakes well in excess of dietary levels (Calder 2001). However, plasma TNF (tumour necrosis factor) receptor levels are inversely related to dietary EPA and DHA intake (Pischon et al 2003) and it has recently been shown that the inflammatory mediators TNFα and interleukin-6 (IL-6) may be suppressed at more modest EPA and DHA intakes of about 0.3–1.0 g/day (Trebble et al 2003, Wallace et al 2003).

It would be unnecessarily repetitive to include an exhaustive review and appraisal of the evidence for the added health benefits of increased dietary EPA and DHA consumption. It is already the subject of numerous critical reviews, several of which have been published subsequent to the FNB:IOM (2002) report. These include the AHA Statement (Kris-Etherton et al 2002), a WHO report (WHO 2003) and a report by the Scientific Advisory Committee on Nutrition (2002). There have been several Cochrane reviews on relationships between fish oil, or n-3 fats, and asthma (Woods et al 2002), schizophrenia (Joy et al 2003), cystic fibrosis (Beckles et al 2002) and CVD (Hooper et al 2004). The latter is incomplete and the others are inconclusive. On the other hand, the WHO report, which classifies the quality of currently available evidence according to the NHMRC’s preferred criteria, concludes that the relationship between EPA and DHA and cardiovascular disease is convincing (WHO 2003). In summary, there is increasing acceptance of evidence that, in populations with only modest intakes of EPA and DHA, increased dietary consumption could further improve health status.

Given this body of evidence and the modest intakes currently consumed in Australia and New Zealand, it would seem prudent to encourage increased consumption of LC n-3 fatty acids (DHA, EPA and DPA). Dietary intakes at the current 90th centile in the population would seem to provide potential benefit whilst being a safe level currently consumed by many Australians and New Zealanders. Rounding up to the nearest 10 mg, this equates to 610 mg/day for men and 430 mg/day for women. For men, the current 90th centile is close to the upper quintile from the MRFIT study, which was associated with significantly less CVD (Dolecek 1992) and, for women, the current 90th centile of intake is close to the level shown to produce benefit in the Nurses' Health Study (Iso et al 2001).

This level is also consistent with the recently revised NHMRC Dietary Guidelines for Australians (NHMRC 2003), which recommend increasing LC omega-3 fat intake to about 400 mg/day. In this context, a total intake of 0.2% of energy, or about 0.6 g/day for men and 0.4 g/day for women, is reasonable. It is also consistent with current National Heart Foundation advice (NHF 1999) to eat at least two fish meals per week (preferably oily fish), which is equivalent to about 430–570 mg/day.
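The arithmetic behind converting a percentage-of-energy target to grams per day can be sketched as follows. The reference energy intakes used here (roughly 11,000 kJ/day for men and 8,000 kJ/day for women) and the fat energy factor of 37 kJ/g are illustrative assumptions, not figures taken from this document.

```python
# Hedged sketch: converting a %-of-energy target for LC n-3 fats to g/day.
# Assumptions (illustrative only): reference energy intakes of ~11,000 kJ/day
# for men and ~8,000 kJ/day for women; fat yields ~37 kJ per gram.
KJ_PER_G_FAT = 37.0

def n3_target_g_per_day(energy_kj: float, pct_energy: float = 0.2) -> float:
    """Grams/day of LC n-3 fat that would supply pct_energy % of total energy."""
    return energy_kj * pct_energy / 100.0 / KJ_PER_G_FAT

men_g = n3_target_g_per_day(11_000)   # about 0.6 g/day
women_g = n3_target_g_per_day(8_000)  # about 0.4 g/day
```

On these assumed intakes the targets come out at roughly 0.6 g/day and 0.4 g/day, consistent with the figures quoted above.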

Carbohydrate

The AMDR for carbohydrate intake recommended by the FNB:IOM in adults and children is 45–65% of dietary energy intake (FNB:IOM 2002). The intakes were based on the IOM interpretation that there is an increased risk for CHD at high carbohydrate intakes (>65%) and increased risk of obesity with low carbohydrate, high fat intakes (<45%).

The FNB:IOM report did not consider in any great depth the nature of the carbohydrate when setting their AMDR. Added sugars were considered separately; otherwise the structure and polysaccharide composition of plant-based foods were not considered. Consideration of the nature of dietary carbohydrate is justified on the basis of associations with important chronic diseases such as Type 2 diabetes and CHD (Fung et al 2002, Jacobs et al 1998, Liu et al 2000, Meyer et al 2000). The occurrence of these diseases is more likely to be associated with the nature of the carbohydrate than with the percentage of daily energy intake provided by all carbohydrate-containing foods. The US:Canadian review used CHD and obesity as the limiting conditions when setting their upper and lower bounds of carbohydrate intake, respectively. However, it could be argued that consideration of aspects of optimal glucose metabolism, including the nature of dietary carbohydrate, may be of equal or greater relevance in the setting of an AMDR for carbohydrate. Insulin resistance and impaired glucose tolerance are major risk factors for Type 2 diabetes and CHD.

Lower bound

The evidence reviewed by the FNB:IOM suggests that energy density, rather than a particular mix of fuels, leads to obesity. Although a high fat diet will be energy dense, the fat component alone will not lead to obesity unless energy is chronically consumed in excess of energy expenditure. This argument also applies to carbohydrates. In many western countries, relative fat consumption (as a percentage of energy intake) has been declining over the last three decades (United States Department of Agriculture 1998). However, total fat consumption, expressed as grams per day, has either remained relatively constant or dropped only slightly since the mid-1980s. The apparent discrepancy can be explained by an increasing energy intake due to higher carbohydrate intake. In Australia, between the 1983 and 1995 National Dietary Surveys (Cook et al 2001), total carbohydrate intake in adults increased by some 16–17%. About two-thirds of this increase was due to increased starch intake and one-third to sugar (both natural and added) intake. In children, between 1985 and 1995, total carbohydrate intake increased by about 20%, with starches increasing 18% and sugars about 20%. During this decade alone, the mean intake of non-alcoholic beverages (soft drinks, fruit and vegetable juices and mineral waters) rose nearly 50% in boys and 30% in girls.

The type of carbohydrate can markedly influence energy density of the diet. For example, it is easier to increase the energy density of the diet by consuming energy dense drinks with added carbohydrates compared to cereal foods, vegetables or fruits containing carbohydrates, because the extra energy intake from the former source is poorly compensated (Mattes 1996). In an experiment comparing drinks containing either sucrose or artificial sweeteners consumed by overweight people for 10 weeks, increases in body weight and fat mass occurred in the sucrose group compared with the artificial sweetener group (Raben et al 2002) as there was little or no energy compensation through reduction in intake of other energy sources.

Diets typified as low energy density contain a large amount of bulk in the form of fresh fruits, vegetables, whole grains and pulses and minimal fat, whereas a high energy-dense diet generally contains low bulk foods with higher sucrose and fat contents (Duncan et al 1983). In a crossover design, ad libitum daily energy intake on the low energy-dense diet was one-half of that on the high energy-dense diet. In a review of the effect of differing carbohydrate and fat intakes on energy balance, it was concluded that the lower energy density of carbohydrate foods is, on average, likely to lead to a lower ad libitum energy intake than a higher fat diet (Blundell & Stubbs 1999). A dietary pattern typified as a ‘white bread’ diet (53.6% carbohydrate and 31.4% fat as a percentage of energy intake) was associated with a higher mean annual change in waist circumference than a ‘healthy’ diet (61.9% carbohydrate, 24.8% fat) in which the intake of white bread and refined grains was one-fifth as great (Newby et al 2003).
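The idea of energy density can be made concrete with the standard Atwater general factors (approximately 17 kJ/g for carbohydrate and protein, 37 kJ/g for fat). The example food compositions below are rough hypothetical approximations for illustration, not data from the studies cited.

```python
# Illustrative sketch of dietary energy density (kJ per gram of food),
# using the standard Atwater general factors.  The two example food
# compositions (grams of macronutrient per 100 g of food) are approximate
# and hypothetical.
ATWATER_KJ_PER_G = {"carbohydrate": 17.0, "protein": 17.0, "fat": 37.0}

def energy_density_kj_per_g(grams_per_100g: dict) -> float:
    """Energy density of a food from its macronutrient grams per 100 g."""
    kj_per_100g = sum(ATWATER_KJ_PER_G[m] * g for m, g in grams_per_100g.items())
    return kj_per_100g / 100.0

crisps = {"carbohydrate": 50.0, "protein": 6.0, "fat": 35.0}  # low bulk, high fat
apple = {"carbohydrate": 12.0, "protein": 0.3, "fat": 0.2}    # high bulk, high water
```

On these assumed compositions the fried snack is roughly ten times more energy dense than the fruit, which is the kind of contrast that drives ad libitum energy intake differences between bulky and low-bulk diets.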

The FNB:IOM (2002) publication suggests that the lower limit of energy intake from carbohydrate should be 45%, leaving 55% of energy to come from protein and fat, and possibly alcohol. Foods high in protein and fat are typically low in bulk and high in energy density, and energy intake from alcohol is poorly compensated. It is possible that the lower bound of 45% energy from carbohydrate may be too low to optimise the reductions in energy intake associated with low energy-dense, high bulk foods, but the evidence is limited at this stage. However, the considerations described indicate that the form of carbohydrate is of key importance. Thus, for intakes at the lower end of the carbohydrate intake range, most of the carbohydrate has to be sourced from low energy-dense sources such as wholegrain cereals, vegetables, legumes and fruits, which are mostly low glycaemic index foods.

An analysis of the NNS survey showed that just under half of the population had intakes at or above 45% of energy as carbohydrate on the day of the survey. Dietary modelling also showed that it is possible to construct diets at 45% energy from carbohydrate that conform to the EARs for the nutrients assessed. About half the subjects from the NNS who conformed to all of the EARs assessed had carbohydrate intakes at or above 45% of energy.
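The kind of check involved in such dietary modelling can be sketched by computing the share of energy supplied by carbohydrate from macronutrient intakes, again using the standard Atwater factors (about 17 kJ/g for carbohydrate and protein, 37 kJ/g for fat). The intake figures in the example are invented for illustration only.

```python
# Hedged sketch: does a day's intake meet the 45%-of-energy carbohydrate
# lower bound?  Atwater factors: ~17 kJ/g carbohydrate and protein,
# ~37 kJ/g fat; alcohol is ignored for simplicity.
def pct_energy_from_carbohydrate(carb_g: float, protein_g: float, fat_g: float) -> float:
    """Percentage of total energy supplied by carbohydrate."""
    carb_kj = 17.0 * carb_g
    total_kj = carb_kj + 17.0 * protein_g + 37.0 * fat_g
    return 100.0 * carb_kj / total_kj

# Invented example intake: 300 g carbohydrate, 100 g protein, 80 g fat.
share = pct_energy_from_carbohydrate(300, 100, 80)  # about 52% of energy
meets_lower_bound = share >= 45.0
```

On this invented intake, carbohydrate supplies just over half of total energy, so the 45% lower bound is met.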

Upper bound

The rationale behind a high carbohydrate intake posing an increased risk for CHD is a worsening of the lipid profile (lower HDL and/or higher triglycerides) when comparing high and low carbohydrate diets. This effect is seen in some of the studies reviewed by the FNB:IOM (2002), with the effect being most pronounced when monounsaturated fatty acids formed a high proportion of the fat intake (Garg et al 1994, Grundy et al 1988). However, a high carbohydrate diet usually lowers total and LDL cholesterol concentrations relative to a high fat diet and, depending on the nature of the carbohydrate, improvements in the LDL:HDL ratio have been found with no raising of triglycerides compared with high fat diets (Turley et al 1998, Vidon et al 2001). It is difficult to judge the relevance of diet-induced blood lipid changes on chronic disease because there are no clinical trials comparing the effect of a high carbohydrate diet with a high fat diet on coronary events (Sacks & Katan 2002). Even against the background of raised triglycerides whilst on high carbohydrate diets, flow-mediated vasodilation and LDL particle size did not differ from those on higher fat diets (de Roos et al 2001, Kasim-Karakas et al 1997).

Contrary to some of the studies discussed in the FNB:IOM DRIs review indicating that high carbohydrate diets may lower HDL or adversely affect triglycerides, there is some evidence that a high carbohydrate diet rich in complex carbohydrates derived from fruit, vegetables, grains and legumes may improve certain risk factors for heart disease. Further evidence of the importance of the nature of the carbohydrate comes from a study by Marckmann et al (2000), which showed that a high carbohydrate, high sucrose diet raised triglycerides compared with a high fat diet, whereas a high carbohydrate, low sucrose diet was associated with lower triglycerides. In the DASH trial, triglyceride concentrations were lowered in people with initially high concentrations after partial replacement of carbohydrates from a ‘typical American’ diet with fruit and vegetables (Obarzanek et al 2001). A meta-analysis of the effect of non-soya pulses on blood lipids found that pulse consumption was associated with improved blood lipids, including lower triglycerides and higher HDL cholesterol concentrations (Anderson & Major 2002). A change from a 70% carbohydrate diet to a 45% carbohydrate diet in South African prisoners resulted in a rise in serum triglycerides when the additional fat was butter or partially-hydrogenated oil and no change when sunflower seed oil was used (Antonis & Bersohn 1961). A switch back to a 70% carbohydrate diet resulted in a transient rise in triglycerides for 4–6 weeks followed by a gradual decline back to baseline levels. Unfortunately, the nature of the carbohydrate portion of the diet was not well described. However, a diet high in unrefined foods that provided about 68% of energy as carbohydrate lowered total cholesterol without changing triglycerides and improved fasting glucose concentrations, insulin sensitivity and glucose disposal (Fukagawa et al 1990).

It is clear that the nature of the fat and the carbohydrate content of the diet affect blood lipid profiles and glucose metabolism. Given these considerations, it is recommended that the upper bound of carbohydrate intake should be set at that remaining after the obligatory needs for fat and protein are met. In practice, using this approach and given the lower limit of 15% energy set for protein and 20% for fat, the upper bound would be 65%, the same as that recommended by the US:Canadian review, albeit arrived at using a somewhat different approach. The major difference between the two sets of recommendations lies in the emphasis placed in the Australian/New Zealand recommendation on the importance of the source of carbohydrate. Intakes of carbohydrate as high as 65% of energy or more from energy-dense, high glycaemic index sources may be detrimental to overall health. Data from the Third National Health and Nutrition Examination Survey (NHANES III) suggest that a high carbohydrate diet (>60% of energy intake) is associated with an elevated risk of metabolic syndrome in men (Park et al 2003). Unfortunately, there was no breakdown of the data by carbohydrate source that would have enabled an examination of the association between the metabolic syndrome and the nature of the carbohydrate. Using the same database, Yang and colleagues found that the odds ratio for elevated serum C-peptide concentrations was reduced across quintiles of carbohydrate intake. Adjusting for total and added sugar intake strengthened the inverse association in men, suggesting that the nature of the carbohydrate is important in the relationship between carbohydrate intake and elevated C-peptide concentrations (Yang et al 2003).
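The residual approach to the upper bound is simple arithmetic and can be stated explicitly; the protein and fat lower bounds are the percentages given in the text.

```python
# Sketch of the residual approach: the carbohydrate upper bound is what
# remains of total dietary energy after the lower bounds for protein
# (15% of energy) and fat (20% of energy) are met.
PROTEIN_LOWER_PCT = 15
FAT_LOWER_PCT = 20

carb_upper_pct = 100 - PROTEIN_LOWER_PCT - FAT_LOWER_PCT  # 65% of energy
```

This reproduces the 65% upper bound, the same figure the US:Canadian review reached by a different route.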

Presently, dietary recommendations from various countries separate the intakes of sucrose and other added sugars from total carbohydrate intake. There is no consensus as to how much can be included in a healthy diet. Evidence for a role of sucrose and other energy-containing sweeteners in adverse health conditions has been reviewed by the FNB:IOM (FNB:IOM 2002). These areas include behaviour, plasma lipids, CHD, obesity, nutrient density, physical activity, cancer, insulin sensitivity and Type 2 diabetes. Studies of the relationship between added sugars and the various categories listed above are ongoing. The FNB:IOM did not discuss a possible relationship between added sugar-sweetened drinks and bone health in children and adults through the avoidance of more nutrient-dense drinks. Studies of familial conditioning suggest that maternal milk consumption predicts the trade-off between milk and soft drink consumption in the diets of young girls (Fisher et al 2000). Consumption of sweetened soft drinks was associated with a lower consumption of milk and calcium in Spanish children (Rodriguez-Artalejo et al 2003). Women with low milk intake during childhood and adolescence have less bone mass in adulthood and a greater risk of fracture (Kalkwarf et al 2003). In another study, high fruit and vegetable intake was associated with higher bone mineral density compared with high intakes of candy (Tucker et al 2002).

The role of added sugars in the aetiology of disease and dental caries has been reviewed in some detail in the WHO report on Diet and Chronic Diseases (WHO 2003). The WHO, together with a number of countries such as the UK and Germany, recommended no more than 10% of energy from added sugars, whilst the FNB:IOM document set the limit at 25% of energy. Dental caries is often identified as the limiting factor in terms of an upper intake of cariogenic sweeteners, even in an era of fluoride exposure. There is no reason to suspect that the cariogenicity of sucrose and other sugars differs according to an individual's energy intake. Thus, the dietary intake of sucrose and other cariogenic sugars might best be expressed as an absolute intake (grams per day) rather than as a proportion of energy intake. Indeed, the form and frequency of consumption also seem to be key indicators of adverse cariogenic outcome. The UL is likely to be lower in children with primary dentition than it is for adults. The possible effect of sucrose and high fructose corn syrups in the aetiology of other diseases needs a more thorough review. These sweeteners cannot be treated as just another carbohydrate, because the fructose moiety imparts its own metabolic effect, associated with elevated blood triglycerides and impaired glucose tolerance (Vrana & Fabry 1983).

Finally, the impact of sucrose intake on nutrient adequacy may differ between the US and Australia and New Zealand due to differing fortification policies. An example is folate, the intake of which declined strongly as added sugar intake increased in Australian adults (Baghurst et al 1992). This relationship is likely to be less pronounced in the US as certain cereal-based sugary foods such as cakes, biscuits and snack bars are made with folate-fortified flour. Of those who conformed to all of the EARs assessed in the NNS survey, 60% had added sugar intakes at or below 10% energy on the day of the survey and a further 23% had intakes between 11 and 15% of energy.

In summary, one of the key issues in relation to the AMDR recommendations for carbohydrate is that ‘carbohydrate’ is not a homogeneous entity. Many epidemiological and dietary intervention studies refer to ‘high carbohydrate’ or ‘low carbohydrate’ diets with little or no description of the nature of the carbohydrate. Apart from considerations related to simple or added sugars, food structure, carbohydrate source and processing can all affect the physiological effects of carbohydrates and the amounts that can be consumed to optimise overall nutrient status and reduce chronic disease risk.