Transcript and Presenter's Notes

Title: Central Air Conditioner Test Procedure Public Workshop:


1
Central Air Conditioner Test Procedure Public Workshop:
A Technical Discussion on New Defaults for the Cooling Mode Cyclic Degradation Coefficient, CD
2
(No Transcript)
3
Key Issue: Balance the two main effects from lowering the Cooling
Mode Cyclic Degradation Coefficient, CD:
  • Reduced testing burden
  • Higher SEER ratings for some models

4
NIST's Goal
Provide the Technical Basis for Choosing New
Defaults for the Cooling Mode Cyclic Degradation Coefficient, CD
5
OUTLINE
  • Background
  • Data
  • Influential Hardware Features
  • Possible Sets of Default Values
  • Manufacturers' Rating Decisions and ARI
    Certification Requirement
  • Predicted Reduction in Test Burden
  • Impact on Raising SEER Ratings

6
OUTLINE
  • Background
  • What is CD?
  • How does CD affect SEER?
  • How burdensome is the testing?
  • Data
  • Influential Hardware Features
  • Possible Sets of Default Values
  • Manufacturers' Rating Decisions and ARI
    Certification Requirement
  • Predicted Reduction in Test Burden
  • Impact on Raising SEER Ratings

7
What is CD?
  • Used in quantifying part-load performance
  • CD is the slope (linear fit) of the Part Load Factor (PLF)
    versus Cooling Load Factor (CLF) curve
  • Used to calculate SEER
  • For single-speed systems, use a short-cut
    calculation
  • For two-capacity and variable-speed systems, use
    a bin calculation

An exponential fit was considered but not
adopted in the 1988 revision.
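
A minimal statement of the linear relationship described above (a sketch of the standard form; the test procedure states it in its own notation):

    \( \mathrm{PLF}(\mathrm{CLF}) = 1 - C_D\,(1 - \mathrm{CLF}) \)

so CD is the slope of the linear PLF-versus-CLF fit, and a smaller CD means a smaller cyclic penalty (PLF closer to 1).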
8
[Plot: Part Load Factor (EERcyc / EERss) versus Cooling Load Factor (-)]
9
  • CD has a bigger effect on single-speed units
    than on two-capacity and variable-speed units
  • SEER gain for single-speed systems:
    CD of 0.25 versus:     0.20  0.15  0.10  0.05  0.00
    SEER improvement (%):   2.9   5.7   8.6  11.4  14.3
  • SEER gain for modulating systems depends on the
    degree of unloading (example below: 47% unloading
    at 82°F):
    CD of 0.25 versus:     0.20  0.15  0.10  0.05  0.00
    SEER improvement (%):   0.9   1.9   2.7   3.6   4.4
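
The single-speed percentages above are consistent with the short-cut relation SEER = EERB x PLF(50%) = EERB x (1 - 0.5 CD). A small sketch that reproduces that row (Python, assuming that relation):

    # Single-speed SEER gain from lowering the default CD, assuming the
    # short-cut relation SEER = EER_B * (1 - 0.5 * CD).
    baseline_cd = 0.25
    for cd in (0.20, 0.15, 0.10, 0.05, 0.00):
        gain = (1 - 0.5 * cd) / (1 - 0.5 * baseline_cd) - 1
        print(f"CD = {cd:.2f}: SEER improvement = {100 * gain:.1f}%")
    # Prints 2.9, 5.7, 8.6, 11.4, and 14.3 percent, matching the table above.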

10
How Burdensome is the Testing?
  • CD is determined from two dry-coil tests
  • One steady-state test (the C Test)
  • One cyclic test (the D Test)
  • . . . takes approximately 40% of all laboratory
    testing time (York written comments)
  • C and D testing simply confirms what is already
    known and wastes valuable time (Carrier oral
    statement)
  • Extra test time 6 to 12 hours
  • Extra set-up and tear down
  • Time to dry indoor room
  • C Test longer if using outdoor air enthalpy as
    check
  • Depends on the number of cycles run for the D Test

11
How Burdensome is the Testing?
  • ARI Certification Process and Costs
  • ITS sets up for cyclic testing (DOE damper
    boxes)
  • The wet-coil A and B tests are conducted
  • Check: Is the SEER computed with the default CD ≥ 0.95 x the
    certified SEER?
  • Manufacturer may elect for ITS not to do the C and D
    tests if the unit passes using the
    default CD
  • Manufacturer saves $750 if the C and D tests are not
    conducted

12
OUTLINE
  • Background
  • Data
  • Data Sources Used by NIST
  • Is the Sample Biased?
  • Influential Hardware Features
  • Possible Sets of Default Values
  • Manufacturers' Rating Decisions and ARI
    Certification Requirement
  • Predicted Reduction in Test Burden
  • Impact on Raising SEER Ratings

13
Hardware Features That May Affect CD
  • Type of expansion device
  • With or without indoor fan OFF delay
  • Air conditioner versus heat pump
  • With or without a liquid line solenoid valve
  • Size/capacity of unit
  • Type of compressor (scroll versus reciprocating)
  • Internal volume of indoor/outdoor coils
  • Parasitic power
  • With or without an accumulator
  • Quantity of refrigerant charge
  • Type of refrigerant

Included in final data set
Not Included in final data set
14
Factors Included in ARI-Provided Data Set
  • Lab measured (at ITS)
  • Rated cooling capacity
  • Unit Type
  • Packaged (blower coil)
  • Split blower coil
  • Split coil only
  • Rated SEER
  • Indoor fan delay (Y/N)
  • Liquid line solenoid (Y/N)
  • Type of expansion Device
  • Fixed orifice
  • TXV bleed
  • TXV non-bleed
  • AC or HP
  • Compressor Speed (1 or 2)
  • Niche products

Off Cycle Equalization (Y/N)
Affected Final Data Set used by NIST
15
Data Sources Used by NIST
  • ARI Certification Data from Year 2000
  • Basis for evaluating alternative defaults
  • CEC Appliance Certification Database
  • Used in evaluating the distribution of CD
  • Used in evaluating the distribution of the SEER
    decimal
  • Large sample: 6787 entries (7218 for the SEER
    decimal)
  • ARI Primenet Database
  • Used in evaluating the distribution of SEER
    decimal
  • Very large sample: 55,935 system manufacturer
    ratings and 25,736 independent coil manufacturer ratings

16
Data Sources Used by NIST
  • ARI Certification Data from Year 2000
  • All data from ITS lab testing
  • ARI-provided data set: 339 units
  • Data set used by NIST: 322 units (excludes
    two-capacity and niche products)
  • 648 units tested as part of Year 2000
    certification program
  • Assume the total sample of 648 units is
    representative of the overall population.
  • Question Are the 322 units representative of the
    overall population?

17
Data Set Used by NIST: 322 of 339 Units
[Plot: measured CD for the selected units]
18
Normalized Histograms: ARI and CEC Data
[Histogram: frequency of occurrence (%) versus CD value for each data set]
The difference suggests the possibility that one data set is slightly biased.
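
As a rough illustration of how that suspicion could be checked (this test is not part of the NIST analysis; the variable names are hypothetical), a two-sample Kolmogorov-Smirnov test could be applied to the two sets of CD values:

    from scipy.stats import ks_2samp

    # ari_cd and cec_cd would hold the CD values from the two databases
    # (hypothetical variable names; the data are not reproduced here).
    result = ks_2samp(ari_cd, cec_cd)
    print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
    # A small p-value would support the suspicion that the samples differ;
    # a large p-value would not.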
19
OUTLINE
  • Background
  • Data
  • Influential Hardware Features
  • Industry Recommendations
  • NIST Analysis
  • Possible Sets of Default Values
  • Manufacturers' Rating Decisions and ARI
    Certification Requirement
  • Predicted Reduction in Test Burden
  • Impact on Raising SEER Ratings

20
Industry Recommendations
  • Written comments from ARI and Trane
  • Both support 3 categories: A, B, and C
  • A: equalizing without indoor fan delay
  • B: non-equalizing without indoor fan delay, and
    equalizing with indoor fan delay
  • C: non-equalizing with indoor fan delay
  • Both acknowledge significant data scatter
    (standard deviations from 0.036 to 0.055)
  • OEMs have data showing scrolls provide a CD approximately
    0.015 lower than recips (Copeland)
21
NIST Analysis
22
Exploratory Data Analysis Screen to Find Primary
Factors
  • Graphical data analysis
  • Scatter Plots (Is a factor important?)
  • Mean Plots (Is a factor important?)
  • Box Plots (Is a factor important?)
  • DEX Median/Mean Plots (Most important factors?)
  • Multi and Subset Plotting (Interactions?)
  • Block Plot (Is a factor robustly important?)
  • Classical data analysis
  • Regression (Capacity)
  • ANOVA (one-way and two-way)
  • t-test
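
A graphical screening pass like the one outlined above can be mimicked with standard tooling. The sketch below is illustrative only; the DataFrame and column names ("CD", "exp_valve", "tdr", "equip_type") are hypothetical stand-ins for the ARI data set:

    # Simple stand-in for a DEX mean plot: mean measured CD at each level
    # of each candidate factor, plotted side by side.
    import pandas as pd
    import matplotlib.pyplot as plt

    def factor_mean_plot(df: pd.DataFrame, factors: list, response: str = "CD") -> None:
        fig, axes = plt.subplots(1, len(factors), sharey=True, squeeze=False,
                                 figsize=(4 * len(factors), 3))
        for ax, factor in zip(axes[0], factors):
            df.groupby(factor)[response].mean().plot(ax=ax, marker="o")
            ax.set_title(factor)
            ax.set_ylabel(f"mean {response}")
        plt.tight_layout()
        plt.show()

    # factor_mean_plot(df, ["exp_valve", "tdr", "equip_type"])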

23
Design of Experiment (DEX) Mean Plot
[Plot: mean CD (scale 0 to 0.15) by factor: coil-only / blower coil, nominal capacity, TDR (N / Y), expansion valve, AC / HP, split / packaged]
24
Box Plots
[Box plots of measured CD by type of expansion valve (FO, TXV-B, TXV-NB), TDR (with / without), nominal capacity (1.5 to 5 tons), coil-only / blower coil, split / single package, and AC / HP]
25
Design of Experiment (DEX) Mean Plot
[Plot: mean CD (scale 0 to 0.15) by factor: nominal capacity, TDR (N / Y), expansion valve, AC / HP, equalizing / non-equalizing, ARI equipment category, ARI-modified category, equipment type]
26
Box Plots
[Box plots of measured CD by type of expansion valve (FO, TXV-B, TXV-NB), TDR (with / without), nominal capacity (1.5 to 5 tons), AC / HP, ARI equipment category (A, B, C), ARI-modified category (A, B1, B2, C), equalizing / non-equalizing, and type of equipment (split coil-only, split blower coil, packaged)]
27
Primary Factors
  • Leading candidates
  • Industry-recommended equipment categories (A, B, C)
  • Provides option for more resolution
  • Category B1: non-equalizing without indoor fan
    delay
  • Category B2: equalizing with indoor fan delay
  • Relatively generic: allows multiple options for
    qualifying as equalizing versus
    non-equalizing
  • Rated capacity
  • Further comparison of the leading candidates
    deferred until the next section

28
OUTLINE
  • Background
  • Data
  • Influential Hardware Features
  • Possible Sets of Default Values
  • Industry Recommendations
  • Based on Percentiles
  • Manufacturers' Rating Decisions and ARI
    Certification Requirement
  • Predicted Reduction in Test Burden
  • Impact on Raising SEER Ratings

29
Industry Recommendations
30
ARI's Recommended Defaults
  • Assign defaults based on the mean values
  • Rationale
  • The current default was set over two decades ago
    and the technology has advanced so drastically
    that the default value is now virtually
    useless.
  • Scatter in the data is expected given the
    manufacturing and test tolerances.
  • The overall net effect on ratings is close to
    zero, as would be expected using the mean
    values.
  • . . . with the mean values it is possible to
    eliminate as much as 50% of the DOE testing.
  • The time and the money saved with fewer tests
    will help industry bring new products to the
    market at a faster rate.

31
Trane's Recommended Defaults
  • Assign defaults based on a one-sided upper
    confidence (tolerance) interval of 90%
  • Considered AC and HP separately
  • For simplification, we would favor a single set
    . . ., the higher of the AC or HP default
    for each equipment category (A, B, and C)

32
Trane's Recommended Defaults
  • Rationale
  • Data variability is extremely high . . . using
    the means would significantly degrade consumer
    confidence in the SEER ratings.
  • . . . the confidence will be 90% that tested
    values will be less than or equal to the
    defaults.
  • If lower defaults are adopted, we feel
    they should not apply to equipment with
    continually energized compressor sump heaters.

33
Industry Recommended Defaults
Proposed defaults based on the same data
set: 334 units tested in 2000 at ITS as part of
the ARI Certification program.
34
Consideration of Different Default Levels
  • Chose percentiles over (upper) tolerance
    intervals
  • Avoid concerns over departure from a normal
    distribution
  • Results are choppier but most accurately reflect the
    data
  • 9 percentile levels considered: 50% to 99%
  • More insight on equipment categories
  • Ideally want rank ordering of equipment
    categories not to change if considering one
    percentile level versus another (stratified data
    versus spaghetti data)
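
Computing candidate levels this way is a one-liner once the measured values are grouped by category. A sketch (the example data and the exact nine levels shown are hypothetical; the deck only states that nine levels between 50% and 99% were considered):

    # Candidate CD defaults as empirical percentiles of the measured data.
    import numpy as np

    # Hypothetical example data; in practice these would be the measured CD
    # values for each equipment category from the ARI data set.
    cd_by_category = {
        "A": np.random.default_rng(0).normal(0.10, 0.04, 100),
        "B1": np.random.default_rng(1).normal(0.12, 0.05, 100),
    }
    levels = [50, 60, 70, 75, 80, 85, 90, 95, 99]   # illustrative set of nine levels
    for category, values in cd_by_category.items():
        print(category, np.round(np.percentile(values, levels), 3))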

35
Percentile Trends: Industry-Recommended Equipment Categories
[Plot: CD versus percentile for each equipment category]
36
Percentile Trends: Industry-Modified Primary Factors
[Plot: CD versus percentile for each factor]
37
Percentile Trends: Rated Cooling Capacity
[Plot: CD versus percentile for rated capacities from 1.5 to 5.0 tons]
38
Equipment Category
39
Equipment Category
40
Pros and Cons of Using One, Three, or Four
Default Categories
41
Pros and Cons of Having 1 Default Category
  • Pros
  • Simplest approach
  • Maintains the original/present test procedure
    practice
  • Easiest to implement at the independent test lab;
    don't have to figure out what features a
    particular unit has
  • Cons
  • Doesn't provide any means for differentiating
    between hardware features
  • Greatest benefit goes to those units that do the
    most poorly on the C and D Tests

42
Pros and Cons of Having 3 Default Categories
  • Pros
  • Provides differentiation between hardware
    features
  • The credit from adding a component that results
    in non-equalization (e.g., non-bleed TXV) is the
    same as adding a time-delay off relay (i.e., the
    2 groups within Category B are treated the same)
  • Would be preferred by stakeholders who seek
    greater sales of non-bleed TXV units
  • If wanting to achieve Category B status, there is
    no difference in the incentive for adding a
    TDR versus using a non-bleed TXV

43
Pros and Cons of Having 3 Default Categories
  • Cons
  • Fan delay is statistically more important than
    equalization / non-equalization, which supports
    breaking Category B into its 2 subgroups
  • Independent lab has to do additional work to verify
    the unit's default equipment category

44
Pros and Cons of Having 4 Default Categories
  • Pros
  • Provides even more differentiation between
    hardware features
  • Breaking Category B into its two subgroups is
    statistically justified
  • Cons
  • Gives a slightly greater incentive to add a TDR
    versus adding a component that results in
    non-equalization, like a non-bleed TXV
  • Independent lab has to do additional work to verify
    the unit's default equipment category

45
Where Are We? (Inhale . . . Exhale)
  • Considered
  • Which equipment features to base the new CD
    defaults on
  • Possible numbers for the new defaults
  • Time To (Momentarily) Change Gears
  • Put the raw data aside for a moment
  • Try to model rating tendencies of manufacturers
    and the Pass/Fail criterion of ARI certification
    testing

46
OUTLINE
  • Background
  • Data
  • Influential Hardware Features
  • Possible Sets of Default Values
  • Manufacturers' Rating Decisions and ARI
    Certification Requirement
  • 3 rating approaches plus ARI certification
  • Doesn't depend on the data!
  • Predicted Reduction in Test Burden
  • Impact on Raising SEER Ratings

47
Consider 4 Rating Scenarios
  • C and D Tests Conducted to Meet DOE Rating
    Requirements
  • Considered 3 Scenarios (Y1, Y2, and Y3)
  • Summarized on next slide
  • C and D Tests Conducted to Meet ARI Certification
    Requirements → One Scenario (Z)
  • Manufacturer projected not to test if the unit
    passes the ARI certification check using the default CD

Note: Scenario Z is a maximum-impact case
because some manufacturers will choose to test at
ITS even if their unit passes using the default CD
48
Rating Scenarios Y1, Y2, and Y3
Manufacturer projected to test only if testing would raise the rated SEER:
  • Y1. to the next .50 or .00 increment
    (middle-of-the-road; believed to be the most representative
    case and the best prediction of how much testing will be reduced)
  • Y2. to the next whole number (.00)
    (more liberal; highest predicted impact)
  • Y3. at all, even if only by 0.01
    (most conservative; lowest predicted impact)
49
ARI-Provided Dataset: 324 Entries
(Excluding Apparent Lab-Measured SEER Values)
[Histogram: frequency of occurrence (%) versus decimal portion of the rated SEER]
50
CEC Database: 7218 Entries
[Histogram: frequency of occurrence (%) versus decimal portion of the rated SEER]
51
ARI PrimeNet: 10 System Manufacturers (55,935 AC/HP Entries)
[Histogram: frequency of occurrence (%) versus decimal portion of the rated SEER]
52
ARI PrimeNet: 4 Independent Coil Manufacturers (25,736 AC/HP Entries)
[Histogram: frequency of occurrence (%) versus decimal portion of the rated SEER]
53
Projected Test / No-Test Conditions
  • Generate SEER grid: EERB versus CD
  • EERB from 10.0 to 17.0 in 0.1 increments
  • CD from 0.25 to 0.0 in 0.01 increments
  • For each combination of possible default CD level
    and rating scenario (Y1, Y2, Y3, Z)
  • Consider each SEER block on the grid
  • Assign to each block Yes/No regarding testing
  • Determine the percentage of no-test blocks for each
    default level and five different EERB ranges
  • 10.0 to 17.0
  • 10.0 to 15.0 → best match for Equip. Category B1
  • 10.0 to 13.0 → best match for Equip. Category A
  • 11.0 to 15.0 → best match for Equip. Categories
    B2 and C
  • 11.0 to 13.0
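
The grid bookkeeping above can be sketched in a few lines. The decision rules below are a simplified reading of scenarios Y1, Y2, and Y3 (scenario Z, the ARI certification check, is omitted because it also depends on the rated SEER), and the single-speed short-cut SEER = EERB x (1 - 0.5 CD) is assumed:

    # Fraction of "no test" blocks on the EERB x CD grid for one default CD level.
    import numpy as np

    def no_test_fraction(default_cd, scenario):
        eer_b = np.arange(10.0, 17.0 + 1e-9, 0.1)
        cd = np.arange(0.0, 0.25 + 1e-9, 0.01)
        EER, CD = np.meshgrid(eer_b, cd)
        seer_default = EER * (1 - 0.5 * default_cd)   # SEER claimed with the default CD
        seer_measured = EER * (1 - 0.5 * CD)          # SEER if the C and D tests were run
        if scenario == "Y1":    # test only if SEER reaches the next .50 or .00
            tests = np.floor(seer_measured / 0.5) > np.floor(seer_default / 0.5)
        elif scenario == "Y2":  # test only if SEER reaches the next whole number
            tests = np.floor(seer_measured) > np.floor(seer_default)
        else:                   # "Y3": test whenever the gain is at least 0.01
            tests = (seer_measured - seer_default) >= 0.01
        return 1.0 - tests.mean()

    print(no_test_fraction(0.12, "Y1"))   # projected no-test share for a 0.12 default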

54
  
Links to SEER Grid Examples
  • Present Situation: Default CD = 0.25
  • Scenario Y1
  • Scenario Y2
  • Scenario Z
  • Theoretical Case 1: Default CD = 0.15
  • Scenario Y1
  • Scenario Y2
  • Scenario Z
  • Theoretical Case 2: Default CD = 0.075
  • Scenario Y1
  • Scenario Y2
  • Scenario Z

55
End Product: Estimates of the Projected Reduction
in C and D Tests Associated with Different
Default CD Levels
  • For the 4 Rating Scenarios
  • For the 5 EERB Ranges
  • 3 Examples
  • Default CD = 0.20
  • Default CD = 0.12
  • Default CD = 0.05

56
Default CD = 0.20
[Bar chart: percent reduction in testing for rating scenarios Y1, Y2, Y3, and Z]
57
Default CD = 0.12
[Bar chart: percent reduction in testing for rating scenarios Y1 (.00, .50), Y2 (.00), Y3 (gain ≥ 0.01), and Z (ARI Cert.)]
58
Default CD = 0.05
[Bar chart: percent reduction in testing for rating scenarios Y1 (.00, .50), Y2 (.00), Y3 (gain ≥ 0.01), and Z (ARI Cert.)]
59
OUTLINE
  • Background
  • Data
  • Influential Hardware Features
  • Possible Sets of Default Values
  • Manufacturers' Rating Decisions and ARI
    Certification Requirement
  • Predicted Reduction in Test Burden
  • Now overlay the data
  • Tabulate all the options
  • Impact on Raising SEER Ratings

60
Now Overlay the CD Distribution Associated With
Each Equipment Category
  • Distributions for Each Equipment Category (A, B1,
    B2, and C)
  • Estimated scatter plot used to select the EERB range
    (EERB estimated from the measured CD and rated
    SEER)
  • Histogram Used to Project Reduction in Testing
  • Projected Testing Reduction for each set of
    Defaults and each Rating Scenario
  • Percent reduction for each equipment category
  • Percent reduction for all equipment categories

61
Equipment Category C
[Scatter plot: estimated EERB versus measured CD, with the selected EERB interval marked]
62
Histogram: Equipment Category A
[Histogram: frequency of occurrence versus measured CD]
63
Histogram: Equipment Categories B1 and B2
[Histogram: frequency of occurrence versus measured CD]
64
Histogram: Equipment Category C
[Histogram: frequency of occurrence versus measured CD]
65
Overlay Rating Scenarios for Default CD = 0.20
With the Equipment Category A Histogram
[Plot: percent reduction in testing for scenarios Y1, Y2, Y3, and Z overlaid on the Category A frequency-of-occurrence histogram]
66
(No Transcript)
67
(No Transcript)
68
(No Transcript)
69
(No Transcript)
70
(No Transcript)
71
(No Transcript)
72
(No Transcript)
73
(No Transcript)
74
OUTLINE
  • Background
  • Data
  • Influential Hardware Features
  • Possible Sets of Default Values
  • Manufacturers' Rating Decisions and ARI
    Certification Requirement
  • Predicted Reduction in Test Burden
  • Impact on Raising SEER Ratings
  • General effect
  • Relative to the NAECA minimums

75
Slides for this section will be added at a later
date.
76
Suggested Refinements
  • Seek information that will help determine whether
    the CD data set is representative of the
    overall population or biased.
  • Seek information from ARI/ITS on how many
    units pass using the existing 0.25 default CD
    but still have the C and D tests conducted.
  • Others?

77
Appendix
78
Calculating CD and Using It To Determine
Cyclic Efficiency
  • Calculate CD based on 2 tests
  • Apply CD to estimate performance at multiple
    operating conditions

79
Comparison of Two PLF Curves Corresponding to Two
Different Cyclic Degradation Coefficients
[Plot: Part Load Factor (EERcyc / EERss) versus Cooling Load Factor (-)]
80
Calculating SEER for Single-Speed Systems
  • Early on, it was found that
  • SEER (short-cut method) ≈ SEER (bin method)
  • Short-cut method adopted for single-speed
    systems
  • Short-cut method parameters
  • Only need EERB and the Part Load Factor (PLF)
    corresponding to a 50% load factor
  • Use CD to get PLF(50%)
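
In equation form, the short-cut relation these bullets describe can be written as follows (a sketch of the standard single-speed form; the official test procedure states it with its own notation):

    \( \mathrm{SEER} \approx \mathrm{PLF}(0.5) \times \mathrm{EER}_B, \qquad \mathrm{PLF}(0.5) = 1 - 0.5\,C_D \)

so a unit rated with the current default CD of 0.25 carries a PLF(50%) of 0.875, which is where the SEER improvement percentages on slide 9 come from.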

81
SEER For Single-Speed Systems:
The Single Point Used From the PLF vs CLF Plot
[Plot: Part Load Factor (EERcyc / EERss) versus Cooling Load Factor (-), with the single point at CLF = 0.5 marked]
82
Calculating SEER for Single-Speed Systems
  • Equation Derivation,

83
Equipment Categories