September 2004 doc.: IEEE802.11-04/1166r0

IEEE P802.11
Wireless LANs

Minutes for the Task Group T September 2004 Session

Date:

September 13 - 16, 2004

Authors:

Areg Alimian, Roger Skidmore
e-Mail: ,


Monday, September 13, 2004

4:00 PM – 6:00 PM

  1. Chair calls the meeting to order at 4:00 PM – Charles Wright
  2. Chair comments
  3. Mike Goettemoeller appointed secretary in place of Tom Alexander, who is absent
  4. Chair asks for objections to the Portland minutes, none noted, Portland minutes accepted.
  5. Chair asks for objections to agenda, none noted, agenda accepted.
  6. Chair has made a call for presentations
  7. 11-04/1009 – “Framework, Usages, Metrics Proposal for TGT”, Pratik Mehta
  8. 11-04/1017 – “Comments on Wireless Performance & Prediction Metrics”, Mark Kobayashi
  9. 11-04/xxxx – “Systems supporting devices under test”, Mike G.
  10. Chair asks for approval of modifications to the agenda, no objections noted, revised agenda accepted.
  11. Discussion of timeline going forward, progress since Portland – IEEE NesCom approval to form TGT, technical presentations, re-present ideas.
  12. Chair requested to comment on teleconferences and status of group as a whole – agreed that metrics need to be derived first before going forward.
  13. Chair calls for a motion to recess until 4:00 PM tomorrow; Roger so moves, Lars seconds. Approved by acclamation.

Tuesday, September 14, 2004

4:00 PM – 6:00 PM

  1. Chair calls the meeting to order at 4:00 PM – Charles Wright
  2. Roger Skidmore appointed secretary
  3. Technical Presentation – Framework, Usages, Metrics Proposal for TGT – 11-04/1009r1 – Pratik Mehta
  4. Comment – Requesting an example of a submetric as described on slide 11.
  5. Comment – Presenter response is that packet loss would be an example submetric of the throughput metric.
  6. Comment – On slide 12, directionality could be an important factor in non-data oriented applications as well.
  7. Comment – Chair calls for discussion.
  8. Comment – Engineers tend to emphasize the confidence level of wireless measurements.
  9. Comment – Presenter responds that confidence levels and similar things of a statistical nature are part of the methodology used to measure a given metric/submetric. (An illustrative confidence-interval sketch follows this session’s minutes.)
  10. Question – Will taking an application-centric approach better allow the group to yield results in a reasonable amount of time?
  11. Comment – Presenter refers to slide 10, responding that it is better to map where the group wants to be as part of the process.
  12. Comment – Testing a device for the home may be very different from testing a device for an enterprise even for the same application.
  13. Comment – Presenter responds that defining the usage and environment (per slide 10) helps partition the problem.
  14. Question – How do you know that you are actually testing an “average” device rather than a “best-of-the-batch” or “worst-of-the-batch” type device?
  15. Comment – Presenter responds that some sort of calibration, normalizing, or random sampling procedure may be required.
  16. Comment – Being sensitive to the amount of variability in the tested response of a device is important.
  17. Question – Any thought to common sources of interference and how those affect certain receive architectures?
  18. Comment – Presenter responds that no specific thoughts have been centered on that topic as yet.
  19. Chair – As in interferers unduly affecting a test?
  20. Comment – Uses in a home may involve myriad devices including non-802.11 devices.
  21. Chair – Topic begins to border on a co-existence issue.
  22. Comment – Certain radios have more robust interference rejection.
  23. Question – Does this mean that multiple environments need to be introduced for individual metrics?
  24. Comment – Presenter responds that it is possible that metrics and submetrics will be tested differently across different use cases.
  25. Comment – Prediction folks should be able to identify what types of interference scenarios should be analyzed.
  26. Question – Is this considering a single device under test or a linked set of devices?
  27. Comment – Presenter responds that typically single devices have been considered, but the group needs to take this as a point to discuss.
  28. Comment – Should be looking at packet errors as a metric tested against multipath and other physical layer impairments. Can deduce throughput from packet error rate.
  29. Comment – Presenter responds that the user expectation/view is typically in terms of throughput, but that packet loss could/should be a metric listed on slide 13. Slide 13 was not intended to list every possible metric.
  30. Question – Final step in process on slide 10 is prediction. When will there be a focus on prediction?
  31. Comment – Presenter responds that predictions will have a better chance of being accurate with measurement data. Group should certainly tackle predictions.
  32. Chair – The group is no longer predicting performance.
  33. Question – Was prediction cut from scope?
  34. Chair – Yes. It is within scope “to enable prediction”. The development of prediction models and prediction algorithms does not fall within the scope of the group.
  35. Question – Beginning to hear talk of device classification. Is it within the scope of the group to discuss device classes?
  36. Chair – Device classification or qualifying devices is outside of scope. Ranking or qualifying devices is not the goal of the group. The output of the group could be used to enable device ratings, but the goal of the group is not to create ratings.
  37. Comment – Believe that group is on track to enabling measurements, but do not have a feeling on what the group can do to enable predictions. Request for presentation on predictions and what is needed to enable them.
  38. Technical Presentation – Comments on Wireless Performance & Prediction Metrics – 11-04/1017r0 – Mark Kobayashi
  39. Comment – On slide 10, producing something that simply says system A is “better” than system B is worthless. Need something quantifiable. For example, quantifying device range.
  40. Chair – Comment that range may vary depending on the type of test.
  41. Comment – Presenter comments that the group could develop categories of particular tests (e.g., bad channel models) that may help simulate actual conditions.
  42. Chair – In other words, want to map “user experience” into some form of repeatable test. What do people think about the channel models developed for .11n?
  43. Comment – You’ll get differing responses on that.
  44. Chair – Need to solicit input on channel models from the larger 802.11 group.
  45. Comment – 802.19 utilizes channel models for coexistence.
  46. Comment – Interference and multipath can be tested separately from channel models. Certain “barebones” metrics need to be tested and dealt with separately.
  47. Question – Does the material presented in 1017r0 conflict with that in 1009r1?
  48. Comment – Presenter indicates that 1017r0 and 1009r1 are very complementary in terms of the basic framework. Presenter prefers to look at usage models 1, 2, and 3 and produce a set of common metrics across all usage models, the goal being to identify similarities in the tests performed for each usage model.
  49. Comment – Is a timeline being proposed?
  50. Comment – Presenter indicates no specific timeline is being suggested, but believes work needs to get underway.
  51. Comment – Need to limit time spent discussing metrics in order to help move the work flow along.
  52. Comment – Considering common metrics across platforms/usage models will be beneficial.
  53. Comment – The presentations 1017r0 and 1009r1 appear to have distinct approaches. 1009r1 appears to address “real” usage models and seems to cover a large audience. The specific test environment for a given submetric test (per 1009r1) can characterize the submetric/metric very well. If the group were to only do the Cabled Environment (for example), and then go one metric at a time, it would be somewhat limiting in terms of the usability of the group’s output.
  54. Comment – Presenter indicates that sets of identical/near identical tests need not be repeated for different usage models.
  55. Question – Would it be beneficial to mimic a measured channel using a waveform generator with devices in an anechoic chamber in order to achieve repeatability in a controlled environment? (An illustrative channel-emulation sketch follows this session’s minutes.)
  56. Comment – Presenter indicates that the suggestion is possibly a good way of performing a test.
  57. Comment – There will be a flow of data building up from semiconductor companies, to manufacturers, to system integrators, to IT managers. Each one higher up the chain is relying on the decisions/results of those below them. TGT should try to get everyone along the chain talking the same language.
  58. Comment – 1017r0 and 1009r1 are actually very different from one perspective. What is needed first is a test from the point of view of the user. Then afterwards, when that particular use case is finished, begin working backward to deal with other usage cases. Commenter is concerned that time will be wasted trying to analyze too many potentially very different usage models looking for commonalities rather than tackling one particular set of usage models, finishing them, and then proceeding.
  59. Meeting in recess at 6:00 until 7:30.
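
Illustrative sketch related to items 8, 9, and 14 to 16 above (confidence levels, device sampling, and variability). Assuming a handful of repeated throughput trials, one conventional way to report such a measurement is shown below; the trial values, the 95% confidence target, and the Student-t interval are assumptions for illustration only, not anything agreed by the group.

    # Illustrative only: mean and 95% confidence interval for a repeated
    # throughput measurement, plus a simple repeatability figure.
    # The trial values below are invented for illustration.
    import math
    import statistics

    throughput_mbps = [22.1, 21.7, 23.0, 22.4, 21.9, 22.6]  # repeated trials (assumed data)

    n = len(throughput_mbps)
    mean = statistics.mean(throughput_mbps)
    stdev = statistics.stdev(throughput_mbps)   # sample standard deviation
    t_95 = 2.571                                # Student-t, 95% two-sided, n-1 = 5 degrees of freedom
    half_width = t_95 * stdev / math.sqrt(n)

    print(f"mean = {mean:.2f} Mbit/s, 95% CI = +/- {half_width:.2f} Mbit/s")
    print(f"coefficient of variation = {100.0 * stdev / mean:.1f}%")

The coefficient of variation printed at the end is one simple way to express the degree of repeatability also raised in the evening session.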
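
Illustrative sketch related to item 55 above (mimicking a measured channel with a waveform generator in a controlled environment). The fragment below imposes an assumed three-tap tapped-delay-line profile on a placeholder baseband waveform; the sample rate, tap delays, and tap powers are invented for illustration and are not a TGT or 802.11n channel model.

    # Illustrative only: apply an assumed tapped-delay-line multipath profile to
    # a baseband waveform, roughly what a channel-emulating waveform generator
    # would play out toward a device in an anechoic chamber.
    import numpy as np

    fs = 40e6                          # assumed baseband sample rate (40 Msample/s)
    delays_ns = [0, 50, 120]           # assumed excess delays of three taps
    powers_db = [0.0, -3.0, -9.0]      # assumed average tap powers

    # Build a discrete impulse response, one complex Rayleigh draw per tap,
    # with each tap rounded to the nearest sample.
    h = np.zeros(int(round(max(delays_ns) * 1e-9 * fs)) + 1, dtype=complex)
    for d_ns, p_db in zip(delays_ns, powers_db):
        amp = 10 ** (p_db / 20.0)
        tap = amp * (np.random.randn() + 1j * np.random.randn()) / np.sqrt(2)
        h[int(round(d_ns * 1e-9 * fs))] += tap

    tx = np.exp(2j * np.pi * 1e6 * np.arange(4096) / fs)   # placeholder 1 MHz tone as transmit waveform
    rx = np.convolve(tx, h)                                # faded waveform to play out
    print(f"{np.count_nonzero(h)} taps, total channel power = {np.sum(np.abs(h) ** 2):.3f}")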


Tuesday, September 14, 2004

7:30 PM – 9:30 PM

  1. Chair calls meeting to order at 7:40 PM
  2. Chair – Open forum for discussion on previous presentations – brainstorming
  3. Comment – Need to avoid discussions of metrics without having an end goal or at least a particular use case in mind.
  4. Comment – What about a test configuration?
  5. Comment – Can’t discuss a test configuration without knowing what metric you’re searching for. Suggest considering metrics at different levels geared towards a particular audience (e.g., system integrators, semiconductor companies).
  6. Comment – The group should not specify a particular vendor’s test equipment (i.e., a “golden node”). For example, two devices under test could simply be set up side by side, but that may not be a valid test.
  7. Comment – Difficult to completely mimic all test conditions without some form of standardization of test equipment. Is a faulty test due to the device under test or because the test equipment or conditions were ever so slightly off?
  8. Comment – Possible test flaws could be listed as test preconditions (e.g., could be listed as limitations of the test systems) or test “gotchas”. Then if a test were not 100% repeatable, it would be due to one or more of the possible test preconditions.
  9. Comment – For throughput and forwarding rate, we know what the theoretical maximums are. For latency and jitter, there are no minimums other than zero. So preconditions could also include bounds. (An illustrative bound calculation follows this session’s minutes.)
  10. Comment – Varying degrees of testing could also be tied to the repeatability of the test: the more repeatable a test needs to be, the more closely the test preconditions and setup would need to match the required, precise test conditions.
  11. Question – Is non-repeatable/instantaneous measurement testing infringing on .11k?
  12. Comment – .11k could serve as a vehicle for fulfilling the mechanism or procedure used in the test.
  13. Comment – There is a big difference between a company with heavy investment in test equipment/labs and significant experience in equipment analysis and an IT manager who is evaluating a product offering. Those two categories of “consumers” for TGT’s output are very different in their needs.
  14. Comment – While there is no way to simulate in a lab every possible scenario an IT manager may experience, it is possible to specify sets of tests an IT manager could take the time to set up and perform. The sets of tests may have varying complexity and/or range of error based on how closely the IT manager can duplicate the test conditions/preconditions.
  15. Comment – Do we know how current IT managers qualify equipment and network configuration?
  16. Comment – It varies widely. The wireless knowledge of IT managers factors heavily into this.
  17. Question – Is this group going to focus on how to deploy a network?
  18. Chair – No, this is outside of the scope of this group. This group is focused on measuring device performance under certain test conditions and then utilizing that information to make performance predictions later.
  19. Comment – At the end of the day, we need to measure metrics.
  20. Comment – The types of tests that you need to run will vary depending on who is doing the testing. Semiconductor companies need different tests than IT managers. Need categories of tests for categories of users.
  21. Comment – Example PHY layer metrics:
      - Packet error rate (PER)
      - Factors inherent in the device (a recommended practice would specify bounds on the following for particular PER tests):
          - Error vector magnitude (EVM)
          - Transmit power
          - Receive sensitivity
  22. Comment – What is the most representative controlled test for 802.11 wireless performance?
      - It would exercise the entire device in one test.
      - It would focus only on performance aspects affected by the MAC, PHY, or antenna.
      - It would be a “macro” measurement, not a “micro” measurement (e.g., focus on “communication system” level metrics – forwarding rate, loss, delay, jitter, antenna gain, FER vs. input signal level).
  23. Comment – Why use layer 2 traffic for testing?
      - TGT must stay in the domain of 802.11.
      - Using true or simulated application traffic obscures the effects of the wireless link; TCP is a common offender.
      - Specifying application-level traffic makes it hard to decide where to stop (e.g., which favorite video or voice standard do we exclude?).
  24. Comment – The “big four” layer 2 measurements are forwarding rate, MSDU loss, packet delay, and jitter, all made under various conditions (e.g., with and without multipath, with and without adjacent channel interferers, and at varying signal levels). (An illustrative computation sketch follows this session’s minutes.)
  25. Question – Is multipath still an issue with OFDM?
  26. Comment – Yes. OFDM may be resistant to multipath, but papers presented here have shown that multipath still needs to be considered.
  27. Comment – All of the metrics being discussed for wireless, along with procedures for measuring them, have already been defined for wired networks. TGT should focus on layer 2 metrics and leave the upper layers (e.g., the application layer) alone.
  28. Meeting in recess at 9:35 PM until 4:00 PM Thursday afternoon.
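
Illustrative sketch related to item 9 above (theoretical maximums as bounds for preconditions). The back-of-the-envelope calculation below bounds the forwarding rate of a single error-free 802.11a data/ACK exchange using nominal 802.11a timing constants; the 1500-byte MSDU, the 24 Mbit/s ACK rate, the absence of RTS/CTS, and the average post-DIFS backoff are assumptions for illustration, not figures endorsed by the group.

    # Illustrative only: upper bound on 802.11a forwarding rate for one
    # best-effort data frame + ACK exchange (single station, no errors,
    # no RTS/CTS, average post-DIFS backoff).  Constants are nominal 802.11a values.
    import math

    SLOT, SIFS = 9e-6, 16e-6
    DIFS = SIFS + 2 * SLOT                  # 34 us
    PLCP = 20e-6                            # 16 us preamble + 4 us SIGNAL field
    SYMBOL = 4e-6
    CWMIN = 15

    def ppdu_time(psdu_bytes, rate_mbps):
        """Airtime of one OFDM PPDU: PLCP overhead + data symbols (SERVICE + PSDU + tail)."""
        bits = 16 + 8 * psdu_bytes + 6
        n_dbps = rate_mbps * 4              # data bits per 4 us OFDM symbol
        return PLCP + SYMBOL * math.ceil(bits / n_dbps)

    msdu = 1500                             # assumed MSDU size
    data = ppdu_time(msdu + 28, 54)         # 24-byte MAC header + 4-byte FCS (assumed), 54 Mbit/s
    ack = ppdu_time(14, 24)                 # ACK at the 24 Mbit/s basic rate (assumed)
    backoff = (CWMIN / 2) * SLOT
    cycle = DIFS + backoff + data + SIFS + ack

    print(f"per-frame airtime  : {cycle * 1e6:.1f} us")
    print(f"max forwarding rate: {1 / cycle:.0f} frames/s")
    print(f"max MSDU throughput: {msdu * 8 / cycle / 1e6:.1f} Mbit/s")

With these assumptions the script prints a ceiling of roughly 30.5 Mbit/s of MSDU throughput, which is the kind of bound the comment suggests listing as a precondition.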
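
Illustrative sketch related to items 21 and 24 above (PER and the “big four” layer 2 measurements). Given matched transmit and receive timestamp records for each MSDU, the figures could be computed along the lines below; the record format, the sample values, and the simple consecutive-delay definition of jitter are assumptions for illustration only.

    # Illustrative only: computing loss, forwarding rate, delay, and jitter
    # from matched transmit/receive timestamp records (invented sample data).
    import statistics

    # (sequence number, tx time [s], rx time [s] or None if the MSDU was lost)
    trace = [
        (1, 0.000, 0.0012),
        (2, 0.001, 0.0024),
        (3, 0.002, None),       # lost frame
        (4, 0.003, 0.0041),
        (5, 0.004, 0.0056),
    ]

    offered = len(trace)
    delivered = [(tx, rx) for _, tx, rx in trace if rx is not None]
    duration = trace[-1][1] - trace[0][1]   # offered-load interval, approximated by tx timestamp span

    loss_ratio = 1 - len(delivered) / offered
    forwarding_rate = len(delivered) / duration if duration > 0 else float("nan")
    delays = [rx - tx for tx, rx in delivered]
    avg_delay = statistics.mean(delays)
    # Jitter taken here as the mean delay variation between consecutive delivered frames.
    jitter = statistics.mean(abs(b - a) for a, b in zip(delays, delays[1:]))

    print(f"loss ratio      : {loss_ratio:.1%}")
    print(f"forwarding rate : {forwarding_rate:.0f} frames/s")
    print(f"average delay   : {avg_delay * 1e3:.2f} ms")
    print(f"mean jitter     : {jitter * 1e3:.2f} ms")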

Thursday, September 16, 2004

4:00 PM – 6:30 PM

1.  Chair calls the meeting to order at 4:00 PM

2.  Announcement regarding file server being temporarily unavailable for uploading documents

3.  New items added to today’s agenda:

  1. Technical Presentation -- 11-04/1131r1, “A Metrics and Methodology Starting Point for TGT”, Charles Wright, Mike Goettemoeller, Shravan Surineni, Areg Alimian
  2. Motions or straw polls arising from 11-04/1009r1 and 11-04/1131r1
  3. Technical Presentation -- 11-04/1106r0, “Systems Supporting Device Under Test”, Mike Goettemoeller
  4. Technical Presentation -- 11-04/1132r0, “Enabling Prediction of Performance”, Roger Skidmore

4.  Amendments to the agenda accepted by unanimous consent