Maria Papadopouli: In many performance analysis studies, researchers make assumptions or hypotheses without trying to validate them. In the context of wireless networks, we can discuss such hypotheses and the types of measurements and methodology that are required for their validation.

Constantin Dovrolis: What I found interesting in all the papers presented is that every paper identified some assumptions or hypotheses that people made in the past that may not have been true, which makes us think about how well we understand the things we use every day. Are there specific assumptions we should try to validate, especially ones that have been taken for granted…

Mark Portoles-Comeras: In line with my talk: examining previous territories, we have seen people who want to gather real measurement data from experiments… but who have made no effort to check that the tools are really working as they expect. They do not survey previous work.

So we found that the assumptions people make can perhaps be summarized in three statements:

The experiments carried out depend on the background of the person doing them. People who come from experimentation in wired networks and move into wireless may say something like: OK, my Ethernet box supports these workloads, but the wireless cards do not support them. My Ethernet card supported gigabit traffic, but my wireless card does not. Also, I may want to measure throughput or delay, and my Ethernet box supports that but my wireless card does not.

People who come from wireless may say something like: we observed losses or collisions due to propagation errors. This is not always true, because there may exist some hardware misbehaviors.

And the third statement: theory people assume that commercial products conform to the standards and interoperate correctly, which is not always true.

These statements summarize assumptions about the functionality of the machines used.

Maria Papadopouli: Let us discuss monitoring tools further, from the software perspective…

Mark Portoles-Comeras: Bandwidth estimation tools do not always generate or collect packets the way you expect: they may not use the same TCP version to collect packets, and back-to-back packets are not generated the same way by all applications. One should really be careful with the application. If an application claims constant-bit-rate (CBR) traffic generation, one should check the validity of the software: is the traffic really at a constant bit rate?

Spend some time characterizing the functionality of both the software and hardware and see if they really do what you expect.
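A minimal sketch of such a software check, in Python (the nominal gap, tolerance, and the synthetic "buggy generator" timestamps below are illustrative assumptions, not measurements from the discussion):

    import statistics

    # Sketch: given packet send timestamps (in seconds) from a tool that
    # claims CBR generation, check whether the inter-packet gaps really
    # are constant.
    def check_cbr(timestamps, nominal_gap, tolerance=0.05):
        """Return gap statistics and whether the stream looks CBR-like."""
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        mean_gap = statistics.mean(gaps)
        jitter = statistics.pstdev(gaps)
        cv = jitter / mean_gap  # coefficient of variation: ~0 for true CBR
        looks_cbr = (abs(mean_gap - nominal_gap) / nominal_gap < tolerance
                     and cv < tolerance)
        return mean_gap, jitter, cv, looks_cbr

    # Hypothetical example: packets nominally 10 ms apart, but every 10th
    # gap collapses to 2 ms, as a buggy generator might produce.
    ts, t = [], 0.0
    for i in range(1000):
        t += 0.010 if i % 10 else 0.002
        ts.append(t)
    print(check_cbr(ts, nominal_gap=0.010))  # large cv: not really CBR

In practice the timestamps would come from a packet trace captured alongside the generator, but the point is the same: measure the tool before trusting it.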

Christina Fragouli: Not being from this area: working on higher layers, I need a good, simple model (of wireless channels, delays, how all these processes behave). Is there a study that relates the complexity of a model to its accuracy?

Constantin Dovrolis: It is possible that no simple model exists (e.g., for a wireless link). In physics or biology, for example, there are sometimes complex systems for which no simple model exists. So the community should investigate whether there is a good, simple, parsimonious model for wireless channels.

Stefan Kaprinski: There are two different types of modeling.

There is modeling of physical phenomena, where you can get very exact models and show that they work well. But here randomness is built in and not easy to understand, and you have to draw a line where you know that enough is enough. You can keep making your model more complex. What is the criterion for deciding whether the added complexity affects what you are trying to figure out? This is common in econometrics and biometrics. A lot of the solutions used in those sciences have not yet been applied to computer science.

Charalambos Charalambous: The issue is that modeling, whether things work, and performance are interlinked. What people in the control community want is to build a reliable system with reliable performance, using a simplified model which is not the exact model but will work for a whole family of systems. The more you know about the system, the less uncertainty you have and the better the performance. As people observe the behavior of the system, they try to shrink the uncertainty thanks to the model, and this is adaptive robustness. Having said that, philosophically this is thanks to George James; this is how I came to understand modeling from my discussions with him. For example, planes run on control models which are robust and perform well.

Stefan Kaprinski: You are talking about systems that have complex models, but where people know in theory how the system works. Is that correct?

Charalambos Charalambous: Not exactly. People may know that the input-output relationship of a system contains a very large number of equations… to capture everything. But this is too complex. So they want something that needs a lower number of dimensions but will work on real systems. In other words, they use an approximation of the true system. Then they do something called robustness from the min-max point of view: if you try to minimize something, nature works against you, so the uncertainty is maximized. This is conservative, and you lose performance. The less the uncertainty, the better the performance. An example of simplified models that work are those used in industry in PID controllers. If you have a complex system you may need a complicated controller, but simple things work, e.g., with circuitry.
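In standard notation (our gloss of the min-max idea; the cost $J$, control $u$, and uncertainty set $\mathcal{W}$ are generic symbols, not taken from the discussion):

    % Worst-case (min-max) robust design: pick the controller u that
    % minimizes the cost J under the worst admissible disturbance w.
    \[
      u^{\star} \;=\; \arg\min_{u \in \mathcal{U}} \; \max_{w \in \mathcal{W}} \; J(u, w)
    \]
    % Shrinking the uncertainty set (learning more about the system)
    % can only improve the guaranteed worst-case cost:
    \[
      \mathcal{W}' \subseteq \mathcal{W}
      \;\Longrightarrow\;
      \min_{u}\max_{w \in \mathcal{W}'} J(u,w) \;\le\; \min_{u}\max_{w \in \mathcal{W}} J(u,w)
    \]

This also formalizes why less uncertainty means better performance: the inner maximum is taken over a smaller set.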

Stefan Kaprinski: There are cases where you have a difficult problem you cannot solve, but there are simplified models to approximate it. And there are cases where you have no simplified model; you don't even have a complex model to simplify… you do not know how things interact.

Charalambos Charalambous: Then look at things as input-output. Try to describe the input-output operation.

Maria Papadopouli: As Stefan mentioned, sensitivity analysis is important for understanding the impact of different parameters on the performance of a system. It also guides the modeling effort, namely the selection of the concepts and parameters that need to be modeled accurately. For example, say we are trying to understand an admission control protocol. In that case, packet-level details or models may not be important. Instead, client- and flow-level data may be necessary for understanding an admission control mechanism. This observation may simplify things: we can ignore packet-level details and focus on modeling client- or flow-level data (e.g., how flows or clients arrive at a certain AP or wireless infrastructure, flow sizes, etc.). Furthermore, this aggregation at the flow or client level can be amenable to statistical analysis…
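A minimal sketch of what flow-level modeling of admission control might look like, assuming Poisson flow arrivals and exponentially distributed flow durations (all rates, demands, and the capacity below are illustrative assumptions):

    import random

    # Illustrative parameters, not measured values.
    ARRIVAL_RATE = 2.0    # flows per second arriving at the AP
    MEAN_DURATION = 5.0   # mean flow holding time, seconds
    FLOW_DEMAND = 1.0     # Mb/s demanded by each flow
    CAPACITY = 10.0       # Mb/s the AP is willing to admit in total

    def simulate(horizon=10_000.0, seed=1):
        """Admit a flow only if the aggregate demand of the flows
        still active would stay within CAPACITY."""
        rng = random.Random(seed)
        t, admitted, blocked = 0.0, 0, 0
        departures = []  # departure times of currently active flows
        while t < horizon:
            t += rng.expovariate(ARRIVAL_RATE)             # next flow arrival
            departures = [d for d in departures if d > t]  # drop finished flows
            if (len(departures) + 1) * FLOW_DEMAND <= CAPACITY:
                departures.append(t + rng.expovariate(1.0 / MEAN_DURATION))
                admitted += 1
            else:
                blocked += 1
        return admitted, blocked

    admitted, blocked = simulate()
    print(f"blocking probability ~ {blocked / (admitted + blocked):.3f}")

Note that no packet-level detail appears anywhere: the whole model lives at the flow level, which is exactly the simplification being discussed.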

Charalambos Charalambous: This is exactly how our people do it: they try to find this sensitivity function, for example of the output to a disturbance on the input. If you minimize your maximum level of sensitivity, whatever you get will be robust.
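In classical feedback-control notation (our gloss, with a generic plant $P$ and controller $C$; not stated in the discussion):

    % Sensitivity function of a standard feedback loop: how strongly a
    % disturbance at the input shows up at the output.
    \[
      S(j\omega) \;=\; \frac{1}{1 + P(j\omega)\,C(j\omega)}
    \]
    % "Minimize your maximum level of sensitivity": minimize the peak
    % of |S| over all frequencies, an H-infinity style objective.
    \[
      \min_{C} \; \|S\|_{\infty} \;=\; \min_{C} \; \sup_{\omega} \, |S(j\omega)|
    \]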

York Witmark: The really important thing (I am also not from this community) is papers like the one Stefan presented: if you have a choice among different implemented models, you just take the most accurate one, simulate with it, and that is the best you can do. The question is: how far should you really go to make it accurate? It would be helpful to have papers that tell you which things matter for improving performance (of a protocol, e.g.) and which may not. Researchers want to show the performance of their protocols, not the validity of the model.

Leandros Tasioulas: I'll change the subject. I was asked to present my position on what algorithms people could expect from the measurement community (I am not part of the measurement community). Usually the tool I use to guide my intuition is analysis; I rarely resort to simulations and never use measurements. So I have some issues and questions:

Q: If there are examples from the measurement community of open questions that were answered by measurements, please review and elaborate on them.

Some points that may interest people who work on performance and algorithms concern mobility: how it can be captured, how difficult that is, and what measurements should be done. The channel is a different matter: characterizing the MIMO channel under mobility is different from system-level measurements. What channel attributes should be measured to do this characterization?

I was part of a measurement test: at a conference I attended, they distributed wireless devices to measure proximity and contact between people. So it was a set of measurements carried out by wireless devices, and they were used to examine, from a social dynamics perspective, how human communication evolves.

Vehicular mobility is an area that seems very interesting: there are a couple of papers studying the effect of mobility on traffic characteristics from the channel perspective, especially how traffic changes as the vehicle passes certain checkpoints.

Constantin Dovrolis: To answer your first question, “you would like to know whether there were success stories where measurements provided different directions and helped us understand some things better…” (most of my work is not on wireless but on wired networks):

As you know, in the '70s and '80s there was a school of thought that approached evaluation with Poisson and Markovian processes. So if you were talking about traffic, people would expect it to be a sequence of packets, one coming after another without correlation. In the '90s, measurement people found structure and patterns (e.g., asymptotic behavior, self-similarity) that the earlier Markovian models smoothed out at large time scales. This completely changed the way we understood the performance of the network: the way you considered loss rates, queuing delays, and buffering. For example, for a long time people said that the amount of buffering you need should be limited if you operate your link at moderate utilization. When you do measurements or simulations with long-range-dependent traffic, you find that you need significant buffering; with small buffers you get huge losses, and Markovian models failed to predict this.
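A compact way to state the contrast (a standard asymptotic from the self-similar traffic literature, with Hurst parameter $H$; our addition, not a formula given in the discussion):

    % Tail of the stationary queue length Q at buffer threshold b.
    % Markovian (short-range-dependent) input: exponential decay.
    \[
      P(Q > b) \;\approx\; e^{-\delta b}, \qquad \delta > 0.
    \]
    % Long-range-dependent input (e.g., fractional Brownian traffic)
    % with Hurst parameter 1/2 < H < 1: a Weibull-like, heavier tail,
    \[
      P(Q > b) \;\approx\; e^{-\gamma\, b^{\,2(1-H)}}, \qquad \gamma > 0,
    \]
    % so the buffer needed to reach a target overflow probability
    % grows much faster than the Markovian model predicts.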

Performance evaluation in networks, especially when it comes to queuing, completely changed in the '90s thanks to the work of Willinger, Evan, and others.

Another success story is how we think of topology. For a long time, if you asked somebody, “OK, draw me a picture of the network,” he would come up with something that looks like a random graph, with links generated by the Waxman model, etc. Then people measured networks and found that these things are scale-free; they have a very well-defined structure, with hubs and nodes that connect to hubs, and with properties that random graphs don't even come close to, such as clustering and small-world phenomena. Again, this completely changed the way we think about routing…
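The contrast shows up already in the degree distributions (standard results, added here for reference):

    % Erdos-Renyi random graph with mean degree <k>: degrees concentrate
    % around the mean (Poisson), so large hubs are vanishingly rare.
    \[
      P(k) \;=\; e^{-\langle k \rangle}\,\frac{\langle k \rangle^{k}}{k!}
    \]
    % Scale-free (measured) topologies: a power-law tail, so a few
    % high-degree hubs are expected rather than essentially impossible.
    \[
      P(k) \;\propto\; k^{-\gamma}, \qquad \text{typically } 2 < \gamma < 3.
    \]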

And this is not only true of networking research. Throughout science, whenever we measure something our understanding deepens, and not only with natural things but with artifacts. We are surprised even by artifacts, things that we engineer: after a certain level of complexity, these artifacts start having properties that nobody designed, properties that emerge from the more complex interactions between the different elements. Throughout science you see the same thing. With measurements we are always surprised, and this provides modeling with a lot of interesting questions.

Maria Papadopouli: There are fewer modeling studies in wireless networks than in wired ones. However, there have been studies modeling different parameters, such as the density of nodes or APs in certain environments, the traffic demand, and the wireless infrastructure. Other studies focus on the asymmetry of wireless links, or evaluate the performance of short routes with large per-hop delays vs. long routes with short per-hop delays in mesh networks. In general, wireless networks can be quite complex and more vulnerable than wired networks. Furthermore, they exhibit various types of transient phenomena that are quite difficult to capture.

Mark Portoles-Comeras: So does the user change from time to time?

Maria Papadopouli: The spatiotemporal dimension and the complexity at different timescales make the problem harder. More and more networks are being deployed, and wireless traffic grows rapidly. It is important to select the right spatio-temporal granularity for your study and also to understand the scaling properties of the performance of the system.

Stefan Kaprinski: The relevance of my work rests on the hypothesis that wireless networks will not be able to be over-provisioned. This kind of analysis is not really interesting in wired networks, because who cares? There is enough available bandwidth in the backbone network. That case does not apply in wireless. Will we ever figure out how to do this in wireless networks?

Maria Papadopouli: It works well in wired networks, but over-provisioning in the wireless arena can be counterproductive, especially due to interference. You need to carefully design your network: place and configure the APs, their transmission power, channel… Studies have shown how the orientation and position of the antenna impact performance…

Sinker: This would be nice to work on (look at the Kumar? paper): over-provisioning is really not helping, but that is a theoretical paper; has anyone ever done this? Can we over-provision a wireless network? What would the problems be? Wired networks are more deterministic, and you can do more things than in wireless, like traffic shaping; some of these cannot be done in wireless…

Christina Fragouli: Also, are there new ways to use the wireless channel?

Stefan Kaprinski: That was one of the things I was interested in. UWB and network coding are some breakthroughs; there are going to be techniques that enable us to do more with the channel.

Leandros Tasioulas: Over-provisioning is hard to do because of the cost.

Stefan Kaprinski: There is an alternative way to over-provision which is not placing more APs: bring in more spectrum. We are currently working with such a small piece of spectrum.

Leandros Tasioulas: The wireless Hz, or wireless bps, is more costly than the wired one… whether this is the license for the spectrum…

Constantin Dovrolis: One interesting thing I find about wireless research, and especially wireless measurements: for some time there was nothing in the measurement domain because of the difficulty of doing this kind of experiment, and now more and more people do testbedding. But most of the papers come from WLANs, where it is significantly easier to do these experiments. Long-distance networks make this kind of analysis more difficult. There is not much measurement work on multihop networks.

Maria Papadopouli: In general, there are four types of multihop networks:

  1. Wireless wide area networks (like Ricochet/Metricom). Mary Baker and her group at Stanford performed the first measurement study on this type of network (to the best of my knowledge, very few such studies are available).
  2. Mesh wireless networks: since 2004, there have been several studies on their performance (e.g., on the Roofnet network).
  3. Cellular networks (not much data is publicly available).
  4. Sensor networks.

Leandros Tasioulas: You need to have a network to measure. There are not many such real networks…

Maria Papadopouli: Actually, there have been some recent studies focusing on metropolitan-area wireless networks: one in Cambridge, UK, with users carrying iMotes; one in Toronto, with Bluetooth-enabled PDA users walking in the subway and malls to test whether a worm outbreak is viable in practice; and one at MIT, with one hundred smartphones that use both short-range (such as Bluetooth) and long-range (GSM) networks, logging users' location, communication, and device-usage behaviour… However, it has not been easy to perform large-scale non-controlled user studies…

Stefan Kaprinski: Until we have the technology working, you can't set up a testbed and then carry out measurements.

?? Missing comment

?? -> I'd like to add a category to your discussion: ZigBee, 802.15.4, etc., but I haven't seen any measurements in that area.

Constantin Dovrolis: They are more proof of concept than measurements.

I am sure that the moment we start doing measurements in MANETs we will see amazing things, because there is so little work in that area. There are very many factors.

Stefan Kaprinski: Systems grow complex, and research should be interdisciplinary…