Week 3: Internet of Things
“Embedded Systems … Interfacing with the Physical World … Energy Harvesting … Ultra Low Power Computing in VLSI … Hardware for Machine Learning … Cloud Robotics … IoT Economics”
Summaries
- Week 3: Internet of Things > 3a Embedded Systems 1 > 3a Video
- Week 3: Internet of Things > 3b Embedded Systems 2 > 3b Video
- Week 3: Internet of Things > 3c Interfacing with the Physical World 1 > 3c Video
- Week 3: Internet of Things > 3d Interfacing with the Physical World 2 > 3d Video
- Week 3: Internet of Things > 3e Energy Harvesting 1 > 3e Video
- Week 3: Internet of Things > 3f Energy Harvesting 2 > 3f Video
- Week 3: Internet of Things > 3g Ultra Low Power Computing in VLSI > 3g Video
- Week 3: Internet of Things > 3h Hardware for Machine Learning > 3h Video
- Week 3: Internet of Things > 3i Cloud Robotics > 3i Video
- Week 3: Internet of Things > 3j IoT Economics 1 > 3j Video
- Week 3: Internet of Things > 3k IoT Economics 2 > 3k Video
- Week 3: Internet of Things > 3l IoT Economics 3 > 3l Video
Week 3: Internet of Things > 3a Embedded Systems 1 > 3a Video
- Embedded systems are concerned with interfacing to the physical world through sensors and actuators.
- Microcontrollers have many on-board I/O subsystems such as General Purpose Input Output, or GPIO, analog to digital conversion, digital to analog conversion, interrupts, clocks, timers, and buses.
- To access the microcontroller’s many subfunctions, we can connect its pins to external components such as sensors.
- In this example here, pins 23 and 24 can be used for different functions such as the analog-digital converter or external interrupts.
- One of the most basic functionalities of IO pins is general purpose input output.
- In the input mode, it is reading the external signal.
- In the output mode, it is outputting a signal that’s either a high or a low.
- We can set the mode of pin 12 to input, then read from it with something like value = pin_read(12).
- To create a value of 1, we can simply tie this pin to the supply voltage, VCC. To create a value of 0, we can tie this pin to ground.
- If we tie pin 12 to VCC/2, or leave it untied, the value of pin 12 is indeterminate and it could read as either value.
- For this reason, often we have to pull this pin to either the ground or VCC using pull up or pull down resistors.
- To verify the output of pin 12, we can simply apply a voltmeter between ground and pin 12.
- If we want to read from GPIO 2, we can simply say input_value = GPIO.input(2).
- Pulse-width modulation, or PWM, allows us to create signals instead of single values.
- The way to set PWM is to set a pin to high for some duration then set it to low for some other duration.
- Then set pin 12 to be low and wait for another 50 units of time.
- DAC allows us to convert digital signals to analog signals.
- If we wanted to turn on a light at different intensity levels, we might want to create an analog signal like the blue line here, going from 0 to 3 volts.
- The way we can achieve this using digital signals is to connect them to a DAC.
- One thing to keep in mind is that it’s not possible to create perfect analog signals using DACs.
- In this example here, instead of creating a smooth signal as in the blue line, we often have to create a quantized, stepped signal like the one shown here.
- So in this example here, we start with the desired signal on the top.
- We are trying to create a signal that starts with 0.9 of VCC followed by 20% of VCC. The way we can create this signal using PWM is to create a PWM pulse with a duty cycle of 90% followed by a duty cycle of 20%. After passing through the low pass filter, the signal will look like this.
- So as you can see here, this is not a perfect resemblance of the desired signal.
- It is to some degree similar to the final signal.
- External DAC modules are often much more accurate at producing the desired signal.
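The PWM-as-DAC arithmetic above can be sketched in Python (a toy model written for illustration; a real low-pass filtered PWM output also carries ripple, which this ignores):

```python
def pwm_average_voltage(duty_cycle, vcc):
    """Average (low-pass filtered) output level of a PWM signal."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return duty_cycle * vcc

# 90% duty cycle followed by 20% duty cycle, with VCC = 3.0 V
levels = [pwm_average_voltage(d, 3.0) for d in (0.9, 0.2)]
```

After the low-pass filter, the two segments settle near 2.7 V and 0.6 V, which is why the result only approximately resembles the desired signal.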
- ADC is very important to us as well because it allows us to convert analog signal to digital signal.
- Many of the sensors out there output continuous analog signals.
- We have to somehow convert that to digital signal for us to work with.
- The way that an ADC works is by sampling the continuous analog signal at discrete intervals.
- What we are doing here is to quantize the range between ground and VCC into many different levels.
- If we have an ADC that is 2 bits, we are able to quantize the range between ground and VCC into 2 to the power of 2 steps, or four steps.
- Two of the pins connect to VCC and ground, respectively.
- The center pin connects to one of our analog inputs, in this case pin 2.
- To read from pin 2, we can first write C code along the lines of pinMode(2, INPUT) to configure it as an input.
- The range of value for this 10-bit ADC would be from 0 to 1,023.
- For a chip with VCC equal to 3 volts, what does a reading of 512 mean? To answer this, note that the full range from 0 to VCC corresponds to readings from 0 to 1,023.
- So a reading of 512 corresponds to 512/1,023 of the full range, which is about 1.5 volts.
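The conversion from raw reading to volts can be written out directly (the function name and defaults are mine, matching the 10-bit, 3 V example):

```python
def adc_to_voltage(reading, vcc=3.0, bits=10):
    """Convert a raw ADC reading to volts, assuming the range 0..VCC
    maps linearly onto the codes 0..(2**bits - 1)."""
    max_code = 2 ** bits - 1   # 1023 for a 10-bit ADC
    return reading / max_code * vcc

v = adc_to_voltage(512)   # roughly 1.5 V
```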
- A thermistor is a resistor that changes its value based on temperature.
- Assuming that we have a 10-bit ADC as before and R1 equals 20K, if we are reading a value of 340, what would be the temperature? For this question, we can first compute the proportion of 340 over the full range, which is 340 divided by 1,023.
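One plausible way to work this example through in code, assuming the thermistor sits on the ground side of the divider so the pin sees VCC times R_t/(R1 + R_t); the final resistance-to-temperature step would need the thermistor’s datasheet curve and is omitted here:

```python
def thermistor_resistance(adc_reading, r1=20_000.0, bits=10):
    """Back out the thermistor resistance from a voltage divider:
    VCC -- R1 --+-- R_t -- GND, with the ADC pin at the junction,
    so V_pin = VCC * R_t / (R1 + R_t)."""
    p = adc_reading / (2 ** bits - 1)   # fraction of VCC seen at the pin
    return r1 * p / (1 - p)

r_t = thermistor_resistance(340)   # about 9.96 kOhm for a reading of 340
```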
Week 3: Internet of Things > 3b Embedded Systems 2 > 3b Video
- Next, we are going to talk about a very important topic in embedded systems called interrupt handlers.
- Interrupt handlers are important because the physical world is essentially event-driven, and we need some way to respond to these events by executing pieces of code.
- These pieces of code are called interrupt handlers.
- When this interrupt occurs, the hardware can jump to a piece of code in this timeline shown here.
- When the event occurs, the program counter of the microcontroller will jump to the location of the interrupt handler.
- It will execute the interrupt handler and return back to the user program after completion.
- Next, let’s set up our interrupt pin, pin 7, to be an input.
- The next code here sets the interrupt to be a rising type.
- This means that the interrupt would happen if it sees a low to high transition.
- Let’s set the interrupt vector for GPIO 7 to point to the actual function that’s executed when the interrupt is happening.
- What this code does is that if the interrupt happens, the microcontroller will jump to the location of the function foo.
- After we set the interrupt vector for 7, we can enable the interrupt vector by using the pseudocode interrupt enable GPIO 7.
- The way to trigger this interrupt is to tie pin 7 to VCC. In this case, we are connecting pin 7 to this button, which will be tied to VCC when the user presses the button.
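The vector-table idea can be mimicked with a small Python sketch (all names here are mine; a real microcontroller performs this dispatch in hardware):

```python
# Toy model of an interrupt vector table: a mapping from pin number to handler.
interrupt_vectors = {}
rung = []

def interrupt_set_vector(pin, handler):
    """Point the vector for `pin` at the handler function."""
    interrupt_vectors[pin] = handler

def trigger(pin, edge):
    """Dispatch only on a rising edge, matching the 'rising type' setup."""
    if edge == "rising" and pin in interrupt_vectors:
        interrupt_vectors[pin]()

def foo():
    rung.append("bell")   # stand-in for driving pin 8 high to ring the bell

interrupt_set_vector(7, foo)   # set the interrupt vector for pin 7 to foo
trigger(7, "rising")           # the button press: a low-to-high transition
```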
- When the interrupt is triggered, pin 8 will rise from 0 to VCC, and therefore trigger the bell to be rung.
- We can do the same thing using polled I/O, which means that instead of using interrupts, we continuously monitor pin 7.
- In the interrupt case, this is very similar to the example we have shown before.
- Now let’s suppose that we are resetting the value X to be 0 inside the interrupt handler.
- If an interrupt happens in the middle, the interrupt will set X to be 0.
- So in this case, the correct value for X after the interrupt happens should be 0.
- As we can see from this timeline, if the interrupt happens while the microcontroller is executing this sequence of instructions, then even after returning from the interrupt handler, X will still equal R2, which is the previous value incremented by 1, instead of 0.
- To remedy this problem, what we need to do is to disable interrupts when we have a statement that is trying to modify the same shared variable.
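The lost-update race can be simulated deterministically (a toy model, not real MCU code; the `protect` flag stands in for disabling interrupts around the read-modify-write):

```python
class MCU:
    """Toy model of a read-modify-write race between main code and an ISR."""
    def __init__(self):
        self.x = 5                        # the shared variable X
        self.interrupts_enabled = True

    def isr(self):
        self.x = 0                        # the interrupt handler resets X to 0

    def increment_x(self, interrupt_fires_midway, protect):
        pending = False
        if protect:
            self.interrupts_enabled = False   # enter the critical section
        r2 = self.x + 1                       # load X into a register, add 1
        if interrupt_fires_midway:
            if self.interrupts_enabled:
                self.isr()                    # ISR runs between load and store
            else:
                pending = True                # ISR deferred until re-enabled
        self.x = r2                           # the store clobbers any ISR write
        if protect:
            self.interrupts_enabled = True    # leave the critical section
            if pending:
                self.isr()                    # deferred ISR now runs: X -> 0

m_race = MCU(); m_race.increment_x(True, protect=False)   # lost update: X == 6
m_safe = MCU(); m_safe.increment_x(True, protect=True)    # correct: X == 0
```

Without protection the main program's store overwrites the handler's reset, so X ends at 6 instead of the correct 0; disabling interrupts defers the handler until the read-modify-write completes.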
- What we need to do is to first divide this clock into a slower clock, and this is called a clock divider or prescaler.
- If that value equals the value we set in a compare register, it can trigger an interrupt.
- So we can use the function setPrescaler(clock1, X). What this does is slow down the clock by a factor of 2 to the X. There are also many different interrupts associated with clocks.
- We have counter register overflow, which means if the clock register overflows beyond its number of bits, it will trigger an interrupt.
- We also have a countdown register underflow, and we have a compare register interrupt.
- We are working with clock 1, and clock 1 has an eight megahertz base signal.
- Suppose what I would like to do is to execute an interrupt handler once every one millisecond.
- So if my goal is to trigger interrupt once every millisecond, I need to set the compare register to 1,000.
- I can set the interrupt handler to foo, initialize clock 1 to be 0, and enable interrupt.
- So with these lines of code, we can set up the system to tick or trigger an interrupt every one millisecond.
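The timer arithmetic can be checked in code; the prescaler exponent X = 3 (divide by 8, giving a 1 MHz tick) is my assumption, chosen so that a compare value of 1,000 yields the stated 1 ms period:

```python
BASE_HZ = 8_000_000           # clock 1 base frequency from the lecture

def tick_period_s(prescaler_exp, compare):
    """Period of compare-match interrupts: `compare` counts of the
    divided clock, which runs at BASE_HZ / 2**prescaler_exp."""
    divided_hz = BASE_HZ / (2 ** prescaler_exp)
    return compare / divided_hz

# Assumed X = 3: 8 MHz / 8 = 1 MHz, so a compare value of 1,000 gives 1 ms.
period = tick_period_s(3, 1_000)
```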
- In sleep mode 3, the microcontroller powers off everything besides RAM, the interrupt module, and the clock module.
- In sleep mode 1, it turns off everything besides the interrupt module, which means that the microcontroller can be woken up by interrupts.
- In sleep mode 0, it turns off everything besides the reset interrupt, which means that it can only be woken up by the reset signal.
- We also talked about interrupts, race conditions, and also clocks, timers, power states, and sleep.
Week 3: Internet of Things > 3c Interfacing with the Physical World 1 > 3c Video
- The goal here is to use the sensors connected with embedded systems to monitor the different types of physical phenomena, and once we have that data, we can process it locally or send it to the cloud.
- In the cloud, we can apply different types of visualizations or data analytics to that data.
- Next, I will talk about a few types of sensors and actuators followed by amplifications and filtering.
- Finally, I will talk about how to visualize data in the cloud.
- If we need to transfer larger amounts of data or if there are timing constraints involved with this data, we have to use something called buses.
- This is the case when the control signals are initiated asynchronously, but the actual data transfer occurs synchronously.
- In this case, the master is trying to transmit some data to the slave.
- To initiate this transaction, the master will first pull the signal line from low to high, signaling to the slave that it has data to transmit.
- Once the slave sees the signal change it will read the data from the data bus.
- The slave will then pull its signaling line from low to high, telling the master that it has received the data.
- Once the master sees this acknowledgment, it will change its line from high to low, telling the slave that the acknowledgment was seen.
- Finally, the slave will change its signaling line from high to low, completing the handshake and returning both lines to their idle state.
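The four-phase handshake can be sketched as a sequence of line transitions (a toy model; the signal names req and ack are mine):

```python
# Toy four-phase (fully interlocked) handshake between a master and a slave.
events = []

def handshake(data, bus):
    """Run one data transfer; returns what the slave read."""
    bus["data"] = data
    bus["req"] = 1; events.append("master: req high (data valid)")
    received = bus["data"]          # slave reads the data bus
    bus["ack"] = 1; events.append("slave: ack high (data taken)")
    bus["req"] = 0; events.append("master: req low (ack seen)")
    bus["ack"] = 0; events.append("slave: ack low (handshake complete)")
    return received

bus = {"req": 0, "ack": 0, "data": None}
value = handshake(0xA5, bus)
```

After the four transitions both control lines are back at their idle low level, ready for the next transfer.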
- SPI is similar to I2C, but it is full duplex, which means that transmitting and receiving data can happen at the same time.
- As you can see, the peripheral device, the HC1008, and the master device are connected primarily through the two lines here.
- The SDA, which is the data line, and SCL, which is the clock line for the I2C.
- The peripheral device stores the actual values using a set of registers, which are essentially memory elements that store some data.
- In this case, we have on the top, the data line here, and on the bottom, this clock line, SCL. In this example, we are trying to transfer 8 bits of data from the master to the slave.
- The way that I2C is specified is that first, the master can specify a start condition by pulling the data line from high to low while the clock line is high.
- While the clock line is low, the master can change the data on the data line.
- Data is read by the slave when the clock is high.
- So as we can see here, the clock line is low and the data is changed at this location.
- When the clock line is high, the data is kept constant so that the slave can read this data.
- After a number of these transitions, when the clock line is high at the end, the master either issues a repeated start (pulling the data line from high to low while the clock is high) so that the next 8 bits can be transferred, or a stop condition (releasing the data line from low to high while the clock is high).
- The way that we can communicate data with the sensor is by manipulating the register map.
- First, to read a temperature from this device, we essentially have to set the register address to 00 and then perform an I2C read. This allows us to read the data from the temperature register.
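The register-map access pattern can be mimicked in a few lines (a simulated sensor with a made-up temperature register at address 0x00, not a real driver):

```python
class SensorRegisterMap:
    """Toy I2C-style peripheral: an address pointer plus a read,
    mimicking 'set the address to 0x00, then perform an I2C read'."""
    def __init__(self):
        self.registers = {0x00: 25}   # hypothetical temperature register
        self.pointer = 0x00

    def i2c_write_pointer(self, address):
        self.pointer = address        # first transaction: set the register address

    def i2c_read(self):
        return self.registers[self.pointer]   # second transaction: read it back

sensor = SensorRegisterMap()
sensor.i2c_write_pointer(0x00)
temperature = sensor.i2c_read()
```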
- Different from I2C is that SPI uses two lines for data, so that data can be transmitted in both directions at the same time.
- Once we have learned how to connect and interface with these sensors and actuators, we can then begin to look at how to choose the right type of sensors and actuators to connect to the physical world.
- We can find the right sensors and actuators, build a system, and write the code.
Week 3: Internet of Things > 3d Interfacing with the Physical World 2 > 3d Video
- There are many types of sensors out there that allow us to sense virtually any type of phenomenon in the environment.
- The sensors are really these objects whose purpose is to detect events or changes in the environment, and provide a corresponding output.
- We can use sensors to sense different types of sound signals, and we call those microphones.
- We can use sensors that help us detect electric currents, help us detect magnetic fluctuations.
- There are also sensors that tell us about positions, angles, displacements, acceleration.
- Let’s take a look at some of the most commonly used sensors in the IoT space.
- At the heart of this are various type of sensors.
- The inertial measurement unit, or IMU, is a very frequently used sensor.
- It’s used to measure velocity, orientation, and gravitational forces.
- It combines three types of sensors in one package, and it’s often a MEMS-based sensor.
- The accelerometer is used to measure acceleration in the x, y, and z directions.
- The gyroscope is used to detect rotation: angular rates and, by integration, angles.
- Other than IMU sensors, there are several kinds of sensors that are very useful for IoT applications, including proximity sensors, motion sensors, and ranging sensors.
- PIR motion sensors use passive infrared to detect motion.
- Lidar is a kind of sensor that uses or projects lasers, and it’s used to map out a 3D environment.
- Ultrasonic Ranging is a sensor that allows us to detect distances using ultrasound.
- In addition to transmitting data at low power, BLE gives us another way for proximity detection.
- For the Internet of Things, there are also many very interesting sensors in the environmental space.
- We can use sensors that detect sound or noise.
- We can have sensors that detect temperature, humidity, radiation, or even air quality.
- Keep in mind that many of these sensors have noise.
- When working with sensors and actuators, we often find that these devices do not directly match the embedded system’s electrical interface.
- In a previous lecture, I talked about how to interface with a resistive sensor by using a voltage divider.
- This can also be used to reduce voltages from sensors that output a voltage that’s beyond the range of our embedded system.
- Sometimes the sensors output a very small signal, and in those cases, we have to connect that to amplifiers to increase the power.
- Once we are able to interface with the sensor and actuator, we often find that we cannot directly work with the data.
- Using a frequency filter as an example, suppose I am interested in the 2 Hertz component of a 10 kilohertz data.
- One thing I could do is sample that data at 10 hertz.
- Suppose that we want to filter out the DC drift from a sensor.
- Analog filters are often more efficient and consume less power.
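Sampling a 10 kHz stream down to 10 Hz is just decimation; here is a sketch (note that a real system should low-pass filter before decimating to avoid aliasing, which this omits):

```python
import math

FS_IN, FS_OUT = 10_000, 10          # 10 kHz input, 10 Hz after decimation

# One second of a 2 Hz sine sampled at 10 kHz (stand-in for sensor data)
signal = [math.sin(2 * math.pi * 2 * n / FS_IN) for n in range(FS_IN)]

step = FS_IN // FS_OUT              # keep every 1,000th sample
decimated = signal[::step]          # 10 samples per second, 2 Hz preserved
```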
- So there are many different types of ways to work with data.
- Once we have the data we want, we can visualize that data in several different ways.
- One way to do this is to download the data and import it into some software, such as Excel, and graph it. However, it’s much more useful, in many cases, to visualize this data in real time and in the cloud, so that it can be accessed everywhere.
- Next, let’s use a very concrete example to see how easy it is to visualize data.
- I’m connecting a very simple temperature sensor to a Raspberry Pi using the SPI bus.
- In the while loop, the first thing I do is read out the sensor data by calling readadc.
- This readadc call is essentially reading data over the SPI bus from the temperature sensor.
- Once I have the sensor data, I can send it to the data stream with a stream call.
- I’m reading data from this temperature sensor once every 100 milliseconds, or 10 times a second.
- We talked about some of the most commonly used sensors and actuators, some simple ways for amplification and filtering, and finally, some simple ways to visualize data in the Cloud.
Week 3: Internet of Things > 3e Energy Harvesting 1 > 3e Video
- We expect in the future there will be more and more devices that can simply harvest their own energy from the environment, from light or from motion.
- The time in which the shared medium causes a problem is when multiple devices are trying to simultaneously transmit.
- There’s no problem with having devices simultaneously listen.
- What these protocols do is they coordinate when these various devices can send such that their transmissions don’t interfere with one another.
- The benefit of these former environments, when they weren’t concerned about energy, was that they didn’t have to worry about devices turning off and sleeping.
- These days, in order to save energy, what devices typically do is go to sleep.
- So sensor networks are, with respect to the tags that we’re talking about, large devices.
- So while it’s sleeping, the device will be accumulating this energy at rate E. And then when it’s transmitting or listening, that energy budget will slowly be depleted because of the transmitting and listening.
- We’re going to assume that these devices are close enough so the propagation delay, which is a problem in practical systems when devices are physically far apart, is not a factor here.
- OK. So what we want to do is we want to take this collection of N devices and just ask, at what rate are they able to transfer data from one device to the other? So in English, we want to maximize the overall long term rate that data is transferred from one node to another.
- The constraint that these devices are going to have to satisfy is they can only listen and transmit a fraction of time that keeps them within the constraints of their energy budget.
- So we’re going to define some random variables x_i(t), l_i(t), and s_i(t), whose values change with time t. x_i(t) corresponds to device i: when x_i(t) equals 1, it means that device i is transmitting at time t, and the variable has the value 0 otherwise.
- Again, our CSMA assumption from before was that only one device can transmit at a time.
- Multiple devices are allowed to listen at the same time.
- s_i(t) will equal 1 when some device other than device i is listening at time t. We can describe that in terms of the l_j for the other set of devices.
- So we take the product over all devices j not equal to i of the terms (1 - l_j(t)), where l_j(t) equals 1 when device j is listening.
- Each term (1 - l_j(t)) is going to be 1 when that device is not listening, so the whole product is 1 exactly when no other device is listening.
- s_i(t) is then 1 minus that product: it equals 1 when some other device is listening.
- So s_i(t) means some device other than node i is listening.
- The maximization objective is to maximize the fraction of time in which some device is transmitting and it’s sending information, and somebody else is listening.
- What we’re looking at is, over all possible devices i, that at time t, device i is transmitting and some other device is listening at that time as well.
- Then we’re integrating over all the possible times where some device i is transmitting and some device other than i is listening.
- So it’s the fraction of time in which we have some device transmitting where some other device hears it.
- We’re saying that when a device i is transmitting at time t, it’s consuming energy at a rate x. And when a device i is listening at time t, it’s consuming energy at a rate l. So this is the rate inside these two parentheses.
- So that’s how we’re constraining these devices by their energy budget.
- So that’s the general optimization problem for maximizing the rate at which devices can transmit from one to the other when they’re constrained by some energy budget.
- So the way we usually deal with upper bounds is we say, suppose we have some oracle who can decide when these devices transmit and listen, basically schedule them.
- It’s going to do it in such a way that it’s going to maximize the objective function, satisfying the energy constraints of each device.
- So the first observation we’re going to make is we’re going to assume homogeneity, which means that all devices are homogeneous, that they all have the same behavior, just at different times.
- So there are n of these devices that basically are the same.
- The oracle has to coordinate these similar devices- turn one on to listen, turn one on to transmit, and at some point turn them off because they’re consuming too much energy and turn on a different set of devices.
- All we’re saying is we can look at the problem and say, you know, it would never be the case that a device would be listening unless another device is transmitting.
- Sort of the dual, no one’s going to transmit unless another device is listening.
- So why would I waste my energy transmitting if no other device was listening at that time? So in the upper bound, the oracle would schedule these devices in such a way that if one device is listening, another device is transmitting.
- We only really need two devices turned on at any time, one being a listener and one being a transmitter.
- So at any given time, if some devices are awake, observation 2 means that there will be a listener and a transmitter.
- Now we’re going to apply the homogeneity observation, or we’re going to assume the devices are all homogeneous.
- Every x_i equals some x, and every l_i equals some l.
- So my maximization problem goes from summing over n different x_i to just n times x.
- And I don’t have to write the constraint for each different i; it’s the same for every i: the fraction of time I’m transmitting times the transmit cost, plus the listening fraction times the listening cost, has to be less than or equal to e.
- And then finally, we had this efficiency observation, which was that whenever a device is transmitting, some other device is listening.
- So if I take all the fractions of time the devices are transmitting, every one of those has to be met up with some device listening.
- So I’m just going to set this constraint with equality: the transmit fraction times the transmit cost x, plus that same fraction times the listen cost l, equals e.
- That gives me a formula for the fraction: e over (x + l). My throughput, which was just n times that fraction, is n times e over (x + l).
- So for an oracle, given these devices with an energy budget of e, a cost x to transmit, and a cost l to listen, where l and x are bigger than e, the highest throughput you could possibly achieve from that system is n times e over (x plus l).
- So, quickly: the more devices you have, the larger the throughput is going to be, because you more frequently have devices on.
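The closing formula can be packaged as a small helper (variable names are mine; the lecture's transmit cost x and listen cost l become c_tx and c_listen to avoid clashing with the time fraction):

```python
def pairwise_throughput_bound(n, e, c_tx, c_listen):
    """Oracle upper bound on pair-to-pair throughput: n * e / (c_tx + c_listen).
    Each device transmits a fraction e / (c_tx + c_listen) of the time."""
    x = e / (c_tx + c_listen)   # common transmit (= listen) time fraction
    return n * x

# e.g. 10 devices, budget e = 1, transmit cost 4, listen cost 1
bound = pairwise_throughput_bound(10, 1.0, 4.0, 1.0)
```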
Week 3: Internet of Things > 3f Energy Harvesting 2 > 3f Video
- As the transmitters and the listeners are moving from tag to tag, we just want to maximize the rate at which information can be moved around.
- So now when a device transmits, we’re not just interested in having one other device receiving that transmission, we’re interested in having as many devices as possible receiving that transmission.
- OK. So in some sense, a device is transmitting, information is being broadcast.
- We want to maximize the rate at which information is being sent over several receivers instead of just from one device to another device.
- So if someone comes in and pulls out a box and that tag disappears, the other tags are going to realize, hey, our neighbor disappeared.
- Because of the limited energy budget, tags are going to have to sleep, and then wake up and transmit, and at other times, wake up and listen.
- You have to coordinate all this sleeping, and transmitting, and listening to maximize the rate at which they all maintain connectivity and know about one another.
- What is our maximization objective? We want to maximize the time during which some node is transmitting.
- When that node is transmitting, we want to count the number of devices that are actively listening.
- So we’re summing up over all devices that are in their listening mode.
- OK? And for that sliver of time, t, the rate at which we’re saying information was communicated was proportional to lj of t when there’s this device transmitting.
- We’re summing over all devices acting as the transmitter, counting how many other devices are listening to that device, looking at the amount of information transmitted over a fixed interval tau, and dividing that amount by the interval length, so that we get the fraction of time, or the fractional rate, at which information is being transmitted.
- So we’re looking at the long-term, average rate at which they can sustain devices broadcasting, and how many other devices are receiving that information.
- Same transmit cost, same listen cost, same energy bound.
- Again, we’re going to apply this observation, too, where because we’re dealing with an Oracle who can schedule things in a smart way, again, no device is ever going to listen unless a device is transmitting, because otherwise, that listen energy would be wasted.
- No device is ever going to transmit unless another device is listening, because then that transmission energy would be wasted.
- The efficiency observation is a little bit different, because we have multiple devices now.
- We had this observation three that only one device would be listening at a time.
- For this measure that we want to achieve this time, actually, you want as many devices listening as possible.
- As we said before, we know, for the upper bound, there is always a transmitter when some device is listening.
- So similar to what we did before with we removed listening, this time, we’re removing transmitting.
- So if you recall, for the other optimization objective, we wanted to maximize the sum of transmit fractions.
- So what does that mean? If I take the fraction of time that I’m going to listen, the fraction of time that I’m listening has to be less than the fraction of times over which somebody else is transmitting.
- Right? If somebody’s transmitting only a third of the time, perhaps sliced up among multiple transmitters, and I’m going to be listening, I can only listen for, at most, a third of the time, because that’s the only amount of time that’s available to me when there’s a transmitter.
- So node I’s listening time has to be less than or equal to the sum of the other nodes’ transmitting times.
- This sum over j not equal to i has n minus 1 components, since there are n devices and we exclude device i.
- With homogeneity, it’s just (n - 1) times x. So our simplified maximization objective is: maximize n times (n - 1) times l, subject to both the energy constraint and this property of listening.
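Under the same observations, one plausible closed form follows by taking both constraints with equality, l = (n - 1) times x and the energy budget; this derivation is mine rather than a formula stated in the lecture:

```python
def broadcast_throughput_bound(n, e, c_tx, c_listen):
    """Maximize n*(n-1)*l subject to x*c_tx + l*c_listen <= e and
    l <= (n-1)*x. Taking both with equality gives
    l = e / (c_tx/(n-1) + c_listen)."""
    l = e / (c_tx / (n - 1) + c_listen)
    return n * (n - 1) * l
```

As a sanity check, with n = 2 this reduces to the pairwise bound 2e/(c_tx + c_listen), since with two devices broadcast and pairwise transfer coincide.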
- This was maximizing, after the observations that we made, maximizing the rate at which devices communicate pair to pair.
- That was maximizing the rate at which devices broadcast to all other devices.
- We did some work to achieve some additional bounds on the problem based on observations of when devices would listen and transmit.
- So what did this analysis that we did teach those of us who are interested to work in energy harvesting? Well, we could now say we have some sense of if you give us a bunch of homogeneous devices with these energy constraints that can be specified in terms of how much it costs to transmit, how much it costs to listen, and how much of an energy budget the device has, we can now bound the upper rate on which devices can transmit or listen.
- One of the parameters you can play with is how much effort a device puts into transmitting, how much power it uses to push its transmission, so it can be received by more devices or more easily by devices.
- OK. And then finally, we made an assumption about homogeneous devices.
- They might all have different listen, transmit, and energy costs.
- Observation one where we said all devices are homogeneous, take that observation out, and you can ask yourself, how do we solve that more complicated problem? OK. And we have these formulas.
- You can play around with these formulas and see, well, what kind of throughput can I get if I have this number of devices or if I change my listening costs? As I adjust my listening costing linearly, how does that affect the throughput that I can get for these two specific throughput metrics? OK. Thank you very much.
Week 3: Internet of Things > 3g Ultra Low Power Computing in VLSI > 3g Video
- Those physical devices, in order to be connected to the internet, need ultra-low power computing hardware that is attached to them.
- The key in designing this ultra-low power computing hardware is to make sure that the hardware is very energy efficient for performing all of those tasks.
- Why do we need such ultra-low power dissipation? The foremost reason is battery life.
- Ultra-low power computing can extend the battery life from days to months to years.
- Very low power dissipation allows us to use a much smaller battery, which is critical for scaling down system size and cost.
- Generally, microwatt to nanowatt average power dissipation is desirable, although the specific target power dissipation depends on the target battery life, as well as the system size.
- In addition to the battery life, power supply design and heat removal are two other reasons that ultra-low power consumption is necessary for the hardware of the internet of things.
- So the goal for the ultra-low power computing hardware is to minimize power dissipation while meeting a target throughput.
- So let’s discuss some of the metrics that characterize power dissipation as well as throughput.
- One is the switch power dissipation and the other is the leakage power dissipation.
- Switch power dissipation is the power that the hardware consumes when it performs useful computation.
- As a power, it can be represented as the common power metrics, such as milliwatt and microwatt.
- You can also use normalized power, which is microwatts per megahertz, for example, where the megahertz is representing the clock frequency of the computing hardware.
- On the other hand, leakage power dissipation is the power that hardware consumes even if it’s not performing any useful computation.
- As a power metric, we can use the power metrics such as microwatt, nanowatt, and picowatt.
- Typically, the ultra-low power computing hardware clock frequency is in the range of tens of megahertz to hundreds of megahertz.
- So based on those metrics, we surveyed some of the recent ultra-low power computing hardware, especially microprocessors, from both research papers, as well as commercial off-the-shelf products.
- In this slide, we can explain the mechanism of the switch power dissipation.
- As you can see, during this process, the output capacitor here is charged from the power supply here through the PMOS transistor.
- So we can calculate the energy consumption, or energy supplied from the power supply, as well as the energy dissipated in the PMOS and the energy stored in the capacitor.
- The energy supplied from the power supply, E_supply, can be calculated as the integration of the power over time.
- The power is the multiplication of the supply voltage and the current.
- So power dissipation is actually the energy dissipated in the PMOS multiplied by the rate of input change, meaning how often the input changes per unit time.
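The relationship described above can be sketched numerically. The standard result is that average switching power is P = alpha * C * Vdd^2 * f, where alpha is the activity factor (the rate of input change); the component values below are assumed for illustration, not taken from the lecture:

```python
def switching_power(alpha: float, c_load: float, vdd: float, f_clk: float) -> float:
    """Average switching (dynamic) power: P = alpha * C * Vdd^2 * f.

    alpha is the activity factor -- how often the node switches per clock
    cycle. Each full charge/discharge cycle draws C * Vdd^2 from the
    supply: half is dissipated in the PMOS while charging, half is stored
    on the capacitor (and later dissipated while discharging)."""
    return alpha * c_load * vdd**2 * f_clk

# Assumed illustrative values: 10 fF load, 0.9 V supply, 50 MHz, 10% activity
p = switching_power(alpha=0.1, c_load=10e-15, vdd=0.9, f_clk=50e6)
print(p)  # about 4.05e-08 W, i.e. roughly 40 nW
```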
- Now let's discuss the leakage power dissipation.
- Leakage power dissipation is actually the result of the three non-idealities of the transistor.
- So let's briefly discuss several techniques to reduce the power consumption.
- One of the most popular ways to reduce the power is to design a circuit such that it can operate at very low supply voltage.
- If you can do that, then your power consumption can be dramatically reduced, because switch power consumption is a quadratic function of the supply voltage.
- The leakage power is also a strong function of the supply voltage.
- As you can see, if you scale your supply voltage from 0.9 volts to 0.55 volts, you can reduce the energy consumption substantially.
- Another popular technique to reduce the power dissipation is to use pipelining technique together with the low voltage circuit.
- This can maintain the throughput but can reduce the power dissipation significantly.
- The small supply voltage can reduce the switch power consumption quadratically.
- If you pipeline circuit from one stage to two stages, you expect to have about four times less power dissipation without losing any throughput.
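The pipelining argument above follows from the quadratic voltage dependence. A sketch, under the simplifying assumption that a two-stage pipeline halves the logic depth per stage and so lets each stage meet timing at half the original supply voltage:

```python
def switching_power_ratio(v_new: float, v_old: float) -> float:
    """With load capacitance and throughput held fixed, switching power
    scales as the square of the supply voltage."""
    return (v_new / v_old) ** 2

# Assumption for illustration: pipelining one stage into two lets each
# stage meet timing at half the supply voltage, at the same throughput.
print(switching_power_ratio(0.45, 0.9))  # 0.25 -> about 4x less power
```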
- The last technique that I want to briefly introduce is the technique to reduce the leakage power, which is called power gating.
- Ultra-low power computing hardware is an essential component in our internet of things technology.
- The goal in designing such hardware is to minimize the power dissipation at a target throughput.
- We saw that the total power dissipation is the sum of the switch power and leakage power dissipation.
- We also discussed the fundamentals of both types of power dissipation.
- We introduced several design techniques for reducing power dissipation, while many more are being explored currently.
Week 3: Internet of Things > 3h Hardware for Machine Learning > 3h Video
- In this segment, we'll study hardware which can learn and perform cognitive computing tasks.
- So what is cognitive computing? According to the definition, cognitive computing is something that is related to self-learning, data mining, pattern recognition, and natural language processing for mimicking the way that the human brain works.
- Basically, what it does is it learns the temperature setting pattern of the user.
- Based on the learned patterns, it sets the temperature across days, months, and years.
- The first phase is the learning, and the second phase is the post-learning processing.
- So learning: if hardware or a system can learn without human supervision, it is called unsupervised learning.
- Otherwise, if the hardware or system needs supervision from humans, it is called supervised learning.
- We can also consider the learning as online if the hardware that is learning is actually performing the post-learning processing as well.
- On the other hand, it is considered as offline learning if the hardware that is trying to learn is different from the hardware that is for post-learning processing.
- The third type of post-learning processing is regression, which tries to find patterns in past data and predict the future based on the patterns it found.
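As a toy illustration of regression, here is a least-squares line fit to assumed past temperature readings, extrapolated one step into the future. The data and setting are made up for illustration:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

days = [0, 1, 2, 3]
temps = [20.0, 20.5, 21.0, 21.5]   # assumed past temperature settings
slope, intercept = fit_line(days, temps)
print(slope * 4 + intercept)  # predicted setting for day 4: 22.0
```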
- So in order to have the system to learn and also perform post-learning processing, researchers have come up with many different algorithms, which include support vector machines, logistic regression, K-means, Q-learnings, and so on and so forth.
- Among those many algorithms, neural networks are one of the most important, particularly for hardware implementation.
- Finally, the neural network algorithm has a certain degree of tolerance against manufacturing faults as well as noise, which is also important for hardware implementation.
- So for the last two or three decades, researchers have proposed or prototyped several important pieces of neural network hardware.
- The first, shown here, is the ETANN, or Electrically Trainable Analog Neural Network.
- It uses floating gates for storing synaptic weights, and also analog adders and Gilbert multipliers for neuron computation.
- The system consists of 64 processing nodes, or PNs, with 8-bit input and output buses shared by the 64 processing nodes.
- If you look at each processing node, it contains adders, multipliers, and a 32-by-16 bit register file, and also, a local 4 kilobyte Random Access Memory, or RAM.
- Large-scale neural networks require a significant number of synaptic weights to handle real-world problems.
- So you can iterate over different algorithms, different architectures, and different learning rules.
- The researchers pointed out that it requires about 50,000 chips to emulate a network of 1 billion simple neurons.
- More recently, there has been large interest in implementing neural networks in field-programmable gate arrays, or FPGAs.
- The flexibility of the FPGA is very useful for iterating over different algorithms and different learning rules.
- I want to introduce a somewhat more recent development, which is neural networks based on spiking neurons.
- Since firing can be represented with a 1-bit signal (firing or not firing), this is very different from a conventional artificial neural network design, where neurons produce a multi-bit output.
- This architecture generally requires less hardware, which makes it attractive for hardware implementations of neural networks.
- In this learning rule, the timing of the firings of different neurons determines the synaptic weights between the neurons.
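A common formalization of this timing-based rule is spike-timing-dependent plasticity (STDP). The exponential form and constants below are one standard choice, assumed here for illustration rather than taken from the lecture:

```python
import math

def stdp_delta_w(t_post: float, t_pre: float,
                 a_plus: float = 0.1, a_minus: float = 0.12,
                 tau: float = 20.0) -> float:
    """Weight change under spike-timing-dependent plasticity (STDP).

    If the presynaptic neuron fires just before the postsynaptic one,
    the synapse is strengthened; if it fires just after, it is weakened.
    The magnitude decays exponentially with the timing difference."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)     # potentiation
    if dt < 0:
        return -a_minus * math.exp(dt / tau)    # depression
    return 0.0

print(stdp_delta_w(t_post=15.0, t_pre=10.0) > 0)  # True: pre before post
print(stdp_delta_w(t_post=10.0, t_pre=15.0) < 0)  # True: post before pre
```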
- Finally, I want to introduce another version of the spiking neural network, which tries to reduce the power dissipation.
- Cognitive computing may enable hardware to learn from data and to solve some problems without human help.
- It consists of the learning and post-learning processing.
- Among various algorithms, neural networks are the most attractive for hardware implementation.
Week 3: Internet of Things > 3i Cloud Robotics > 3i Video
- Primarily in the last 50 years, robots that have had the biggest impact have been robotic manipulators.
- Think of them as robot arms, collections of links connected by joints, primarily in factories, in assembly tasks, manufacturing.
- The main goal for a robot arm is for a person to program it to execute a trajectory, and then have the robot repeat that trajectory again and again as needed.
- When a company builds a new robot arm, the way it differentiates itself is by making the arm stronger relative to its mass, faster, and more precise.
- Fundamentally the goal remains to have those robots programmed to execute a specific trajectory, and then do it again and again.
- These days we are suddenly seeing a lot of interest in a different type of robot, namely mobile robots.
- The other new frontier for robotics, the one that is perhaps of more interest for our topic today is to move robots out of factories, out of fully structured environments, and put them in unstructured environments.
- Robots interacting with specific objects would need to be able to handle a wide range of possible objects.
- So why is this so difficult? Why don't we have robots yet in these kinds of unstructured environments? Let me give you a couple of examples of very hard problems that are preventing robots today from successfully operating in all of these environments.
- The robot needs to be able to identify specific objects using its on-board sensors, and then plan a course of action.
- The cameras produce a lot of information, out of which the robot must discern specific objects, understand the setting and the illumination, and in general deal with the environment as it is seen by the cameras.
- A robot operating in a home would need to understand where it is in the home.
- The robot must be able to tell what time it is, whether it's the morning or the evening.
- So any plan that the robot makes needs to be executed in a 30 dimensional space.
- That’s a very large space that a robot needs to plan in.
- Robot sensors are imperfect, especially three dimensional sensors.
- So what is this object that the robot is looking at? What possible shape could it have? There are many shapes that do fit the data, the incomplete data that the robot is seeing.
- The robot needs to plan in such a way that whatever it actually chooses to execute will work equally well in any of these cases.
- What does cloud robotics mean? It is a robot connected to a remote framework, which offers a number of resources.
- Now we can leverage computation, we can leverage storage that’s available to the robot in the cloud.
- This requires storage and computation at a level that is not available on board today's robots.
- We’ve all encountered the opposite scenario where a human doesn’t know how to solve a problem and calls for help, only to encounter a robot at the other end of the line.
- The robot has failed to come up with a plan for the situation it’s confronted with.
- Let’s assume that the robot is looking for one specific object in the scene, and it’s not able to identify it.
- The key idea here is that when the robot is connected to a human operator through the cloud, it’s not always the case that the human has to take over every minute detail of the robot’s behavior.
- The human can simply indicate: this is the object you are looking for, or this is where you should position the gripper in the scene, and the robot can still complete the rest of the task autonomously.
- If you have a mobile robot driving around an environment, obviously in order to access all of these resources it has to be connected somehow, and that connection will have various characteristics.
- For example, what characterizes the connection between the robot and the cloud?
- If the robot is being tele-operated by a person who needs to see the world through the robot's eyes, move, and then see what happens, then you would need a low-latency connection.
- If the robot just needs to upload a little bit of information, wait for a cloud computing process to end, and then get back the result, very low latency isn't a must.
- Obviously we’d like to have a high bandwidth connection between the robot and the cloud.
- If bandwidth is low, you still have the option of the robot preprocessing its data stream locally, and then sending to the cloud only the relevant bits that are needed for the processing that will take place in the cloud.
- Think about a robot navigating around an environment and having gaps in connectivity.
- What happens then? Well, if the robot is still able to operate autonomously, it can recognize where there are gaps in coverage, collect the information that’s needed, and then navigate back to somewhere where it does have connectivity, upload the information, get back the result that it needs, and then go back and finalize the task.
- If a cloud connection is needed for the robot to navigate, and the robot wanders into an area where it doesn't have connectivity, then it's obviously stuck there.
- So if there is enough autonomy on-board, the robot is able to handle gaps in connectivity by simply recognizing that, driving out, and operating in a place where it does have the connection that it needs.
- In particular, a cloud can offer the robot access to large scale computation, large scale storage, and even human operators.
- All of these can help the robots make this transition from well-specified repeating problems to new unstructured environments where they have to make sense of the world around them and react accordingly.
- It was introduced in 2010 by James Kuffner in the paper referenced here, which also contains a lot of information about the potential for the cloud to help robots make this transition into new domains and new applications.
- There is also a very recent survey of the state of the art in the field, and of the application domains that are closest to us, where we will see cloud-connected robots deployed sooner rather than later.
Week 3: Internet of Things > 3j IoT Economics 1 > 3j Video
- Now, it’s an article of faith that platforms are going to be important for the internet of things, just as it has been in several industries that have been influenced by the internet.
- So there is every indication that the internet of things is going to be influenced by platforms.
- Platform economics gives you an inside view of the dynamics of platforms, such as the role of pricing, strategy, and so forth.
- So the high point- where we want to end up today- is understanding platform economics.
- So as you can see from the top of the slide, platforms are the center.
- By the way, that is why platform economics is often referred to by economists as two-sided markets.
- The key statement is that one group's benefit from joining a platform depends on the number of agents from the other group who join the same platform.
- You have, as the platform, the broadband service provider and the core network ISPs.
- On one side you have end users, like you and me, whom we will map as buyers.
- A second example is where the platform is the TV channels and newspapers.
- More examples- the gold standard in platforms is really Apple.
- Apple’s iOS sits in the center of the platform.
- So Apple makes a very good living through this platform business.
- A third example is Uber, riders on one side and drivers with cars on the other side.
- The internet economy is rife with examples of platforms acting as middlemen or matchmakers between two sides with different needs and offerings, with the benefit to one side depending on the number on the other side.
- Now, the internet of things will no doubt offer tremendous opportunities for novel platforms to emerge.
- Now finally, platform economics or two-sided markets- as economists call it- offer a framework to understand the dynamics and dependencies in platforms.
- So let’s now look at several issues around why we think that platforms will be related to the internet of things.
- The first example here is Verizon’s ThingSpace IoT platform.
- It says that ThingSpace is a platform aimed at simplifying affordable connectivity- affordable, of course, is a key word here- in smart cities, health care, agriculture, energy, and the sharing economy, all the areas where you expect IoT to be very influential.
- According to Verizon, developers who want to create and connect IoT devices have needed to work with multiple companies.
- The web-based ThingSpace platform aims to be an easy way- a gateway- to IoT with APIs, partner development kits, tests and deployment services, et cetera.
- Of course, a lot of these things are based very much on how Apple managed to get platforms off the ground in its business.
- Connectivity has many shapes and forms in the IoT space.
- It aims to provide the IoT network core, but specifically matched to the scale and bursty traffic of networks of sensors and connected devices.
- For IoT, you know that you have a large number of small devices and the data is very bursty.
- To compete on price with Wi-Fi and ZigBee (a big challenge for Verizon with a cellular network), Verizon has announced a Sequans chipset.
- Now Verizon has several partner companies with which it is discussing the role of IoT platforms across industries.
- It has partnered with Renesas, Intel, and others to create Verizon-branded IoT platforms and cloud-connected devices.
- Its revenue from IoT and telematics solutions is on the order of $500 million annually.
- We look at an example of Verizon and Intel getting together for an application that can very reasonably be called "agricultural IoT." In this particular example, there are 1,000 acres at the Hahn Family Wines in Monterey, California.
- Whereas, with sensors and IoT, the control is closed-loop.
- That information is carried over Verizon’s network and pushed out to the cloud and all used for management of car-sharing.
- In another quantum leap, Nest is evolving into a data-gathering platform in homes.
- My final example here is an evolving business in the internet of things, again, very much in the platform model.
- As platform here, you have an interconnection network.
- We’re simply going to assume that platforms and their underlying economics are going to be important for the future IoT.
- Admittedly, we do not know the precise forms that these platforms will take in the future, who the buyers and sellers will be, or the specific technology that will be employed.
- But that doesn't deter us from trying to understand platforms and platform economics.
- So we’ll be delving into platform economics, the models, and analytic approaches that are used.
Week 3: Internet of Things > 3k IoT Economics 2 > 3k Video
- You should read it and think about it in your spare time.
- Now consider a platform owner who is adjusting prices to buyers and sellers.
- If it raises prices to buyers, then obviously fewer buyers will join.
- If nothing else changed, this effect would depend on the buyers' price elasticity of demand.
- Since sellers value the platform less if there are fewer buyers, fewer buyers will join at the current price for sellers.
- With fewer sellers, buyers will also value the platform less, leading to a further drop in the buyers’ demand.
- The effect of an increase in price on one side is a decrease in demand on that side, because of the direct effect of price elasticity, and a decrease on both sides from the indirect effects.
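The direct and indirect effects just described can be illustrated with a toy linear model; all functional forms and numbers below are assumptions for illustration, not from the lecture. Each side's size falls with its own price and rises with the other side's size, and iterating to a fixed point captures the feedback loop:

```python
def equilibrium_sizes(p_b, p_s, alpha=0.3, beta=0.3, iters=200):
    """Toy two-sided market: each side's size falls with its own price
    and rises with the other side's size. Iterating to a fixed point
    captures the direct effect plus the cross-side feedback."""
    n_b = n_s = 1.0
    for _ in range(iters):
        n_b = max(0.0, 10.0 - p_b + alpha * n_s)
        n_s = max(0.0, 10.0 - p_s + beta * n_b)
    return n_b, n_s

base_b, base_s = equilibrium_sizes(p_b=2.0, p_s=2.0)
high_b, high_s = equilibrium_sizes(p_b=3.0, p_s=2.0)  # raise buyer price only
print(high_b < base_b and high_s < base_s)  # True: both sides shrink
```

Raising only the buyers' price shrinks the buyer side directly, and the seller side indirectly, just as the bullets above describe.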
- So let’s look at the first concept, price elasticity of demand.
- So price elasticity of demand is very similar except instead of looking in absolute quantities, it looks at percentage changes.
- So we'll denote by epsilon subscript j the partial derivative of yj with respect to pj, where yj is the demand for good j and pj is its unit price.
- That derivative is then multiplied by the factor of price divided by demand, pj/yj.
- The motivation for this definition comes in the next line where infinitesimal terms is the ratio of the percentage change in demand divided by the percentage change in price.
- Another way to look at it: what is the percentage change in demand for a specified infinitesimal change in price? For downward-sloping inverse demand functions, which is all we'll be interested in (meaning that increased prices lead to reduced demand), the price elasticity of demand epsilon j is negative.
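The definition can be computed numerically. A sketch using an assumed linear demand curve (the curve and the evaluation point are illustrative):

```python
def price_elasticity(demand, p, dp=1e-6):
    """Point price elasticity of demand: (dy/dp) * (p / y),
    with the derivative estimated by a central difference."""
    y = demand(p)
    dydp = (demand(p + dp) - demand(p - dp)) / (2 * dp)
    return dydp * p / y

# Assumed linear demand y = 100 - 2p: at p = 10, y = 80,
# so epsilon = (-2) * 10 / 80 = -0.25 (negative, as expected).
print(price_elasticity(lambda p: 100 - 2 * p, 10.0))  # about -0.25
```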
- The horizontal axis is y. That’s the demand.
- The first observation is this price elasticity of demand is a local measure.
- In other words, along a demand curve for two different points on the same curve, you should expect different price elasticities.
- The other thing is that when the slope of that price demand curve is very steep, as in the figure on the left, then the absolute value of the elasticity is small.
- The reason for calling it very inelastic is that for a large change in price, the corresponding change in demand is relatively small.
- It’s very elastic because a small change in price amounts to a very large change in demand.
- Now revenue is closely related to demand, but also different because revenue is price times demand.
- What’s shown in that equation there is that the partial derivative of revenue with respect to price is given by the expression on the right-hand side.
- The takeaway is that when the absolute value of elasticity is greater than 1, you have a behavior of revenue with respect to price change that is very different from the case when the absolute value of epsilon is less than 1.
- What happens in that case is that when you increase prices, demand decreases naturally.
- Correspondingly, if you were to decrease prices, the demand would increase and, interestingly, revenue would increase as well.
- Of course, it’s not always obvious that revenue will go the same way as demand does because the prices are different.
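The takeaway about |epsilon| above or below 1 can be checked directly on an assumed linear demand curve (illustrative numbers, not from the lecture):

```python
def revenue(demand, p):
    """Revenue is price times demand."""
    return p * demand(p)

demand = lambda p: 100 - 2 * p   # assumed linear demand curve

# At p = 10: y = 80, epsilon = -0.25, |epsilon| < 1 (inelastic):
# raising the price raises revenue.
print(revenue(demand, 11.0) > revenue(demand, 10.0))  # True
# At p = 40: y = 20, epsilon = -4, |epsilon| > 1 (elastic):
# cutting the price raises revenue.
print(revenue(demand, 39.0) > revenue(demand, 40.0))  # True
```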
- This is a special case of constant price elasticity of demand.
- So although in general elasticity is a local measure, here the demand depends on price with a value of elasticity that doesn't change as the demand or the price changes.
- The demand y is equal to a constant a divided by p, which stands for price, raised to the power of the absolute value of epsilon where epsilon stands for elasticity.
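A quick numerical check that this functional form really does have constant elasticity at every point (the parameters are assumed for illustration):

```python
def demand_ce(p, a=100.0, eps_abs=1.5):
    """Constant-elasticity demand: y = a / p^|epsilon|."""
    return a / p ** eps_abs

def point_elasticity(demand, p, dp=1e-6):
    """Point elasticity (dy/dp) * (p / y) via central difference."""
    y = demand(p)
    return (demand(p + dp) - demand(p - dp)) / (2 * dp) * p / y

# The elasticity is the same at every point on this curve:
print(round(point_elasticity(demand_ce, 2.0), 4))   # -1.5
print(round(point_elasticity(demand_ce, 10.0), 4))  # -1.5
```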
- In the picture on the left, each point indicates the price and the demand for DRAM in a particular year.
- For example, the more subscribers a communication or social media service has, the greater each subscriber's utility.
- That's denoted simply as the sum over j of wij, the incremental utility to person i of being able to communicate with person j, summed over all subscribers j to the service.
- So wij is the same for all j and depends only on i. The third assumption is that each individual i's incremental utility is not a point value.
- The first concept is the marginal customer: the customer who sits at the separating point between the subscribing set and the non-subscribing set in a population of, say, n people.
- So the figure at the bottom of the slide is a plot with f on the horizontal axis and the willingness to pay of the marginal customer, n times f times (1 minus f), on the vertical axis.
- f ranges from 0 to 1, as you might expect: when f is 0, there are no subscribers to the service; when f is 1, everybody in the population is a subscriber.
- The two interior solutions are denoted b and c; those are the only two solutions to the equation p equals n times f times (1 minus f). So from this figure one can infer that there are three equilibrium points: a, b, and c. Point a, corresponding to the origin, is an equilibrium because if there are no subscribers, there is no incentive for anyone else to become a subscriber.
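The interior equilibria can be computed directly from p = n * f * (1 - f); the population size and price below are illustrative assumptions:

```python
import math

def interior_equilibria(p, n):
    """Solve p = n * f * (1 - f) for f; f = 0 is always a third equilibrium.

    Returns [b, c]: the smaller root b is the unstable tipping point,
    the larger root c is the stable high-adoption equilibrium."""
    disc = 1.0 - 4.0 * p / n
    if disc < 0:
        return []   # price too high: only the zero equilibrium survives
    r = math.sqrt(disc)
    return [(1 - r) / 2, (1 + r) / 2]

# Illustrative numbers: population n = 100, price p = 16
print(interior_equilibria(16, 100))  # roughly [0.2, 0.8]
```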
- It's being able to communicate with friends that brings value to subscribers, not being able to call random people.
- A non-subscriber to the service at time n minus 1 becomes a subscriber at time n if k or more of their friends are subscribers at time n minus 1.
- You see that there is an initial period, which you can think of as a gestation period, followed by a very rapid increase in the growth of the subscriber set.
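The k-threshold dynamics described above can be simulated on a small, assumed friendship graph. The graph and seed below are made up; this toy example just shows the mechanics of the adoption rule:

```python
def diffuse(friends, seeds, k, steps=20):
    """Threshold adoption: a non-subscriber joins at step t if at least k
    of their friends were subscribers at step t - 1. Returns the size of
    the subscriber set over time."""
    subs = set(seeds)
    history = [len(subs)]
    for _ in range(steps):
        new = {i for i in friends
               if i not in subs and sum(f in subs for f in friends[i]) >= k}
        if not new:
            break
        subs |= new
        history.append(len(subs))
    return history

# A small assumed friendship graph (a chain), seeded with one subscriber:
friends = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(diffuse(friends, seeds={0}, k=1))  # [1, 2, 3, 4, 5]
```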
Week 3: Internet of Things > 3l IoT Economics 3 > 3l Video
- This holds for any price for the service. I'll be referring to the two examples that you've seen, the Rohlfs model and the Gersho-Mitra model; p was explicitly that price in the Rohlfs model.
- Of course, to do that you will have to incur a temporary operating loss, but then gradually raise the price to cover costs and maximize profits to take you up that inverted parabola that you saw in that earlier slide.
- A market is two-sided if the platform can affect the volume of transactions by charging more to one side of the market and reducing the price paid by the other side by an equal amount.
- One is pricing and the other was network externality.
- So what you see at the top is the picture of buyer’s platform and sellers.
- We've also added a buyer's fee, denoted A^B, and a seller's fee, denoted A^S. The superscripts B and S will refer to buyers and sellers respectively.
- So the buyer’s fee and seller’s fees are like membership fees to be able to work with this platform.
- The marginal cost to the platform owner for each member on either side is denoted by C, the number of members by capital N, and the utility per member by U. Each of these quantities has a buyer version and a seller version indicated by the superscript B or S. The model assumption here is that a single buyer's utility is given by b^B times N^S; remember that b^B is the benefit to a single buyer per seller.
- So b^B times N^S is the total benefit to a single buyer from all sellers.
- Because you have N^S as the number of sellers, the product gives you the total benefit to an individual buyer from the collection of sellers.
- So a single buyer's utility is that quantity minus the fee the buyer has to pay, A^B. That's the buyer's utility.
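To make the model concrete, here is a toy numerical version. The linear participation function, the parameters, and the grid search are all assumptions for illustration, not Armstrong's actual derivation. With the cross-side benefit to sellers much larger than the benefit to buyers, the profit-maximizing buyer fee comes out well below the seller fee:

```python
def participation(fee_b, fee_s, b_b=0.5, b_s=1.5, iters=500):
    """Fixed-point participation: buyer utility is b_b * N_S - fee_b,
    seller utility is b_s * N_B - fee_s, and each side's size is an
    assumed affine function of its utility (10 + utility, floored at 0)."""
    n_b = n_s = 1.0
    for _ in range(iters):
        n_b = max(0.0, 10.0 + b_b * n_s - fee_b)
        n_s = max(0.0, 10.0 + b_s * n_b - fee_s)
    return n_b, n_s

def profit(fee_b, fee_s, c_b=2.0, c_s=2.0):
    """Platform profit: (fee - cost) * members, summed over both sides."""
    n_b, n_s = participation(fee_b, fee_s)
    return (fee_b - c_b) * n_b + (fee_s - c_s) * n_s

# Grid-search the two fees. Buyers bring a large external benefit to
# sellers (b_s > b_b), so the best buyer fee lands below the seller fee:
fees = [i * 0.5 for i in range(30)]
best = max((profit(fb, fs), fb, fs) for fb in fees for fs in fees)
print(best[1] < best[2])  # True: buyers pay less, sellers pay more
```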
- Now, you think that as the utility increases, it would attract more buyers.
- The mapping from utilities to the number of buyers could even be a linear function.
- It could also be, say, a log function.
- This mapping from utility to the number of buyers is the function phi.
- The profit is represented quite simply in this line as the fee minus the cost, times the number of buyers, plus a similar quantity for the sellers.
- So the platform owner's problem is to maximize the quantity pi, the platform profit, with respect to the fees it sets, A^B and A^S for buyers and sellers.
- The profit-maximizing fee for buyers, A^B, is equal to the cost of providing service, C^B, adjusted downwards (it's a negative term) by the external benefit, b^S times N^S: the benefit brought to the set of all sellers from the presence of a single buyer.
- So that benefit works in favor of the buyer in reducing his or her membership fee.
- Continuing with Armstrong's model and the interpretation of the main result: the buyer's elasticity of demand, for a fixed level of sellers' participation, is given by the quantity denoted eta^B; the vertical bar followed by N^S indicates a fixed number of sellers.
- That quantity is given by the partial derivative of the number of buyers with respect to the buyer's fee, times the factor of the fee divided by the number of buyers.
- It turns out to be equal to phi prime of u^B divided by phi of u^B, times A^B. Now, using this result for the buyer's elasticity of demand and simply substituting it into the main result, you get an equation which is extremely influential.
- In words, the quantity on the left-hand side is the buyer’s markup.
- The equation is stating that the buyer’s markup is inversely proportional to the buyer’s elasticity of demand for a given level of seller’s participation.
- So why is the quantity on the left of the equation referred to as the buyer's markup? Well, the quantity in parentheses (the cost minus the external benefit times the number of sellers) is what's referred to as the first-best price.
- The quantity in the right is simply the inverse, 1 over the buyer’s elasticity of demand for a given level of seller’s participation.
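Collecting the verbal description of the result into one formula (notation as in the slides; this is my reconstruction of the equation being described, not a transcription of it):

```latex
% Buyer-side result of Armstrong's two-sided market model:
% the markup over the first-best price is the inverse elasticity,
% for a fixed level of sellers' participation N^S.
\frac{A^{B} - \left(C^{B} - b^{S} N^{S}\right)}{A^{B}}
  \;=\; \frac{1}{\eta^{B}\!\left(A^{B} \mid N^{S}\right)},
\qquad
\eta^{B}\!\left(A^{B} \mid N^{S}\right)
  \;=\; A^{B}\,\frac{\phi'\!\left(u^{B}\right)}{\phi\!\left(u^{B}\right)} .
```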
- In which case, of course, the buyers are going to be subsidized.
- That is to say, if buyers bring a lot of benefit to sellers, then the implication is that the buyer’s price, optimum price, goes down.
- It may also happen that the fee to the buyer is less than cost if the buyer’s elasticity of demand is high.
- These new functions are D, capital D, and lowercase n. They have, as usual, superscripts, either B or S depending on buyers or sellers.
- In the case of D, the argument is price and the number of members on the other side.
- You might be interested in a change in the number of buyers holding the seller price constant.
- Look for the sensitivity of the number of buyers with respect to the buyer’s price.
- So the partial derivative of capital D with respect to price is the buyer-side elasticity holding participation by sellers constant.
- The partial derivative of n^B with respect to P, the buyer's price, is the buyer-side elasticity holding the seller price constant, allowing participation to vary.
- It says that even if buyers are not particularly price sensitive, if externalities are strong in either direction, then the participation of buyers becomes highly sensitive to the price that they are charged.
- So capital D and the partial derivative of capital D with respect to the number of members, buyers and sellers, is measuring the externality across the platform.
- What it's pointing out, just as I said in words, is that even if the same-side elasticity is small, if the externality is large, the net effect of a change in price can be very dramatic and very surprising.
- This, of course, applies both for buyers and sellers.
- Even a small response by a buyer to a price change will trigger a response by sellers, which in turn will produce a response by buyers, and so on.