TechCrunch has a post on Peep Wireless and their CES announcement of a new P2P wireless voice call system. I think going for a voice network over WiFi is an admirable goal, but it will no doubt face challenges. That said, there is no way to get things like this going without giving it a try.
You cannot account for human behavior when it comes to stuff like this. I remember in the early 90's when people thought multiplayer internet gaming would never work because of the latency of the net. Well, what they did not count on was that folks would simply stay away from game servers that were too laggy. Same thing here: if the UI gives you cues that let you know when you can or can't use the P2P voice, you'll adjust your behavior if there is a significant payoff to using the system. There have to be enough goodies locked up in a system like this to make it worth hanging in when its performance degrades.
I would like to see a small step before going full voice: moving short messages and images between phones in a store-and-forward mode, where each client/server device accepts files and forwards some of its own. It reminds me of epizoochory (really? yep, really), the dispersal of seeds by hitching a ride on animals. Imagine all the interesting stuff that would drift around the planet (just encrypt what you want private) when passed by proximity, as phones make and break connections in freeway traffic, at the mall, etc. What if your local venue could not only check you in, but could tag you with a file that spreads to your friends just by your being near them?
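The seed-dispersal idea above can be sketched as a toy simulation. Everything here is an illustrative assumption, not any real Peep Wireless API: each phone carries a bounded stash of files for others, and a brief proximity encounter copies over anything the other phone hasn't seen yet.

```python
class Phone:
    """Toy store-and-forward node; names and capacity are hypothetical."""

    def __init__(self, owner, capacity=10):
        self.owner = owner
        self.capacity = capacity   # max files carried on behalf of others
        self.carried = {}          # file_id -> payload bytes

    def encounter(self, other):
        """On a brief proximity connection, hand the other phone a copy
        of anything it hasn't seen, like a seed hitching a ride."""
        for file_id, payload in list(self.carried.items()):
            if file_id not in other.carried and len(other.carried) < other.capacity:
                other.carried[file_id] = payload

# Two phones pass each other in freeway traffic; exchange goes both ways.
a, b = Phone("alice"), Phone("bob")
a.carried["photo-1"] = b"...encrypted bytes..."
a.encounter(b)
b.encounter(a)
```

Run enough random encounters like this across a population of phones and a file diffuses outward from its origin with no infrastructure at all, which is the whole appeal.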
NFC (Near Field Communication) will enable our phones to talk with things such as gas pumps, vending machines, and cash registers: basically anything you might want to transfer money to.
NFC could also allow our phones to interact in new ways with old objects, like, say, a door lock. You would hold your phone close to the lock while turning the knob. An electromechanical power circuit converts that turning force into enough energy for about 300 milliseconds, or roughly 1/3 of a second, of processing time. A low-power microcontroller within the lock accesses a connected NFC chip containing the locked/unlocked status of the lock. This NFC chip would receive its power over the air through the short-range RF interface with the phone's corresponding NFC device, relaying the unlock code to the lock's microcontroller. The balance of the doorknob turning force would then be used to mechanically move the bolt, opening the door.
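A back-of-envelope sketch of that unlock sequence, to show the energy budget works out. All of the numbers here are my own illustrative assumptions (a 2 mA microcontroller at 3 V, the 300 ms window from above), not measurements from real hardware:

```python
MCU_CURRENT_A = 0.002   # assumed low-power microcontroller draw: 2 mA
SUPPLY_V = 3.0          # assumed supply voltage
BUDGET_S = 0.3          # ~300 ms of processing per knob turn

def try_unlock(code_from_phone, stored_code, harvested_joules):
    """Hypothetical firmware decision for one knob turn."""
    # Energy needed to keep the MCU alive for the whole 300 ms window.
    mcu_joules = MCU_CURRENT_A * SUPPLY_V * BUDGET_S   # 1.8 mJ
    if harvested_joules < mcu_joules:
        return "insufficient turn force"
    # The NFC chip itself is field-powered by the phone, so it costs the
    # lock nothing; the MCU just checks the relayed unlock code.
    if code_from_phone == stored_code:
        return "bolt released"   # remaining torque moves the bolt
    return "locked"

print(try_unlock("7f3a", "7f3a", harvested_joules=0.005))
```

The point of the arithmetic is that a knob turn yielding even a few millijoules comfortably covers the compute, leaving the mechanical work of the bolt to raw torque.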
An article over at TechCrunch by MG Siegler delves into Microsoft's approach to Phone 7 and how updates will deploy. Microsoft got the control they wanted with constrained specs for manufacturers, but this also places the burden on them for tactical leadership in a way WinMobile never did.
MSFT will have to strike the balance between new features and filling in missing pieces by pulling actionable signal from all the market noise, not easy to do when you are not actually building devices yourself. The timing for something like cut/paste has to be balanced against a differentiating new feature. Android does this with many eyes and many ears: a cross-manufacturer, continuous snap-to-OS-tip revision grind that ain't pretty. The end result is that consumers find, at their semi-annual refresh window, the top couple of Android fragments that are actually competitive with iOS.
MSFT has deep pockets and can spend time building up the OS feature set, but really has to break new ground to thrust forward and catch up in ways not yet seen from them in mobile.
Robert Scoble has an interesting post on content curation and the tools needed to let users more easily package stuff, both in real time and in a meaningful way. It inspired me to think about how one could visualize relationships between content atoms (see Robert's link above). At the risk of stretching Robert's chemistry analogy, you could consider bonds between various content atoms to exist at different energy levels, with tweets being more real-time, at higher energy levels, versus blog posts at lower energy levels, not nearly as timely.
The browser depicted above would visualize content atoms and molecules, and the relationships between them, in real time. The colored blocks represent different atoms, each of which would receive a share of web crawler time that helps determine its attraction and placement relative to other blocks. The bonds formed are based on location, time, and subject, along with direct user drag and drop.
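One way to make the bonding concrete is a scoring function over the three signals named above. The weights, the field names, and the idea that time proximity maps to bond "energy" (tweets hot, old blog posts cool) are all my assumptions for illustration:

```python
def bond_strength(atom_a, atom_b):
    """Hypothetical bond score between two content atoms, built from
    the three signals above: subject, location, and time."""
    score = 0.0
    if atom_a["subject"] == atom_b["subject"]:
        score += 0.5                       # shared subject: strongest pull
    if atom_a["location"] == atom_b["location"]:
        score += 0.3                       # shared location
    # Closer in time = higher energy = tighter bond.
    hours_apart = abs(atom_a["hour"] - atom_b["hour"])
    score += 0.2 / (1 + hours_apart)
    return score

tweet = {"subject": "CES", "location": "Vegas", "hour": 14}
post  = {"subject": "CES", "location": "Vegas", "hour": 38}
print(round(bond_strength(tweet, post), 3))   # prints 0.808
```

A layout engine could then treat these scores as spring strengths, pulling strongly bonded atoms together on screen and letting stale pairs drift apart.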
I'd love to someday use Kinect (if Microsoft does the right thing and rolls it into Windows) motion capture of my hands and fingers to visually mold the filter parameters going into Datasift. The output visual feedback from Sand would change in real time as liquid connections are made and broken between the constantly evolving thoughts and activities of others out there. Adjacent atoms in the display might slide apart as attention cools to black on a given subject, only to become attracted to other neighboring subject atoms associated with the next wrinkle in newsflow, stoking them red hot.