58 – Coordinated Smartphone 3D Capture

Taking the concepts of the last 2 days, I will take a step back.  Take all existing smartphone technology and simply create an app that does 5 things. 

1. It uses WiFi and/or Bluetooth to set up an exact timing sync between 2 or more “connected” phones. 
2. It uses the accelerometer of each device to get a near exact reference to where each phone’s camera is “looking.”
3. It coordinates the 2 or more phones to take a collective group picture.
4. As a combined session, each smartphone uploads the images to a centralized server, where software creates a realistic 3D image.
5. The centralized server then shares this 3D image with all participating phones.

(A techy note – how I envision step 2 happening is that once all phones are synced up, the software would direct the participants to place all 2+ phones on top of each other, oriented the same way, with particular phones in a particular place in the “stack.”  This way, when the phones were then picked up and moved into position for the “shot,” the accelerometers would all be used to provide their various approximate location data.  This is surprisingly accurate in today’s phones – and the central server’s software would be able to fine-tune the image from this initial data sampling.)
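For the curious, here is a minimal sketch of what the step 1 timing sync and the step 3 coordinated shot could look like under the hood. It assumes a plain UDP exchange between phones on the same WiFi network; the coordinator address, the reply format, and the trigger_camera() placeholder are my own illustrative choices, not any real app’s API.

```python
# A minimal sketch of the timing sync (step 1) plus the coordinated shot
# (step 3), assuming a plain UDP exchange between phones on the same
# WiFi network. The coordinator address, the reply format, and the
# trigger_camera() placeholder are illustrative only, not a real app's API.
import socket
import time

COORDINATOR_ADDR = ("192.168.1.10", 5005)  # hypothetical "lead" phone

def estimate_clock_offset(sock, samples=10):
    """NTP-style estimate of (coordinator clock - local clock)."""
    offsets = []
    for _ in range(samples):
        t_send = time.time()
        sock.sendto(b"ping", COORDINATOR_ADDR)
        data, _ = sock.recvfrom(64)              # coordinator replies with its clock
        t_recv = time.time()
        peer_time = float(data.decode())
        rtt = t_recv - t_send
        offsets.append(peer_time + rtt / 2 - t_recv)
    return sum(offsets) / len(offsets)

def wait_for_shutter(shutter_time_on_coordinator_clock, offset):
    """Block until the agreed shutter moment, then fire the camera."""
    target_local = shutter_time_on_coordinator_clock - offset
    while time.time() < target_local:
        time.sleep(0.001)
    # trigger_camera()  -- placeholder for the platform camera call

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
offset = estimate_clock_offset(sock)
print(f"estimated clock offset vs coordinator: {offset * 1000:.1f} ms")
```

With offsets in the low milliseconds, every phone in the session can be told a single shutter time on the coordinator’s clock and fire in step.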

This is beyond a cool idea.  A pretty limitless bit of technology.

I’m in for $1000 to make this a reality.

57 – 3D Imaging with your phone

From yesterday’s post, I will just consider the possibilities of having 4 HD cameras at the 4 corners of the back of your smartphone.  The first thing I can think of with this configuration is the ability to capture and quantify small objects in 3D.  Simply hold the phone within a certain distance of a small object and, sampling from all 4 cameras, the phone would easily be able to determine the 3D layout of whatever the object was.  Using existing technology combined in this way, such a device could easily find accurate sizes and quantities of whatever it was capturing.

Imagine the combination of such a “scanning” function with a 3D printing device, and you could literally capture the 3D image of a trinket on the street in Korea, and in moments be printing out an exact replica for you back home in New York.   The fact that it can indeed be incorporated into smartphones makes it a wild technological advancement when you consider the potential database of everyone’s captures.
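To make the sizing claim a little more concrete, here is a toy calculation of how two of the four corner cameras could measure an object: plain stereo depth from disparity, Z = f * B / d. The focal length, baseline, and pixel measurements below are invented numbers, not specs from any real phone.

```python
# Toy illustration of sizing an object with two of the four corner cameras:
# classic stereo depth from disparity, Z = f * B / d. All numbers are made up.
FOCAL_PX = 1400.0      # focal length in pixels (assumed from calibration)
BASELINE_M = 0.12      # distance between two corner cameras, metres

def depth_from_disparity(disparity_px):
    """Distance to a point seen by both cameras, from its pixel disparity."""
    return FOCAL_PX * BASELINE_M / disparity_px

def real_width(pixel_width, depth_m):
    """Convert an on-sensor width in pixels to a real-world width."""
    return pixel_width * depth_m / FOCAL_PX

d = depth_from_disparity(disparity_px=420.0)   # e.g. a trinket about 0.4 m away
print(f"depth: {d:.3f} m, width: {real_width(260.0, d) * 100:.1f} cm")
```

Repeat the same triangulation over thousands of matched points across all four cameras and you have the full 3D layout of the trinket, ready for a printer.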

Oh geez, my brain has me on a roll of thought, as tomorrow’s idea should demonstrate.  Let’s take it a step further!

56 – A Truly See Through Phone

OK, there are some ideas born from a need of great importance, which yield some substantial human improvement.  There are others which are just cool, and need to be done, just to be done.  Today’s idea comes after I observed some of the “cutting edge” apps available for smartphones these days.  Apps that give a virtual 3D view on the phone spawned the idea in my head that it would be very cool if the view on the phone included more than just the option for translucent windows and icons over a background image.  What I would like to see, at least as an option, would be a translucent view of those same windows and icons over what is actually behind the phone.  That’s right, you put your hand behind the phone, and you can clearly see your fingers, as well as whatever is behind the phone.  Then another option that immediately comes to mind is the “hands free” mode of view, where you would only see the background, and you would not see anything immediately behind the phone, like the user’s hand.

To accomplish this, you would need more than just a single camera.  You would need a grid of cameras, and some intelligent software to combine the images into one single image.  I realize that this would be considered wasteful.  There are, of course, other purposes that this “camera grid” could be used for, but staying on the point of today’s post, I will just acknowledge this feature wouldn’t be for everyone.  Just off the top of my head, I would suggest one HD camera at each of the 4 corners of the phone, and 10 or so lower quality cameras organized in a grid between them.  The 4 HD cameras would be the basis for the view, and the bulk of the background image would be derived from them.  The advantage of having 4 would be that in the “hands free” mode, where you did not want to see your hands in the image, you would be less likely to block the view with your hand.  The software would know from sampling all 4 of the images that there was a hand blocking one or more of the cameras, and it would grab the view from the corner or corners you were not blocking.
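A rough sketch of that “grab the unblocked corner” logic, assuming each corner camera’s current frame arrives as a NumPy array: a frame dominated by a hand a few millimetres away will differ sharply from the consensus of the other corners, so the software keeps only the frames that agree with the group. The threshold is an arbitrary number for illustration.

```python
# Pick the corner cameras that are not blocked by a hand: a blocked frame
# deviates sharply from the median of all four frames.
import numpy as np

def unblocked_frames(frames, threshold=40.0):
    """Return the indices of corner frames that agree with the group median."""
    stack = np.stack([f.astype(np.float32) for f in frames])   # shape (4, H, W)
    median = np.median(stack, axis=0)
    deviations = [np.abs(f - median).mean() for f in stack]
    return [i for i, d in enumerate(deviations) if d < threshold]

# Four fake 100x100 grayscale frames; camera 2 is "blocked" (near-black hand).
frames = [np.full((100, 100), 180, np.uint8) for _ in range(4)]
frames[2][:] = 20
print("usable corners:", unblocked_frames(frames))   # -> [0, 1, 3]
```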

Now, in the more realistic “hands included” mode, the display of your actual fingers would be included as part of that background image.  This would be done through coordination of the 4 HD cameras and the grid of “lesser” cameras.  The software would first use the 4 HD cameras to create an accurate image of what was behind the phone.  Then, all cameras would be sampled to determine exactly what, if anything, was blocking that view.  If you slide a finger directly behind the phone, the cameras will focus on it and, using comparative software, be able to create an exact image of what that finger would look like if the phone and LCD were actually a piece of glass that you were able to look through.

An additional neat function of this grid and software combination would be the ability to set your phone down nearly flat, directly onto a printed page of paper.   As you set the phone onto the paper, the image would be captured.  If you were to slide the phone onto another image, the act of sliding would allow the camera grid to effectively sample the page and gain an accurate image, even if certain parts of the page couldn’t be sampled at any given moment.   Tiny built-in LEDs would be used to light the close-up image enough for functional capture.  These LEDs would not need to be bright at all.   Indeed, you could set your phone onto a desk in complete darkness, and if your screen was on, you would see with perfect clarity, perhaps, the business card it was resting on.
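For the sliding-capture idea, here is a hedged sketch of just the registration step, using OpenCV feature matching to align each new close-up frame against the page image built so far. The function name, feature count, and thresholds are my own choices, and a real implementation would also need exposure blending and the LED lighting control described above.

```python
# Register a new close-up frame against the growing page mosaic using ORB
# features and a RANSAC homography. Warping/blending is left as a comment.
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def register_frame(page_so_far, new_frame):
    """Estimate the homography mapping new_frame onto the page image so far."""
    kp1, des1 = orb.detectAndCompute(page_so_far, None)
    kp2, des2 = orb.detectAndCompute(new_frame, None)
    if des1 is None or des2 is None:
        return None
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:50]
    if len(matches) < 8:
        return None
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # warp new_frame with H and paste it into the page mosaic
```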

Again, not a societal necessity, but a really cool visual on your phone.  For that, you need to see tomorrow’s post on what REALLY mind blowing stuff this technology would allow.

55 – Tiered & Temporary Control Of A Musician’s Social Networking Pages – Nov 14 2012

Quick, fast and easy.  You are a musician.  You appreciate the reach of social networks, but do not have the time to keep after accounts on several different sites.  On the other side of the aisle is the die-hard fan.  They would want nothing more than the ability to decorate, or update, the page of their favorite band. Any established act has many such fans.  The problem is, Facebook, MySpace (did I just mention the m-word?) and the like do not allow for this to be handled in this way.  There are tiered roles on Facebook, for example, but decoration of the page is near the top admin level (not that there is much decoration possible on a Facebook page anyway). 

The solution would be a decoration / content role that would be available for certain users.  The band could grant these users the role, and they would be able to add content and decorate the band page, and a small addition built into the site could appear at the lower corner of the screen: “XXX is the user responsible for the current page layout.”  This could be done on a month-by-month basis, perhaps, and could be set up as a reward for ultra fans of a particular musician.  The musician gets fresh content and an easy-to-manage system of encouragement for fans to participate further.  The fan gets a really cool opportunity, if he or she is interested.  The social networking site gets increased traffic as a result, so it would easily be worth building the option into the API.   Everyone wins.  
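On the data side this feature is tiny. Here is a hypothetical “decoration grant” record with a built-in monthly expiry; nothing here is any real site’s API, it just shows that the “fan of the month” idea enforces itself.

```python
# A hypothetical "decoration grant": who may decorate which page, and for
# which month. The grant simply stops being active when the month ends.
from dataclasses import dataclass
from datetime import date

@dataclass
class DecorationGrant:
    page_id: str
    fan_username: str
    month: str                      # e.g. "2012-11"

    def is_active(self, today=None) -> bool:
        """The grant only counts during its assigned month."""
        today = today or date.today()
        return self.month == today.strftime("%Y-%m")

grant = DecorationGrant("the-band-page", "ultrafan_99", "2012-11")
if grant.is_active(date(2012, 11, 14)):
    print(f"{grant.fan_username} is the user responsible for the current page layout")
```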

Look to Tunegrow to be the site offering this first in their 2013 launch, and in a dynamic and intuitive way.   🙂

54 – Bed Bug Free Travels – Nov 13 2012

This is a big deal for some folks, and surely some unpleasant stuff for the lot.  If there was a quick and effective way to keep the bloodsuckers from even being an issue, no matter where you were staying, I know some folks would buy in, but you would just have to prove it.  Perhaps you could prove that such a thing worked with a time-lapse infomercial, or just a time-lapse YouTube advertisement on the cheap.

Either way, it would go a little something like this.  You would bring a small travel bag, the size of a bowling ball bag, with you into the room.  Inside it would be a special sheet, a 6-inch-wide elastic/velcro belt, and a 110V plug-in adapter.  Upon entering your room you would just pull everything off of your bed and set it in the closet. You then take out the sheet, a California king size fitted sheet. This would fit a king and could also be used on a queen; it would just have extra material. After covering the bed with this special sheet, the hotel guest would then take out and apply the velcro belt. It is designed to wrap around the outside of the mattress, pressing against the flat outer face of the mattress. The sheet could have a velcro portion at various points to accommodate this belt and make applying it easy for one person to do. After the belt was applied it would be connected to the 110V plug-in, and that could then be plugged into the nearest outlet. The velcro belt would have a simple electric-blanket type of heating wire running through it. Bed bugs, if present, would not be able to tolerate the raised temperature (heat is what hotel chains typically use to drive the bastards out). So you could be certain that once this device was set up on the bed, you could lay atop the bed and no bed bug would disturb your sleep.

I did consider a power outage or a failure of the heating element. The solution for a heating element outage would be an alarm that would sound when it sensed element failure. Now concerning the power failure, I know it’s possible to lose power. The solution here would be a small battery-powered alarm built into the power adapter, such that if the power ever did go out for longer than 5 minutes the alarm would sound, and you wouldn’t worry about falling asleep and being attacked in your sleep if the power did go out.
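For what it’s worth, that failsafe logic is tiny. A sketch, assuming placeholder sensor callables for mains power, element current, and the beeper:

```python
# Toy failsafe loop: alarm immediately on element failure, or after a
# power outage longer than 5 minutes. The sensor callables are placeholders.
import time

POWER_OUT_LIMIT_S = 5 * 60

def monitor(mains_ok, element_ok, beep):
    power_lost_at = None
    while True:
        if not element_ok():
            beep()                                   # heating element failure
        if not mains_ok():
            power_lost_at = power_lost_at or time.time()
            if time.time() - power_lost_at > POWER_OUT_LIMIT_S:
                beep()                               # outage longer than 5 minutes
        else:
            power_lost_at = None
        time.sleep(1)
```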

This is not a product for everyone. This is a product for those who lose sleep at the idea of bed bugs and just the possibility of exposure to them.

Now, the sheets and blankets are a slippery slope, if you truly look thoroughly at the situation. You could easily put the sheets and blankets into the hotel dryer for 10 minutes and eliminate any would-be hangers-on, but you run into a problem when you fall asleep and a bit of blanket flops off the bed, providing a hypothetical ladder for some bloodthirsty traveller to navigate past your protection onto your carcass. Now, is it likely that they will be able to crawl past the heated belt area? No, but it is possible. The solution would be to tuck the linens into the belt so that they would be far less likely to ever contact the floor.

The pillow? Well look, a pillow is a troubling bedfellow. A big bag of material that your typical foul organism thrives in. I would recommend having a microfiber pillow cover anyway, on any pillow you use, whether it’s at home or in a hotel. Just include 4 large cases in the kit, and instead of a zipper at the end, you would want a ziplock-style seal.

Again, not for everyone, but someone asked me for a solution to this issue, and that would do it. I think it would be a viral video if you made a presentation including an underwear-clad maiden, time-lapsed on top of a bed which was specifically shown with magnified video to have been overrun with bed bugs. Then at the conclusion of the time lapse, you could show that none came near the heated belt region, the top region of the bed was clear of any, and it worked reliably.

There, I’m done with that topic. It’s giving me the creepy crawlies!

53 – A Refined Filter Is The Inevitable Result Of Startup Weekend – Nov 12 2012

So a friend of mine convinced me to attend a Startup Weekend – http://www.startupweekend.org . I don’t like talking in this forum, but suffice to say, in the minutes before the event actually kicked off, I had met a fellow participant, and while we were reading over the program documentation, I said to my new friend, “Why wait for the weekend? There are people in every city coming up with solutions for issues. There are developers living all over the country. There are investors looking to be part of the next cool product. Why are we limiting the process to location? This entire process could be automated on the web, and the connection wouldn’t depend on location.”

The first thing that came to mind was the inevitable cherry-picking of ideas that would occur if all the ideas from a startup weekend were to be posted online for people to scan through. No bueno. So the solution goes like this. Users would register for access, establish an account and a login, pay a fee, and enter a specific, moderated discussion room. Within this room would be only the participants during a particular time frame or session. The ideas could be submitted via an uploaded 1-minute video (which would reduce the dilemma of stage fright that hits some folks when they get a microphone handed to them and they are standing in front of 150 people). One at a time, the videos could be privately streamed to each participant’s screen, with buttons on the player making it easy to highlight, dismiss, or favorite certain ones. After the viewing, there could be a deliberation period, and then a vote would be electronically gathered from all participants.

The top 10% of the ideas could be announced in an automated report, with contact information, and team forming could begin from the group involved.
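A small sketch of the tallying step under this model, with invented pitch names: every participant scores each 1-minute video, and the top 10% (at least one) move on.

```python
# Tally electronically gathered votes and keep the top 10% of pitches.
import math
from collections import defaultdict

def top_ten_percent(votes):
    """votes: list of (pitch_id, score) pairs from all participants."""
    totals = defaultdict(int)
    for pitch_id, score in votes:
        totals[pitch_id] += score
    ranked = sorted(totals, key=totals.get, reverse=True)
    keep = max(1, math.ceil(len(ranked) * 0.10))
    return ranked[:keep]

votes = [("solar-kiosk", 3), ("dog-walk-app", 1), ("solar-kiosk", 2),
         ("3d-capture", 3), ("dog-walk-app", 2), ("3d-capture", 1)]
print(top_ten_percent(votes))   # -> ['solar-kiosk'] (3 pitches, so keep 1)
```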

Here’s a hugely valuable part of this model. During Startup Weekend, you do not have the exact team you want. You may need more developers, or a business developer, or a designer. There may only be one or two business developers in the building, and they are already on another team. With an online solution, you could build in a connection to a “freelance.com” type of site, where developers were standing by, for a fee or rate, and you could have an affordable addition to the team in a matter of a couple of hours.

Now, you have a predetermined amount of time to get your presentations put together; again, the end result is a 5 or 10 minute video presentation.

A couple of problems with Startup Weekend are that you have 5 minutes to pitch an unknown panel, who may not have any idea what you are pitching about, and in my case, you may not conceivably even be able to get them to comprehend the concept. It is a fact that the more informed the panel is, the better the decision will be. Specifically, if you had a 10 minute video, a website, a business plan, and some manner of video product demonstration, it would be more valuable than just a 5 minute presentation with a 3 minute Q&A.

Some other advantages to this model: you would be able to have focused events.  Medical, academic, entertainment, and other focused get-togethers would bring a better, more productive group of ideas, and likewise you would have a better, more aware panel to be selling the finished product to.

I won’t argue that it is really cool to do whiteboard brainstorming in person with your team, and it’s very cool to just disconnect from the normal grind and get lost in the process, in person. That being said, the advantages of a global monthly get-together being available to those who are interested far outweigh the limits geography places on the average participant.

I may pitch this at a Startup Weekend. 🙂

51 – Using 3D To Create New Images – Nov 10 2012

Starting with yesterday’s post, I’d like to take a look at something waiting for us, right around the development corner.   Simply put, you take any number of images of any one thing, for instance several pictures of our president here in the United States, load them into the app/software, and gain 3D perspective and function.  The sole purpose, in this particular case, would be to be able to create new 2D images.

You take 2 or more pictures of the president, from different perspectives, and load them into the software.  The software creates a 3D dataset for the image.  Then you pick your angle, click “render,” and you get a brand new image of the president (a rough sketch of the pipeline follows the list below) that satisfies the following criteria –

1. It is from a brand new vantage, not necessarily supplied by the source images.
2. It is as detailed as the original source images, and it would be humanly impossible to determine that it was “created.”
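Here is a very rough outline of the pipeline those two criteria imply. Every function in it is a placeholder of my own naming, not a real library; the point is only the order of operations: estimate where each photo was taken from, reconstruct the 3D dataset, then render the brand-new 2D view.

```python
# Placeholder pipeline: pose estimation -> 3D reconstruction -> new 2D render.
def estimate_camera_poses(images):
    """Placeholder: structure-from-motion would recover where each photo was taken."""
    return [None for _ in images]

def reconstruct_3d(images, poses):
    """Placeholder for dense multi-view reconstruction of the subject."""
    return {"images": images, "poses": poses}

def render(model, angle_degrees):
    """Placeholder: project the 3D dataset from the requested, never-photographed angle."""
    return f"new 2D image rendered at {angle_degrees} degrees"

def render_new_view(source_images, new_angle_degrees):
    poses = estimate_camera_poses(source_images)
    model = reconstruct_3d(source_images, poses)
    return render(model, new_angle_degrees)

print(render_new_view(["front.jpg", "three_quarter.jpg"], 35))
```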

This may hit your mind as a rather simple thing, but it has far-reaching application.  Consider any time you saw a well-photoshopped image of, perhaps, a famous actor in some awkward situation, and you couldn’t tell it wasn’t a real photo.  This would be a lot easier to get away with if you could simply create any 2D angle of that particular person’s face you wanted.  On a more troubling front, think of the value of photo evidence in criminal investigation if images can simply be created out of thin air.

50 – One Click Image Copyright Removal – Nov 9 2012

So there will be advancements I bring up that some will not be a fan of. Take any picture on the web. In its existing form it is, to some varying degree, copy protected. If you take the image and modify it by any digital method, it will retain either its original details (perhaps you run it through a color filter) or it will leave some digital evidence of its original self (perhaps you blur the image to some degree). To “lose” the copyright, you have to “lose” the image. This won’t always be the case.

Our minds take the group of details that we see in front of us at all times and are constantly trying to make associations. Take a person’s face, for instance. To our minds it is a collection of shapes that we are easily able to recognize as a particular person, or that of a stranger. Similarly, when we look at an artistic rendering of a particular person, we usually can see the similarities, and as long as the artwork is at all decent, we immediately know who the artist is emulating. An extreme example of this is political cartoon work. Certain important details are greatly exaggerated, and yet you immediately know who the character being portrayed is.

So now look at all those images out on the web. The technology currently exists that would allow for an algorithm-based process to be run against any image that would first determine what type of image it is, who or what is in the image, the original source of the image and its copyright status, and then simply remove that copyright by altering the image in any number of ways that retain the complete recognition of the original image, yet are completely unlinkable to that original work. Adobe has many methods of applying filters to images, yielding differing results, but as mentioned above, the process can easily be traced backwards, or it modifies the image to such a degree that it can’t be recognized.

How? Well, a book could be written of all the possibilities. Perhaps the software recognizes that the picture is a portrait. Most cameras can already recognize the faces in a picture. The more that can be determined about the image, the better and quicker the software can make a change that accomplishes the desired result. As I see it, there are 2 factors that give an image characteristics that are traceable. First, overlay. If you traced any image by hand onto a transparency, you would have a very distinct organization of points on both the original and the traced image that would tie the two together. Second is individual pixel characteristics. For instance, if you had two images of a forest, and you zoomed into one corner of the image and noticed the details of one particular tree branch were exactly the same, you would know you had a copy.

So if you wanted to change an image to remove evidence of its origin, you would have to address both of these factors. The easiest way to handle the “overlay” problem is the randomization of the overall image, specifically small details like the proportions of the various features of the face. Subtle changes to the proportions would yield a face that would still be recognized. I have experimented with this on a small scale, on the computer and by hand, doing portrait drawings. The brain accepts a wild amount of subtle changes to the overall picture before it starts to register the image as “distorted.” To handle the second issue, the detail recognition, you would simply randomize the pixels individually, or as a group.

This could be done currently, as easily as any Adobe graphics filter, but in future incarnations, as our software recognizes more details and is able to categorize them (such as looking at an image of a tree and recognizing branches and leaves), the inevitable result will be the computer being able to do things like take in a picture of a forest and recreate it by perhaps replacing all the pine trees with weeping willows, or instantly turning a portrait picture of a white man into a black man, and doing it accurately. The ramifications of such things are quite far reaching, and I will cover them more specifically in a future post.

This is far simpler than most would think, and a one-click solution would be pretty snazzy for folks who are just trying to have a particular visual and don’t want to get snagged using some random web photo that perhaps isn’t marked as copyrighted and turns out to be. True, there are people out there making a living selling photos online, and I am not in the business of diminishing others’ work or value. I’m just in the business of looking ahead, to a world where they, like the musician, need to structure their price and availability at a reasonable level. One that encourages sales and not piracy. Another way to say it is having people buy your product because they want to and not because they have to.

49 – Universally Encapsulated Media – Nov 8 2012

Very simple.  Every song, every video, every news story, comes in a capsule of sorts.  A label/tracker and an ad allocation would be combined with each individual song, video, or story, designed around the streaming model.  Each capsule of content could go out into the world freely and be placed into any legal player.  Preset ad choices would be assigned into the ad allocation, and would provide direction to the player, instead of the other way around.

Each time a capsule gets played, it reports back to a central database, which organizes the data for the content creator and the advertisers.
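Here is a minimal sketch of what a “capsule” could look like as a data structure, with a hypothetical reporting endpoint: the content, a tracker ID, and its ad allocation travel together, and every play phones home to the central database.

```python
# A minimal "capsule": content, tracker ID, and ad allocation travel together,
# and each play reports back to a central database. The endpoint is a placeholder.
import json
import urllib.request
from dataclasses import dataclass, field
from typing import Optional

REPORT_URL = "https://example.com/capsule/plays"   # hypothetical central server

@dataclass
class Capsule:
    tracker_id: str
    media_file: str
    ad_slots: list = field(default_factory=list)   # preset ad choices

    def report_play(self, player_id: str, ad_shown: Optional[str]) -> None:
        """Tell the central database this capsule was just played."""
        payload = json.dumps({"tracker": self.tracker_id,
                              "player": player_id,
                              "ad": ad_shown}).encode()
        req = urllib.request.Request(REPORT_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

song = Capsule("trk-0001", "new_single.mp3", ad_slots=["sponsor_a_preroll"])
# song.report_play("streaming-app-42", ad_shown="sponsor_a_preroll")
```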

A couple of things to mention.  Just because every bit of media comes with its own ad allocation doesn’t mean an ad has to play every time.  Paid stream services may not have any ads played at all.  The tracker just brings real accountability to the players. Currently the media goes out naked, and the content creators must simply trust that they are being paid properly and that the tracking and accounting is done correctly.  There are several agencies, ASCAP, BMI and SESAC, who are independents, supposedly representing the artists and ensuring everyone out there who is earning from their music is paying up the pennies to do so.

An interesting problem occurs with this model.  What about the DJ, mix tape, remix, or mashup of a song (any time a song is modified from the original)?  This would obviously take the song out of the capsule to whatever degree, making it a new work.  YES, it absolutely becomes a new work, and instead of foolishly trying to regulate and stifle such things, my model would embrace it and encourage it.  When a remixer makes a track, he or she immediately does one thing.  They put their name or stamp on it.  It is their original work, on top of the original.  Currently the music industry feels that they are just taking from and using that original work for their own benefit, but in reality it’s a two-part contribution that the industry needs to start encouraging, and I mean down to sharing some of the dividends.

The system needs to make it quick and easy for modifiers to identify and log their remixes with the system.  The database would be able to report in real time how many versions of the original were out there, and how they all were individually faring. The simple mind looks at this and is concerned with the immediate loss of a group of their plays and revenue becoming this “modified” variety, and the temptation is to say, “If I have to share a small percentage with these other people, then I lose money.”  The reality is, a large portion of those plays currently offer them no revenue, because they are underground and not tracked.  Additionally, the fact is, a portion of those plays are to the credit of the modifier, who has a big enough group listening to amount to some measure of plays.

Each time a new song comes out, there would be an immediate remix contest.  This only adds to the overall excitement for a new release anyway.  Modifiers would have a direct connection to the artist, which would only provide another opportunity for networking of talent and possible collaboration.

Those entities currently claiming to represent artists and gather up their funds could find a new line of work.  Human beings running around tracking who is playing what song.  Does anyone think that is a legitimate product in today’s environment?  With tracking built into the song, along with ad revenue preferences, and music recognition software that’s now standard fare in most cell phones out there, the computers can give not only a better, more accurate number, but do it more efficiently.  That means exponentially more data, being handled more efficiently and at less expense than the current structures out there.  That means more money to the content creators.

On the news story or video end, the capsule model’s value is even more apparent.  The ad in its entirety is built right in.  And don’t forget to think of this capsule as a living, breathing thing.  What do I mean?  Well, just like with the updating of software on your computer, these works will be stored locally as well as in the cloud.  The system used to play and store the works will also be set up to update the files with the current ads.  If the ad is only viable for a short period, or if it expires, it is automatically updated, and the players do not concern themselves with anything inside of “the capsule.”
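And a sketch of that “living capsule” refresh step, with invented field names and a placeholder ad-server call: on each sync, any expired ad allocation inside a locally stored capsule gets swapped for a current one.

```python
# Refresh expired ad slots inside locally stored capsules on each sync.
from datetime import date

def refresh_ads(capsules, fetch_current_ad, today=None):
    """Replace any expired ad slot; fetch_current_ad stands in for the ad server."""
    today = today or date.today()
    for capsule in capsules:
        capsule["ads"] = [ad if ad["expires"] >= today else fetch_current_ad(capsule["tracker"])
                          for ad in capsule["ads"]]
    return capsules

capsules = [{"tracker": "trk-0001",
             "ads": [{"name": "sponsor_a", "expires": date(2012, 11, 1)}]}]
fresh = refresh_ads(capsules,
                    lambda trk: {"name": "sponsor_b", "expires": date(2013, 1, 1)},
                    today=date(2012, 11, 8))
print(fresh[0]["ads"][0]["name"])   # -> sponsor_b
```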

Lastly, the thought does come into play, “well isn’t it easier for the players to just play the ads they want, and not worry about updating a library of content?”  I say no.  We make the updating process automated, so there is no work on their part in the ad mechanism.  They simply play the song.  The software built into the system handles all the ad decisions, tracking, and accountability.  This puts the power back into the hands of the content creators, in a fair and sustainable way.  This is a win.