Black Mirror Project: USB-C docking station proposal


#1

Hi BeliEVErs! We think we can collaborate on this particular phase of the EVE project with some key improvements to the Docking Station that fulfill the three-point goals Xinjie talked about yesterday.

If you don't know what I'm talking about, just take a peek at my presentation to the community to get to know me, and ask any questions about our project here or there.

So this year, for the "Anything Goes" round of the Hackaday Prize 2017, we are releasing a fully open-source hardware and software reference design, with BOM and schematics, for building an initial prototype of a modular system as a DIY kit that does the scanning, rendering and streaming of the scenes and people in front of the device.

As we went through the design process of selecting the right system architecture to make this device possible, we realized that the devices many of us already own (around 3.3 computing devices per person: laptop + tablet + smartphone + smartwatch?) already provide the computing and display capability to bring the augmented-reality side of the post-smartphone lifestyle within reach.

To bring that computing power together we ran simulations, and the only open-source, off-the-shelf and affordable hardware we found for a proof-of-concept setup with enough processing power for that heavy lifting was a combination of four Udoo X86 boards (or the MinnowBoard 3, coming next month) connected over Gigabit Ethernet links to a Snickerdoodle Black FPGA development board (which still hasn't fully shipped after two years) to control and sync all the displays and cameras, while satisfying the Hackaday contest's rules on licensing the work.

But as we planned our transition from design to validation, before doing any prototyping, news around the end of June this year about the Snickerdoodle project stopped us from working on the actual final hardware: they decided not to fabricate their GiggleBits dual-Gigabit-Ethernet accessory boards any more, and they changed the piSmasher baseboard to carry only two onboard Ethernet ports and all USB ports, instead of leaving the two FPGA I/O ports free for the two GiggleBits boards and keeping the piSmasher's onboard Ethernet port for the "WAN" connection.

We learned our lesson about relying on another crowdfunded project with distant release dates (as late as November 2017), because we don't want to tie our project's schedule to a supply-chain nightmare just to fulfill the Hackaday Prize Best Product requirement of a video of a working prototype by the final submission date on October 21.

So we wiped the whiteboard again and re-planned the whole system architecture as a plan B with no piSmasher baseboard at all. It turns out that the main contributor to our parallel-processing software framework, the AXIOM project, has received the first production run of their AXIOM board, with 4 USB-C ports connected to high-speed transceivers on the FPGA side of the SoC. We realized we need to design a USB-C docking-station hub around that kind of SoC, then go through the cross-compiling journey of making an app for every OS on every device that hooks up to the docking station, using our OmpSs library implementation plus other pieces such as our OpenSceneGraph-based distributed rendering cluster, to render the future interface of our Interactive Telepresence Service.
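For readers who haven't met OmpSs: it is a task-based parallel programming model from the Barcelona Supercomputing Center, where you annotate plain C/C++ functions with data-dependency pragmas and the runtime schedules the resulting tasks over the available cores or accelerators. Below is only a minimal sketch in OmpSs-2 style (the frame size, the per-pixel "work" and the build command are made up for illustration), not our actual pipeline code:

```cpp
// Minimal OmpSs-2-style sketch: each frame becomes a task the runtime can
// schedule in parallel; the in()/out() clauses describe what each task touches.
// Build (assumption): clang -fompss-2 frames.cpp -o frames
#include <cstdint>

constexpr long FRAME_PIXELS = 1920L * 1080L;   // illustrative frame size

#pragma oss task in(raw[0;FRAME_PIXELS]) out(processed[0;FRAME_PIXELS])
void process_frame(const uint8_t *raw, uint8_t *processed)
{
    for (long i = 0; i < FRAME_PIXELS; ++i)
        processed[i] = 255 - raw[i];           // placeholder per-pixel work
}

int main()
{
    static uint8_t raw[4][FRAME_PIXELS];       // four camera frames
    static uint8_t out[4][FRAME_PIXELS];

    for (int f = 0; f < 4; ++f)
        process_frame(raw[f], out[f]);         // each call spawns a task

    #pragma oss taskwait                       // wait for all frame tasks
    return 0;
}
```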

The main idea behind it is to make a BYOD hub for prosumers and solve their HIGH-BANDWIDTH NETWORK MANAGEMENT problem. Our solution tackles the inconvenient logistics and/or configuration nightmare of a BYOD scenario in scientific research, engineering simulation and multimedia/transmedia work, where professionals rely on high-bandwidth network transfers of data/footage to be stored/recorded/edited/transcoded/rendered locally or even streamed over the internet.

We are in the age of millennials who travel the world doing freelance work, where meetings happen mostly in co-working spaces rather than a traditional office, and where the "digital office/studio/film set" travels along with the entrepreneurs of this new digital-startup revolution. Most of them cannot bring their network infrastructure with them: a secure, convenient, high-bandwidth, portable collaborative setup without bottlenecks, one that untangles the mess of cables, power bricks and installation work at every location they arrive at.

After studying that market need we came up with a solution in the form of a PORTABLE and MODULAR BYOD hub with high-bandwidth links (USB 3.1 Gen 2 at 10 Gbps / Thunderbolt 3 at 40 Gbps), Power Delivery 2.0 charging, and dual-role Alt Mode carrying up to 10 Gigabit Ethernet plus DisplayPort/HDMI/SDI video signalling over one single cable per device (smartphones, app-based tablets, professional workstation laptops, convertible tablet PCs with pens "WINK eVe WINK", NAS devices, external graphics/rendering devices, etc.), applying some edge-computing concepts to the mix so that each device can contribute to the compute/storage pool.

We can do this because our R&D team of visual artists, human-machine-interface engineers, augmented/virtual-reality developers and hardware hackers arrived at a portable and modular system built around a state-of-the-art, low-power, all-programmable heterogeneous multi-processing SoC (Zynq UltraScale+ EV MPSoC), which gives us the ability to control and sync large amounts of data thanks to the high-speed transceivers on the FPGA side of the SoC.

With a quad-core ARM Cortex-A53 platform running at up to 1.5 GHz, combined with dual-core Cortex-R5 real-time processors, a Mali-400 MP2 graphics processing unit and an integrated H.264/H.265 video codec capable of simultaneous encode and decode at up to 4K×2K (60 fps), our firmware and software can handle all the networking, charging and video-signalling needs of those use cases with enough abstraction to stay hardware-architecture-agnostic and independent of the operating systems of the connected devices.

Like I said before, we are in "the areas of scientific research, engineering simulations and multimedia/transmedia professionals that rely on high-bandwidth network transfers of data/footage to be stored/recorded/edited/transcoded/rendered locally or even streamed over the internet to the cloud". We even have a case study in an Industry 4.0 scenario where machine-vision algorithms ran on a high-speed camera array for object recognition and counting of millions of parts, and for quality assurance of the assembled parts. Our software stack can run the machine-learning and deep-neural-network algorithms locally on our device and distribute the parallel processing load across the compute/render nodes connected over the high-speed links.
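To make that camera-array case concrete, here is only an illustrative sketch of the per-camera "count the parts in a frame" step; it assumes OpenCV (which this post does not actually name) and a simple threshold-and-contour approach, not our production vision pipeline. Distributing many such frames across nodes then becomes a scheduling problem like the OmpSs sketch above.

```cpp
// Illustrative part counting on a single camera frame (assumes OpenCV).
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::VideoCapture cam(0);                  // one camera of the array
    if (!cam.isOpened()) return 1;

    cv::Mat frame, gray, bin;
    cam >> frame;                             // grab one frame
    if (frame.empty()) return 1;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    // Otsu threshold separates parts from the background, then every
    // external contour is counted as one part.
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::cout << "parts in frame: " << contours.size() << std::endl;
    return 0;
}
```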

We are in conversations right now with the Cambrionix folks, who make some cool stackable OEM modules with 4 USB-C ports (up to 16 USB-C "one single cable" ports worth of power delivery, high-speed data transfer and video signalling), with web-based monitoring and management of every aspect of each port: https://cambrionix.com/products/powersync4-usb-type-c-power-delivery-oems/

For the SoC approach we have decided to go with the main manufacturer of the AXIOM board, www.seco.com, and build a baseboard for their SMARC 2.0 module, once they figure out what is happening with Congatec's SMARC 2.0 Apollo Lake modules and their implementation of the "USB 3.1 Gen 2" USB-C-ready specifications.

For the industrial design we have decided to go down the road of a "tablet mini PC" form factor like the Ockel Sirius A: https://www.indiegogo.com/projects/ockel-sirius-a-the-world-s-most-versatile-mini-pc-mobile-design--2#/ but not at a 6-inch scale; rather maybe 10 inches, with a couple of Cambrionix PowerSync4 modules.

That's the state of things right now. The objective of our project is to produce a hardware development kit, in the form of a folding, portable all-in-one PC / render farm / PowerWall / light-stage scanner / micro-CAVE system, economical and flexible enough to be acquired by institutions, universities, groups within the hacker/maker culture, or prosumers who want to experiment with these technologies.

Our open-source software development is based on OpenSceneGraph with Equalizer on top of a Chromium OS base, and we are polishing the system so we can release the code as soon as we have run our tests on the final hardware.
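As a rough idea of what that layer looks like: OpenSceneGraph holds and renders the scene graph, while Equalizer takes care of splitting the rendering across the nodes and displays described in its configuration. The snippet below is just a minimal single-window OpenSceneGraph viewer of the kind used for local testing (the model path is only an example); it is not the Equalizer-distributed code itself.

```cpp
// Minimal OpenSceneGraph viewer; the same scene graph is what the
// Equalizer layer would later render across several nodes/displays.
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

int main(int argc, char** argv)
{
    // Load any model supported by the OSG plugins (path is illustrative).
    osg::ref_ptr<osg::Node> scene =
        osgDB::readNodeFile(argc > 1 ? argv[1] : "cow.osgt");
    if (!scene) return 1;

    osgViewer::Viewer viewer;        // single window, for local testing
    viewer.setSceneData(scene.get());
    return viewer.run();             // render loop until the window closes
}
```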

Soon we will travel to Montevideo, Uruguay for a few months to go through the next stages of hardware development and the next prototypes with the people of SinergiaTech, the first and only hardware accelerator in Latin America. They are interested in developing everything with us all the way through this venture, on an ultra-fast company-builder track, where we also get the chance to collaborate and work with Alvaro Cassinelli, who has been our mentor from the beginning.

If all goes well, at the end of the year we will start a crowdfunding campaign on Crowd Supply to make the kits at a larger scale, as we are also competing in the final stages of the Hackaday Prize 2017.

We hope you will collaborate with us in this exciting year ahead for augmented- and mixed-reality technologies.

Maximino Reyes
CEO, CTO.
Transversal Dimensions LLC
+584127202601
@BetaMax @Telegram


[Poll] How about make a Eve brand docking station/hub?
#2

Wow, that's a long post… Sorry, but… I'm not gonna bother to read it all… Could you please extract the main points? I'm sure most people here will just come, see how long the text is and say "meh, maybe later…", so it would be awesome to have the idea outlined at the front of the post to give the general sense of what it means. Then interested people could read the rest of your post if they want :slight_smile:


#3

I'm working on a "5-second understanding" infographic poster for the weekend, along with the rest of the documentation for the Hackaday project profile, but you are right: only now do I see how long the post turned out. Just like @Attiq, I've been working on this idea for about two years, and the docking-station idea for the EVE just blew my mind.


#4

I'm totally with pauli. I did see that text and started reading. But after the first (very long) sentences, with lots of technical words I don't know (I'm not a native speaker), I stopped and thought: "maybe later". If possible, please try to cut down your sentences and use simple words, because lots of community members are neither native speakers nor technical engineers.


#5

Okay, I just spent the last 40 minutes or so reading about the concept and here's what I have gleaned so far (BTW you need to try to explain things in layman's terms). Let me know if I'm wrong.

It's a desk-based VR 'window'. Theoretically it works much like a VR head-mounted display, but it isn't head-mounted; it sits on a desk much like a monitor (it uses six displays which aren't 3D-capable from what I can tell, which is weird; also I don't get why there are 6 small displays instead of one big one).

The setup has a plethora of eye/face-tracking cameras that detect physical presence and alter the on-screen content accordingly (again, I don't get why there aren't 3D displays). In effect the multiple screens act as a portal/window/viewport/mirror, dynamically adjusting as the user moves.

The basic premise is that this is a better alternative to HMDs; not sure why.

It's very similar to the Microsoft Magic Window project that Stevie Bathiche has been working on for years: http://www.independent.co.uk/life-style/gadgets-and-tech/features/the-future-according-to-microsoft-the-magic-window-revolutionizes-video-chat-8761310.html

The main problem with this idea is that it only makes sense if the display is huge (a small display would give a tiny field of view, and a large wall-sized display would create the same field of view as an HMD). Not to mention that it would be a poor experience with current display tech, and an eye-wateringly expensive one with the displays Microsoft is using (waveguides/light fields and so on).

So now that I have a rough idea of what it is (I think), can you please explain in layman's terms why it is? What's the purpose of such a device? Why is it any better than a nice monitor?


#6

Ok, as a techy that goes quite deep into this stuff, I agree with @Attiq that it’s pretty unclear WHAT you want to do.

You write about the story, but it never becomes clear WHAT it is that 'Black Mirror' should be or become. I'd propose you make it a bit clearer and structure it the following way:

WHAT is Black Mirror? (2-3 sentences, KISS principle (Keep it simple, stupid))
WHY is it good? (Maybe a bit more, 4-5 sentences most - again KISS)
Technical stuff (Here you can go a bit into your technical stuff, however don't just throw specs at people - if I'm not involved in your project I will likely not care that you connected 4 boards via USB-C to an FPGA - again KISS.)


#7

We like, and we all need, the input of creative minds in this community.
We surely want and need to understand what creative minds are thinking / working on :thinking:
So: please help our less creative minds to understand it all :relaxed:
Feel welcome, because you are!


#8

Let me remind you: Eve is the company. You’re not making a dock for the company, you’re making one for their tablet…


#9

You are right, I'm not a native English speaker, which is why the sentences and paragraphs can come out "trans-literally" longer than in plain, non-technical words. I will edit the main post following your suggestions to match the audience and clarify the objective of the project right at the beginning. Maybe adding some images will help a lot. Thanks for being supportive.


#10

Great to have you here. Thanks for your understanding!!!


#11

Wow, so I started reading, re-reading, ended up skimming the OP and felt really dumb and old and out of my depth technically.

Then I read the other posts and realized with relief that I was not alone :slight_smile:

Thanks to @Attiq as ever for the translation, look forward to seeing how this pans out!


#12

Maybe I was confused about the whole purpose of this community. I have only been looking around the forums since September and have read barely half a dozen posts, mostly on themes that caught my eye and crossed with my interests, without consciously forming a picture of the whole social movement behind it, like the "Pyramid Flipper" concept I came across at some point, or how open it is to an audience so diverse in technical levels, user experiences and use cases.

We do have a company behind this, you are right, but only for the liability and legal issues of being a spin-off from past research findings, with an open-source business model like Aleph Objects, Inc. of the LulzBot 3D printer. Our main goal is to make an ecosystem of modular devices and a software stack that carries telepresence concepts across the line between the virtual and the real world; that's why we chose the name "Transversal Dimensions".

The "dock" unit is just one part of a system architecture that will work with any USB-C device following the USB-IF specification, not just the V tablet from Eve the company. We have a couple of prototypes of our own 3D light-field display technologies that will appear in our next "tablet" device, but not in this phase of the project. I personally think you are the best match for a community we can walk along with, validating some of our common issues and benefiting each other on that road.


#13

Hmmm I’m not sure what to make of this.

I applaud your enthusiasm and the passion behind your product. But I can't help feeling that: A) you are using Eve's forum to advance your own company/products, which I guess is inevitable but uncalled for; B) you are using this community to advertise your own product/technologies, which has happened before and which we can't abide.

If you feel you have a business/collaboration proposal, please make it to the @team in private; I'm sure they would like to hear you out.

This forum is a community of enthusiasts who want to help Eve make cool devices and change the status quo. We would of course appreciate all input in that endeavour.

As for the concept you posted, I will assume you mean for Eve to replicate such a setup in their Donald Dock project. So, concerning the Black Mirror concept: what is it? And why should we consider including the setup in the Donald Dock?


#14

Think about the Razer Project Valerie approach,

where each display has its own processor, storage, memory, cameras and high-speed connector; separately, each one might just be a simple tablet, but linked together they form a "distributed processing cluster". The whole thing then becomes a portable three-sided vanity smart mirror where the interface is gesture-based, like a "Minority Report" user interface.

The six-display configuration will be the initial top setup, because that is the limit in terms of Thunderbolt 3 bandwidth per docking station. If the client needs a bigger display, they can just use another type of display, or expand the network with another docking station in series.

After you wrote about "6" V tablet devices I would like to correct myself: there will be just 4 "V"-tablet-like devices in a six-display configuration, because the two central-column displays will be something like a 12.3" Dell Venue portrait-tablet-style device.

That central device will have pogo-pin connectors on each side to attach the "V"-tablet-like devices, the same way you attach your current "V" keyboard, at a slant, so that the angle between displays tucks behind the ultra-narrow bezel of the central display, like on the Razer Project Valerie.


#15

Thanks for explaining it simply :slight_smile:

So essentially it’s a multiple tablet cluster?

What is the advantage of this over a standard monitor setup, or even the Valerie concept?

And how does augmented reality figure in?

Also, won't this be immensely expensive?


#16

For the final display configuration, yes, it will be like a multiple-tablet cluster for the Black Mirror Project concept alone; but our initial proposal, for just this "smart" docking station on its own, is a BYOD high-speed hub.

With it, a group of people can carry their high-speed wired network infrastructure with them, and because this group of co-workers will already have their own convertible tablet devices, those add up to the "cluster" configuration. Or, without snapping any devices together side by side, they can work with their USB-C / Thunderbolt 3 laptops at the same table, with a central resource-sharing "router" with attachable storage and a common big display monitor or LCD TV.

Another example of the "same conference table with multiple devices connected" scenario would be the SlidenJoy approach of multiple screens, which shares only VISUAL information, with no context or interactivity around it.

On the interface side of things, and regarding your question about how augmented reality figures in: we are working with our team's AR/VR video-game developer to turn the interface into a portal that immerses the co-workers in a 3D shared space, where each display of every tablet, laptop or smartphone (or our portrait light-field tablet device) connected to our dock becomes a window into that shared space. It will also be possible to connect to other docks in other places through our cloud service.

Some people have already advanced interface proposals, like the Eco Android app from LeafLabs (the same people behind the Google Project Ara modular phones) for the ready-to-market Google Jamboard:

https://www.leaflabs.com/eco

http://www.sharingspaces.io/


#17

No offense; while I think the proposal is interesting, it is VERY niche. I don't think it's worthwhile for a company to spend so much time on R&D to cater to a few people. That's my take on it. But I'm still confused as to whether you're asking Eve to build this "dock" or whether you're building it yourselves.


#18

Like I was telling pauli before, at the time of my first post I wasn't even aware that Eve the company was a separate thing from the community, which is formed of official backers of the initial V tablet campaign. Maybe the "creators" badge confused me; now I know that you are only the "suggesters", or the future user base of the device, playing a validating role for Eve the company, and only Eve the company has the final decision on all the details. Correct me if I'm getting the whole thing wrong. I think it's becoming clearer to me with this statement:

I myself have long been involved in Linux and open-source communities with a more horizontal organization and more open-minded decision-making methodologies, but maybe that is not the case here anyway. From now on I will read more of the other topics on the forums to fully understand the mechanics and social dynamics here.

That's why the title says it is a proposal, not a request to have an acting role in decision making, if you will; just another path on the broad map, a kind of friendly suggestion, since we have experience as systems integrators with much more complex architectures than a simple dock, and have already been through the jumps, falls and obstacles it takes to understand the end-user market of a system very similar to what they are looking for with the Donald Dock.


#19

We are building the dock ourselves, or maybe our "niche" dock, as you say. That's why we are travelling to Montevideo, Uruguay: we are negotiating with the SinergiaTech hardware accelerator an initial 6-month residency with seed funding to build both final prototypes: the BYOD hub or "niche" dock, and the central portrait-tablet-like device (which together work like a giant Echo Show sort of device).

Apart from being open with the Eve company and the V community, don't forget that our project goal is to publicly develop an OPEN SOURCE HARDWARE REFERENCE DESIGN for this kind of docking station with edge-computing capabilities. The Eve company can be part of the specification design process, or simply take whatever it likes from the whole concept design.

About the niche approach: you may be misreading the current reports on upcoming technology trends. For the next two years alone, hardware startups are being pushed towards NICHE markets rather than mass-produced consumer products, and for 2018 edge computing is named as the next key technology, bringing IoT, 5G, blockchain, deep learning, etc. into real products and services with more mature business plans at a solid pace.


#20

Okay, thanks for the clarification, I think we're on the same page now :slight_smile: yes, Eve and the community are 'separate' so to speak, but Eve and the community are inseparable insofar as product design 'suggesting' goes.

Everyone here is very friendly and open-minded. I would just say to use normal casual English and not technical English, dumb it down for us :slight_smile:. We would love to hear your suggestions for future Eve products, that's why this community exists.

Can you please clarify what you are proposing? Is it a tablet cluster with augmented-reality capabilities?

What do you mean by edge computing? Is this your method of creating a cluster using the physical edge of, say, a tablet?