Black Mirror Project: USB-C docking station proposal


Yes, that is a simplified way to understand it, but it only applies to the Dock + Central Tablet setup; for the Dock + Laptops or even Dock + Smartphones setups it is not quite right. In the first one we are looking for a unified display surface, with the array of cameras set up at specific angles and locations for scan quality and scene reconstruction.

But the other setups will follow, built on the same base for processing, sync and display: an evolutionary Multi-Display Experience like we have now in digital or streaming TV broadcasting, but in a coherent shared 3D virtual space for coworking, playing, or collaborating, just like VR apps with remote avatars such as AltspaceVR, or the upcoming JanusVR streaming service.


Okay, so in the first/basic iteration it's a

Which I take to mean something that resembles a monitor with tracking cameras.

The second more advanced iteration would basically be a mobile version of the first (a tablet or laptop or maybe smartphone) but with clustering capabilities and a base dock for all of them to dock into?

Am I correct so far?

Also please define edge computing because I feel it is key to your proposal.


I didn’t say anything about your company and frankly I couldn’t care less about it. I just said that Eve is a company and you said you wanted to make a dock “for the Eve”. I wanted to correct you, because first of all, “the” is not used with company names, and second, you probably wanted to say “for the V”, because you generally make accessories for a specific product, not for a company. I don’t even know what a dock for a company would look like… Would it connect to the office building?


Edge computing is a method of optimising cloud computing systems by performing data processing at the edge of the network, near the source of the data. This reduces the communications bandwidth needed between sensors and the central datacenter by performing analytics and knowledge generation at or near the source of the data. This approach requires leveraging resources that may not be continuously connected to a network such as laptops, smartphones, tablets and sensors.

Mobile Edge Computing is an emerging technology that provides cloud and IT services in close proximity to mobile subscribers. Traditionally, telecom network operators perform traffic flow control (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real-time radio access network information. Mobile and Internet of Things devices offload computation for compute-intensive applications, such as image processing and mobile gaming, to leverage Mobile Edge Computing services.

From the point of view of computing resources, current-generation tablet SoCs average around six CPU cores; take for example the Qualcomm Snapdragon 835, or Google's OP initiative for processors like the hexa-core Rockchip OP1 in the Samsung Chromebook Plus. With a minimum of three (3) tablets per setup, plus the 6 CPU cores on the docking station, we can reach a count of 24 CPU cores per setup, without even taking the GPUs into account.
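As a quick sanity check on the arithmetic above (the per-SoC core count is taken from the figures in this post, not measured):

```python
# Back-of-the-envelope core count for the proposed setup, using the
# figures above: 6 CPU cores per tablet SoC and 6 on the dock itself.
CORES_PER_TABLET = 6
CORES_ON_DOCK = 6
TABLETS_PER_SETUP = 3

total_cores = TABLETS_PER_SETUP * CORES_PER_TABLET + CORES_ON_DOCK
print(total_cores)  # 24 cores per setup, GPUs not counted
```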

We are taking the road toward Cyber-Physical Systems, just like our parent codebase project, the Axiom Project:


:rofl::joy::sweat_smile: Yes, you are right, that sounds strange even to me now that you explain it like that, but thanks for correcting me. I will need to polish my plain English and vocabulary even more. NO, we're not going to hook up a Black Mirror to every building :rofl:, not even EVE's.

Our docking solution will be compatible with "the V" and other devices as well.


Okay, I've done a bit of research, and as far as I can tell edge computing is just an umbrella marketing term, just like the 'cloud'. It's just a catch-all term for all network-based computing where the computation occurs on the node (edge) rather than on a centralized server/datacentre (as with the 'cloud'). This isn't anything new or special (neither is the 'cloud'). Both centralized and decentralized computing have existed for decades. The only difference now is that a lot more crap can do 'computation' and connect to a network, namely the plethora of IoT crap (another silly marketing term; IoT devices have existed for decades).

Anyway, going back to your point. You propose creating a computing cluster/mesh/grid that dynamically pools and diverts hardware resources to meet client-side software requirements. Which OS do you propose will do this? The only operating system I know of that is capable of this is Azure Fabric OS, which took hundreds of millions of dollars and decades to develop. Then which apps do you propose will properly scale on such an operating system? As far as I know, such apps do not exist, not even on Azure. The only things that scale dynamically are the infrastructure layer (the Azure fabric layer built by Microsoft) and the service layer (Azure services like databases and virtual machines, etc.). Most client apps can't even use all CPU cores, and rarely can they utilize multiple GPUs properly. How do you propose an app will run across multiple devices with separate CPUs, RAM pools, ROM pools and GPUs? Surely you would have to rethink how client apps work from the ground up?


Thanks @Attiq for taking the time to do that research. It's nice to have people here on the same wavelength, and it's clear you have done your homework.

It is awful that I need to use those marketing terms to keep the story short, but as you may have read in the project details on my Hackaday profile, we are against all those corporate strategic decisions that cap technological progress for the rest of the people who would benefit as end users or as technology SMEs, just like here in the EVE Community.

It is no secret that a HoloLens kit is not cheap and that joining the partner developer program is not easy; the same goes for all the top full-featured VR hardware, like the Oculus DK + Constellation and the HTC Vive + Lighthouse, and the bill gets even bulkier once you add a VR-ready desktop PC.
Just like the Pyramid Flipper concept you have here, we are looking to make those technologies more convenient and cheaper for mass adoption, and open to suggestions from the future user base.

Someone asked me the other day whether he would be able to use our technology on his Samsung Gear VR, and then you realize that this kind of device may be the most affordable and widely adopted for VR content, but it will ALWAYS BE AN INDIVIDUAL EXPERIENCE, with a screen blocking your view of the surroundings. Some solutions have been developed for that problem, like adding cameras to the HMD for inside-out tracking so the VR app is aware of the obstacles and dimensions of the room, as in Intel's Project Alloy.

Like you say, much of the current hype is for these "new" technologies, and first of all those technologies are not even new, some older than others; even the core concept of an HMD for Virtual Reality is itself older than computer graphics. As you say, those technologies have a long way of maturing behind them and millions invested over time, but isn't that progress? As companies invest in their products and technologies they need to retain the ROI for as long as their patents allow, but another accelerator for the adoption of new technologies is standardization efforts across industries.

And that, my friend, Open Standards, is the key to the future of interoperability and integration of technologies. We are building our software and hardware stack on open standards that guarantee future-proof operability and compatibility with a diverse ecosystem of devices. With all the work we are getting close to testing in Uruguay, we are following the certification path to become an OSVR industrial partner, and our hardware will be on the list in a completely different group from HMDs.

This strategy of staying close to standards runs from the topmost abstraction level, where OSVR sits, down to the physical level with connectors (USB-C, USB 3.1 Gen 2, Power Delivery 2.0, etc.) and System-on-Modules like the SMARC 2.0 standard on the hardware side. On the software side we are addressing the issue you pointed out about parallelism and multi-core sharing with an asynchronous heterogeneous parallel programming model called OmpSs, which works on Intel x86 and ARM architectures with support for CUDA on Nvidia GPUs and OpenCL on Mali GPUs.
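OmpSs itself is a pragma-annotated C/C++ model; purely as a loose illustration, the underlying idea (a runtime dispatching independent chunks of work to whichever worker is free) can be sketched in plain Python:

```python
# Loose analogy of task-based scheduling: the pool plays the role of
# the heterogeneous runtime, dispatching data-parallel tasks to free
# workers (which, in the real system, could be CPU cores or GPUs).
from concurrent.futures import ThreadPoolExecutor

def kernel(chunk):
    # stand-in for a data-parallel kernel that could run on any device
    return [x * 2 for x in chunk]

data = list(range(12))
chunks = [data[i:i + 4] for i in range(0, len(data), 4)]

with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(kernel, chunks))  # map() preserves chunk order

flat = [x for chunk in results for x in chunk]
print(flat)  # each element doubled, original order preserved
```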

The OpenCL part is crucial because it is a software-development industry standard for offloading the GPGPU tasks that benefit most from the massive parallelism of GPUs: the novel algorithms like machine learning and its cousins, deep neural networks, which accelerate all the fancy stuff like object, face, voice, language, body, hand and gesture recognition. For that we have the Khronos "royalty-free, open standards for 3D graphics, Virtual and Augmented Reality, Parallel Computing, Neural Networks, and Vision Processing", which include the well-known OpenGL for graphics and its WebGL counterpart on the Web, where it is the base for the WebVR specification; the glue between all of that and the AR/VR hardware is OpenXR.

For the operating system we are merging a Chromium OS base with optimizations: WebCL, a direct Vulkan implementation meeting the Daydream latency and hardware-integration specifications, plus the ARCore libraries for the rest of the interactions with local augmentation layers. This operating system will be present on the BYOD HUB docking station and on the portrait-tablet-like device. For integration with the other operating systems of the devices connected to our BYOD HUB, there will be an application ported to each OS with sufficient permissions to manage network, memory and CPU on demand, like a virtual machine, where the resources of each device are exposed to every other device through an API served from the docking station. In effect, a local, private micro-cloud with accelerators.
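A minimal sketch of what that dock-served resource API could look like. The class, method names and capacity figures below are all my own illustrative assumptions, not the project's actual design:

```python
# Hypothetical dock-side resource registry: each connected device
# reports its capacity, and the dock exposes the pooled totals to
# every client through an API. All names and numbers are illustrative.
class ResourcePool:
    def __init__(self):
        self.devices = {}

    def register(self, device_id, cpu_cores, ram_mb, has_gpu):
        # called when a device docks and announces its resources
        self.devices[device_id] = {
            "cpu_cores": cpu_cores, "ram_mb": ram_mb, "has_gpu": has_gpu,
        }

    def totals(self):
        # aggregate view served to every device on the micro-cloud
        return {
            "cpu_cores": sum(d["cpu_cores"] for d in self.devices.values()),
            "ram_mb": sum(d["ram_mb"] for d in self.devices.values()),
            "gpus": sum(1 for d in self.devices.values() if d["has_gpu"]),
        }

pool = ResourcePool()
pool.register("dock", 6, 4096, True)
for i in range(1, 4):
    pool.register(f"tablet-{i}", 6, 4096, True)
print(pool.totals())  # {'cpu_cores': 24, 'ram_mb': 16384, 'gpus': 4}
```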


Pogo pins use the USB 2.0 protocol; I'm not sure what you want to do with that bandwidth.

And then I’m not sure what you want to do with the huge bezels in the middle of your “mirror”.

Uh, what does this all mean? Illustrations maybe? I see it’s hard for you to explain it in English, but please take this friendly advice: when you fail to explain something, don’t put more words in the sentence. Put less. Actually, as few as possible words. That way everyone will understand you.


Yeah, but I still don't see how you're creating a computing mesh.

If you're using OpenGL for 'offloading' tasks, this (a) isn't new and (b) isn't mesh computing, because it isn't dynamic. It isn't a fault-tolerant solution either. (I'm somewhat familiar with the idea of using a render farm to offload rendering tasks, but that isn't anything like mesh computing, because there is a master device (PC) and a slave device (render farm) that is dedicated to only one task.)

So every app would have to be rewritten to make use of your system?

Also, why is the VR thing a good idea? I'm just not seeing it. Surely a Gear VR headset produces a more immersive experience and is significantly cheaper.


Pogo pins are used only for synchronization and message-passing purposes, not for high-bandwidth data transfer, just like the Nvidia Quadro Sync cards for video walls, which align multiple video sources and the timing of the rendered frames.

That's because, for the three-tablet configuration, we are designing the central portrait tablet ourselves, with a narrow bezel on both lateral sides so that the bezels of the other two tablets sit just behind the edge of the central display and align with its top and bottom bezels.

As for proper illustrations, I am finishing the last details of the infographic brochures for a quick understanding of the system architecture, industrial design and user interface.


OK, you are getting close to the real concept. It is not a proper textbook mesh topology; that only applies at the network layer of message passing between the nodes, which coordinates the migration and balancing of the working threads. Another use we have for that layer is broadcasting the synchronization signal for perfect timing of the rendered frames.

But at the layer of high-speed links (either USB 3.1 Gen 2 or Thunderbolt 3) it is a conventional star topology with the docking station acting as the master node. That applies to the "3 in 1" and "6 in 1" configurations of the more affordable 4-USB-C-port model, where the spare USB-C port can be used to attach a hard drive or to expand the network with another docking station.

For configurations with more than 6 displays, the user would need the 8-USB-C-port model of the docking station, or can simply plug additional docking stations in series, up to 4 docking stations in total, in a spanning-tree topology for the high-speed links.
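Assuming, purely for illustration, that each chained 4-port dock contributes three more displays (the post does not spell out the exact per-dock display budget), the scaling would look like:

```python
# Hypothetical display count as 4-port docks are chained in series.
# Assumption (mine, not the author's spec): each dock drives 3
# displays and spends its spare USB-C port on the link to the next
# dock, with a maximum of 4 docks, as described above.
MAX_DOCKS = 4
DISPLAYS_PER_DOCK = 3

counts = {docks: docks * DISPLAYS_PER_DOCK for docks in range(1, MAX_DOCKS + 1)}
print(counts)  # {1: 3, 2: 6, 3: 9, 4: 12}
```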

Nope; we have to develop a software stack with drivers for virtual ports (audio, video, Ethernet, etc.) over USB, just like the HID standard does. For applications that already have render-farm or CUDA offloading capabilities it would be transparent, just like plugging in an eGPU and changing the setting for the primary render device. The more deeply integrated OpenCL and OpenGL part is needed only for the Unified Display Surface feature (the tablet video wall) or the Black Mirror setup; for this, our solution with OpenSceneGraph will be a master/slave render-node approach over the already established high-speed spanning-tree topology and the low-speed pogo-pin mesh network explained before.
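The master/slave render split for the unified surface can be pictured as the master dividing one wide virtual framebuffer into per-tablet viewports; the function name and resolutions below are illustrative assumptions, not product specs:

```python
# Master node divides one wide virtual framebuffer into equal slices;
# each slave node renders only its own (x, y, width, height) viewport.
def slave_viewports(n_slaves, width, height):
    slice_w = width // n_slaves
    return [(i * slice_w, 0, slice_w, height) for i in range(n_slaves)]

# e.g. three 1200x1920 portrait tablets side by side (assumed numbers)
views = slave_viewports(3, 3 * 1200, 1920)
print(views)  # [(0, 0, 1200, 1920), (1200, 0, 1200, 1920), (2400, 0, 1200, 1920)]
```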


Yeah, but you have separate CPUs, RAM pools and client OSes in each tablet. I don't see how you can 'borrow' resources from another tablet's CPU, or even its RAM for that matter. A GPU is fairly straightforward because it uses PCIe, but even then there are significant overhead costs, which essentially increase exponentially as you add GPUs. I'm not a software or hardware engineer by any stretch of the imagination, but even I can tell you that even if this were possible, the overheads alone would make it pointless.

I'm sorry, but I cannot see it working. A CPU cycles several thousand million times per second and has an extremely high-bandwidth connection to RAM; there is virtually no lag time. This makes 'live' end-user applications feel instantaneous. The moment you introduce a tiny bit of lag into that system, you're just wasting thousands if not millions of CPU cycles waiting for data to bounce back and forth between devices over the network, be loaded into RAM, move to cache and then be executed.
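To put rough numbers on that (the latencies are typical orders of magnitude, not measurements of any specific hardware):

```python
# Cycles a 3 GHz core idles away waiting on one access, comparing a
# local main-memory fetch to a round trip over a fast local network.
CPU_HZ = 3e9
RAM_LATENCY_S = 100e-9   # ~100 ns to main memory (typical order)
NETWORK_RTT_S = 1e-3     # ~1 ms round trip on a local link

cycles_ram = int(CPU_HZ * RAM_LATENCY_S)
cycles_net = int(CPU_HZ * NETWORK_RTT_S)
print(cycles_ram)                # 300
print(cycles_net)                # 3000000
print(cycles_net // cycles_ram)  # the network stall costs ~10000x more
```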

If you did this with a top-of-the-line Intel processor, you would get less performance than a single processor, if threads were distributed uniformly, no matter how many nodes you added, because each node you added would just increase the lag time.

If you distributed threads non-uniformly (had a master node), you would get the same performance as a single processor, because your master node would be doing 95%+ of the work; and if that is the case, what is the need for the others?

The network is a significant bottleneck, even if you used direct PCIe bridging between the CPUs and RAM pools.

Such a system would only work for relatively low-bandwidth components dedicated to specific tasks, such as a hard drive or even a GPU. But then it's just an eGPU over Thunderbolt or an HDD over USB, which is what some of us are already proposing for Donald Dock.


The overheads you pointed out are real, but they are not a real issue these days, maybe 2-3 years back in the past. Note that those interconnect names would appear in the marketing and end-user materials, but it is not that simple at the software- and hardware-engineering level. Our primary chip on the docking station is an FPGA-based controller, which means we are able to reprogram the circuits themselves, including the custom fabric of the links between the IP cores and the other devices, without operating-system middleware overhead and without data bouncing back and forth through each device's RAM. All of this runs on high-frequency transceivers (at least 12 GHz each) tuned for each scenario: the USB 3.1 Gen 2 bus, Thunderbolt 3, or specific bus transfers as needed.

One important development is taking place at the Linaro organization, with some standardization around how those interconnects work: at the enterprise level they have systems with 24 to 48 ARM cores per SoC, and they want a more coherent and efficient way for those heterogeneous architectures to communicate.

Our OmpSs-based programming model will be specific to our own branded devices (the docking station and the portrait-tablet-like device) and to other compatible devices via our app/VM installation. Used on its own, the docking station would act as a BYOD HUB transparent to the operating systems of the connected devices, where all the packet-based services are exposed as network resources (like a NAS device, or UPnP for video sharing) at the transport layer, with USB-C signaling at the physical layer.


Sorry, I don’t understand what you wanted to say here… Do you mean you’re still making illustrations and I should wait for them?


I don't get it. What do you want to sell? A bezel-less tablet with pogo pins on each side to connect other tablets into a big screen? And how should I use it? And for what? Who is buying 6 V's (at the former m3 price that would be around $4,800), plus the center device and a dock? And what will it do better than one V and a big screen (maybe a curved one)?


The real reason I made some of our project's progress public here is that we share the same goal: going public during the product design phases and bringing some cutting-edge technologies to more people in a more affordable way. As @canonlp says, our product may be very niche, but that is not so bad in terms of new vertical markets: coworking spaces, universities, public stations, libraries, or digital nomads who need to bring their infrastructure with them, places where people bring their own device or just rent one for a while.

I need to shed some light here; I see some confusion around the possibilities and features this ecosystem brings. First, it is a modular system, where the docking station works independently from the other parts. Second, we are implementing the full USB-C specification in the dock, so it will work with a hard drive, a DisplayPort 1.2+ monitor (even a curved one), or whatever adapter dongle you need to use within the USB-C Alt Mode spec.

If the dock is used with the central tablet, you will have a device like an Amazon Echo Show, but with a bigger display and a more realistic experience: a glasses-free 3D display, hand-gesture interaction, 3D scanning right on the device like a Project Tango tablet or smartphone does, and all the other features I explained before, on a device at a price for which, 5 years ago, you would have paid half a dozen thousand dollars. I am referring to CAVE systems (Cave Automatic Virtual Environment).

Another product already on the market that comes close to our concept is the HP Sprout Pro, an AIO desktop that uses a micro projector, touch-sensitive surfaces and an array of cameras to accomplish some of these features, without the 3D display.

On the other hand, there is an advantage to an AIO with a 3D display, albeit one that requires passive glasses: that is the case of the zSpace hardware, which was used with an extra depth camera in the HoloSuite paper, where our whole adventure in telepresence technology began.


And the 3D display, hand-gesture interaction and 3D scanning: is all of that developed by your company? Or do you have patents for it, or rights to implement it from other companies?


Not sure if you already know this, but the Eve V does not have a 3D screen.


Please don't be upset by our criticism. We just want to point out that Eve is really building a product, right at this moment, and the dock should be made maybe next year, not in 2050, when 3D displays, 3D scanning and hand-gesture interaction are (maybe) a reality.