Hi BeliEVErs! We think we can collaborate on this particular phase of the EVE project with some key improvements to the Docking Station that fulfill the three-point goals Xinjie talked about yesterday:
If you don’t know what I’m talking about, just take a peek at my presentation to the community, get to know me, and ask any questions about our project here or there.
So this year, for the Hackaday Prize 2017 “Anything Goes” contest, we are releasing a full Open Source Hardware and Software Reference Design, with BOM and schematics, for building an initial prototype of a modular system as a DIY kit that does the scanning, rendering, and streaming of the scenes and people in front of the device.
As we went through the design process of selecting the right system architecture to make this device possible, we realized that the devices many of us already own (around 3.3 computing devices per person: laptop + tablet + smartphone + smartwatch?) together have the computing and display capability to deliver the augmented-reality experience of the lifestyle that comes after the smartphone era.
To bring that computing power together we ran engineering simulations, and the only open source hardware that was off the shelf, affordable, and had enough processing power for a proof-of-concept setup doing that heavy lifting was a combination of four UDOO X86 boards (or the MinnowBoard 3, coming next month) connected over Gigabit Ethernet links to a Snickerdoodle Black FPGA development board (which still hasn’t fully shipped after two years) to control and sync all the displays and cameras, while fulfilling the Hackaday contest’s licensing requirements.
But as we planned our transition from design to validation, before doing any prototyping, news arrived around the end of June about the Snickerdoodle project that kept us from working on the actual final hardware: the decision not to fabricate the GiggleBits dual Gigabit Ethernet accessory boards any time soon, and the changes to the PiSmasher baseboard to ship with only two onboard Ethernet ports and all USB ports, instead of leaving the two FPGA I/O ports free for the two GiggleBits boards and keeping the PiSmasher’s onboard Ethernet port for the “WAN” connection.
We learned our lesson: don’t rely on another crowdfunded project with lengthy release dates (as late as November 2017), because we don’t want to tie our project’s schedule to a supply-chain nightmare just to fulfill the Hackaday Prize Best Product contest requirement of a video of a working prototype by the final submission date of October 21.
So we wiped the whiteboard again and replanned the whole system architecture around our Plan B strategy, with no PiSmasher baseboard at all. We learned that the main contributor to our parallel-processing software framework, the Axiom Project, has received the first production run of their Axiom Board, with 4 USB-C ports connected to high-speed transceivers on the FPGA side of the SoC. We understood that we need to design a USB-C docking-station hub around that kind of SoC and take on a cross-compiling journey: building an app for every OS of every device that hooks up to the docking station, on top of our OmpSs library implementation, plus getting our OpenSceneGraph-based distributed rendering cluster working to render the future interface of our Interactive Telepresence Service.
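The OmpSs framework mentioned above expresses this kind of work as tasks in C/C++ pragmas; as a rough, hedged analogue (node names and the tile count are made up for illustration), here is a Python sketch of the pattern we have in mind, where the dock dispatches render tiles round-robin to whatever compute nodes are attached:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pool of attached compute nodes (e.g. UDOO X86 boards).
NODES = ["udoo-0", "udoo-1", "udoo-2", "udoo-3"]

def render_tile(node, tile_id):
    # Placeholder for the real per-node rendering work.
    return (node, tile_id, "done")

def dispatch_frame(tile_count):
    # Round-robin the frame's tiles over the attached nodes, task-pool style.
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        futures = [pool.submit(render_tile, NODES[i % len(NODES)], i)
                   for i in range(tile_count)]
        return [f.result() for f in futures]

results = dispatch_frame(8)  # 8 tiles spread over 4 nodes
```

In the real system the scheduling would be done by the OmpSs runtime with data-dependency tracking, not a simple round-robin, but the shape of the problem is the same.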
The main idea behind it is to make a BYOD hub for prosumers and solve their HIGH-BANDWIDTH NETWORK MANAGEMENT problem. Our solution eases the inconvenient logistics and/or configuration nightmare of BYOD scenarios in scientific research, engineering simulation, and multimedia/transmedia work that relies on high-bandwidth network transfers of data/footage to be stored, recorded, edited, transcoded, or rendered locally, or even streamed over the internet.
We are in the age of millennials who travel the world doing freelance work, where meetings happen mostly in co-working spaces with no established office, and where the “digital office/studio/film set” travels along with the entrepreneurs of this new digital startup revolution. Most of them can’t bring network infrastructure with them, yet for security and convenience they need a portable, high-bandwidth collaborative setup without bottlenecks, one that untangles the mess of cables, power adapters, and installations at each location they arrive at.
After studying those market needs, we came up with a solution in the form of a PORTABLE and MODULAR BYOD hub with high-bandwidth links (USB 3.1 Gen 2 at 10 Gbps / Thunderbolt 3 at 40 Gbps), Power Delivery 2.0 charging, and a dual-role Alt Mode carrying up to 10 Gigabit Ethernet plus DisplayPort/HDMI/SDI video signalling over a single cable per device (smartphones, app-based tablets, professional workstation laptops, convertible tablet PCs with pens “WINK eVe WINK”, NAS devices, external graphics rendering devices, etc.), applying Edge Computing concepts to the mix so that each device can contribute to the compute/storage pool.
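A quick sanity check on those link budgets: the 10 Gbps and 40 Gbps figures come from the specs quoted above, while the 80% usable-throughput factor is our own assumption to account for protocol overhead. A back-of-envelope sketch of whether a set of device streams fits on one cable:

```python
# Link budgets from the post; 0.80 usable factor is an assumed overhead margin.
LINKS_GBPS = {"usb31_gen2": 10.0, "thunderbolt3": 40.0, "10gbe": 10.0}
USABLE = 0.80

def fits(link, stream_gbps):
    """True if the summed stream bitrates fit in the link's usable bandwidth."""
    return sum(stream_gbps) <= LINKS_GBPS[link] * USABLE

# e.g. two ~6 Gbps 4K video streams on one cable:
ok_tb3 = fits("thunderbolt3", [6.0, 6.0])  # 12 Gbps vs 32 Gbps usable
ok_usb = fits("usb31_gen2", [6.0, 6.0])    # 12 Gbps vs 8 Gbps usable
```

This is exactly the kind of per-port budgeting the hub's firmware would have to do before admitting a new device into the pool.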
That’s because our R&D team of visual-arts experts, human-machine interface engineers, augmented/virtual reality developers, and hardware hackers arrived at a portable and modular system built around a state-of-the-art, low-power, all-programmable heterogeneous multiprocessing SoC (Zynq UltraScale+ EV MPSoC), which gives us the ability to control and sync huge amounts of data thanks to the high-speed transceivers on the FPGA side of the SoC.
With a quad-core ARM® Cortex-A53 platform running at up to 1.5 GHz, combined with dual-core Cortex-R5 real-time processors, a Mali-400 MP2 graphics processing unit, and an integrated H.264/H.265 video codec capable of simultaneous encode and decode at up to 4K×2K (60 fps), our firmware and software can handle all the networking, charging, and video signalling needs of those use cases, with enough abstraction to stay hardware-architecture-agnostic and independent of the operating systems of the attached devices.
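The hardware codec matters because uncompressed 4K×2K at 60 fps already eats most of a 10 Gbps link on its own. A quick sketch of that arithmetic, assuming 8-bit 4:2:0 chroma subsampling (12 bits per pixel, our assumption, since the post doesn't specify a pixel format):

```python
def raw_gbps(width, height, fps, bits_per_pixel):
    """Uncompressed video bandwidth in Gbps (decimal giga)."""
    return width * height * fps * bits_per_pixel / 1e9

# 4Kx2K @ 60 fps, 8-bit 4:2:0 -> ~6.37 Gbps before any compression.
uncompressed = raw_gbps(4096, 2160, 60, 12)
```

That is why encoding on the SoC before the stream ever touches a USB 3.1 Gen 2 port is essential: H.265 brings that figure down by one to two orders of magnitude, leaving the link free for data transfers.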
Like I said before, we target “the areas of scientific research, engineering simulations and multimedia/transmedia professionals that rely on high-bandwidth network transfers of data/footage to be stored/recorded/edited/transcoded/rendered locally or even streamed over the internet to the cloud”. We even have one Industry 4.0 case study where machine-vision algorithms were applied to a high-speed camera array for object recognition and counting of millions of parts, and for quality assurance of the assembled parts. Our software stack can run machine learning and deep neural network algorithms locally on our device and distribute the parallel processing load across the compute/render nodes connected to the high-speed links.
We are in conversations right now with the Cambrionix folks, who make some cool stackable OEM modules with 4 USB-C ports (up to 16 USB-C “one unique cable” ports) providing power delivery, high-speed data transfer, and video signalling, with web-based monitoring and management of every aspect of each port: https://cambrionix.com/products/powersync4-usb-type-c-power-delivery-oems/
For the SoC approach we have decided to go with the main manufacturer of the Axiom Board, www.seco.com, and build a baseboard for their SMARC 2.0 module, once they figure out what is happening with Congatec’s SMARC 2.0 Apollo Lake modules’ implementation of the “USB 3.1 Gen 2” USB-C-ready specifications.
For the industrial design we have decided to go down the road of a “tablet mini PC” form factor like the Ockel Sirius A: https://www.indiegogo.com/projects/ockel-sirius-a-the-world-s-most-versatile-mini-pc-mobile-design--2#/ — just not at a 6-inch scale but maybe at 10", with a couple of Cambrionix PowerSync4 modules.
That’s the state of things right now. The objective of our project is to produce a Hardware Development Kit, in the form of a folding, portable, all-in-one PC/RenderFarm/PowerWall/LightStageScanner/micro-CAVE system, economical and flexible enough to be acquired by institutions, universities, or groups within the hacker/maker culture, or by prosumers who want to experiment with these technologies.
Our Open Source software development is based on OpenSceneGraph with Equalizer on top of a Chromium OS base, and we are optimizing the system so we can release the code as soon as we finish our tests on the final hardware.
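For those unfamiliar with Equalizer: its 2D (sort-first) compounds split the screen into regions, one per render client, which is how our cluster shares a single view across nodes. A minimal sketch of that decomposition, computing the normalized viewport stripe each node would draw (equal stripes only; real Equalizer configs can load-balance them dynamically):

```python
def split_viewport(node_count):
    """Split the normalized [0,1] viewport into equal vertical stripes,
    one (offset, width) pair per render node, sort-first style."""
    w = 1.0 / node_count
    return [(i * w, w) for i in range(node_count)]

stripes = split_viewport(4)  # e.g. 4 render nodes share the screen
```

Each node renders only its stripe of the scene graph's view frustum, and the composited result appears as one seamless image on the output display.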
Soon we will travel to Montevideo, Uruguay, for a few months to carry out the next stages of hardware development and prototyping with the people of SinergiaTech, the first and only hardware accelerator in Latin America. They are interested in developing everything with us all the way through this venture on an Ultra Fast Company Builder track, where we will have the opportunity to collaborate and work together with Álvaro Cassinelli, who has been our mentor from the beginning.
If all goes well, at the end of the year we will be starting a crowdfunding campaign on Crowd Supply to make the kits at a larger scale, as we are also competing in the final stages of the Hackaday Prize 2017.
We hope you will collaborate with us in this next exciting year for Augmented Reality and Mixed Reality technologies.
Transversal Dimensions, LLC