For the last year there's been a good deal of information published on how to use the sensor APIs for Windows 8 devices. However, if you're interested in portable HTML5 code, leveraging device sensors has typically required a native approach or a 3rd-party proprietary solution. With a bit of work I found you can leverage the new device orientation event listeners in JavaScript to mostly duplicate the Windows 8 native device sensor APIs. And because HTML5 allows you to swap code on the fly, you can easily leverage both the native sensor APIs for Windows and the HTML5 device orientation APIs depending on the device that is running the code. In other words, the same code you have in Visual Studio can be hosted on the web and work for an Android phone or tablet. Cool stuff!
Benefit of Coding in JavaScript
JavaScript is fast becoming a highly portable language that can be used to call cross-platform web browser instructions or native APIs for a particular OS. Windows 8 allows you to compile a native app with JavaScript. A neat possibility of this is that the exact same code can be hosted and run by mobile devices or legacy PCs in a browser. A problem, however, becomes very apparent when you want to leverage device-specific APIs for sensors like the gyrometer, accelerometer, etc. While these Windows 8 classes are awesomely powerful to access via JavaScript, only Windows can execute them. Thankfully the HTML5 events are catching up and allow you to capture the device's rotation information across all 3 axes.
With just a bit of work you can tweak the data to mimic the native sensor APIs and plug it into your core code, creating a seamless experience across devices and form factors. Note you may be able to do this using PhoneGap and other 3rd-party solutions; however, that is brokering the solution to another entity. That might be good, that might not. I'm certain that point is up for debate.
Device Orientation Browser Compatibility
This is a fairly new event listener, but it is pretty well adopted and can be used on Chrome, Firefox and Opera and their mobile counterparts. Note IE10 does not support it; however, as I show, you can swap out the Windows 8 sensor events with HTML5 sensor events and vice versa. Check out the graph of support from http://caniuse.com/deviceorientation.
Sample Code - Sensor Event Listener in Windows 8 vs HTML5
In the code below I'm showing how I get data from the Windows 8 Sensor API to move and rotate an object in my game. In my example app I use the variable "webapp" to determine which code to execute. Note in this use case I'm reading sensors and assigning data to 3 variables: xAngle, yAngle and zAngle. xAngle is used to alter the X position of my object on the Canvas, yAngle alters the Y position of an object in the Canvas, and zAngle rotates my object, like a spinning top, either left or right.
if (webapp == false) { // use Windows 8 Sensor API
gyrometer = Windows.Devices.Sensors.Gyrometer.getDefault();
gyrometer.addEventListener("readingchanged", onGyroReadingChanged);
accelerometer = Windows.Devices.Sensors.Accelerometer.getDefault();
accelerometer.addEventListener("readingchanged", onAccReadingChanged);
}
function onGyroReadingChanged(e) { // gets data for rotation around Z Axis and assigns to zAngle
var accelZ = e.reading.angularVelocityZ;
zAngle = -accelZ.toFixed(2);
}
function onAccReadingChanged(e) { //gets the tilt information and assigns to xAngle and yAngle
var inclX = e.reading.accelerationY.toFixed(2) * -90;
var inclY = e.reading.accelerationX.toFixed(2) * 90;
xAngle = inclY;
yAngle = holdAngle + inclX; // hold angle is read or set in calibration function
}
Here is the base HTML5 device orientation version of that code. Note this isn't complete; read further to understand how I have to adjust it.
if (webapp == true) { // use HTML5 device orientation event listener
window.addEventListener('deviceorientation', capture_orientation, false);
}
function capture_orientation(event) { //set input for web browser orientation sensors
var alpha = event.alpha;
var beta = event.beta;
var gamma = event.gamma;
alphaAngle = alpha.toFixed(2); // keep two decimal places (note: toFixed returns a string)
xAngle = gamma;
yAngle = holdAngle + beta;
zAngle = -alphadelta * 5; // alphadelta doesn't exist yet; we compute it further down
}
Solving issues between Windows 8 Sensor API and HTML5 device orientation
Defining a normal Z-axis orientation: One issue you may have is with the Z-axis rotation (gyrometer vs. alpha). Unless your use case is a compass, you will find there is no "normal" orientation for the Z axis. For the X & Y axes, you can assume the X & Y plane is at a normal or default position when parallel to the plane of the earth (flat on a table); thus if your device is tilted on its side or pitched forward, your app might rotate something. However, a user can be holding the device at any rotation around the Z axis and still expect a normal or default experience. In other words, whether you are facing north or east when you start your app, for most applications you assume it's going to be the same experience. Thus the key to making the Z-axis orientation work in your app is to turn that axis orientation into accelerated data, so you know the difference, or speed, at which the device is moving around the Z axis. A still device that is not spinning is your "normal" or default orientation for the Z axis, and the data you want is how fast and in what direction you are rotating around it.
The device orientation event handler, however, does not provide that accelerated data directly. You have to interpret the change in the data to get something like an accelerated Z-axis spin. Once you do, the data is very comparable to the gyrometer data you get from the Windows 8 native sensor API. To get there, I determine the difference between the previous (last) alpha orientation and the current alpha orientation. That gives me a number that almost exactly matches the gyrometer acceleration data I get from my native code. Here's an example; this would replace the last line of our capture_orientation function:
if (alphainit < 1) { //we don’t have a lastAlpha reading so we need it to equal alpha the very 1st time
lastAlpha = alphaAngle;
alphainit = 1; // now have the first alpha so this code won’t run again
}
alphadelta = alphaAngle - lastAlpha; //determine the delta difference current and last alpha
lastAlpha = alphaAngle; //sets lastAlpha value
zAngle = -alphadelta * 5; // this is the same as before
}
Swapped X & Y axes: Another issue is that on phones, the X & Y axis (beta and gamma) data is swapped compared to tablet and PC devices. Perhaps the default or "normal" orientation on a phone is considered portrait, which would explain why beta and gamma are reversed. For you it means you will have to swap the gamma and beta data if you want the experience to be consistent in landscape mode across form factors.
To manage this situation I created a variable called "mobile", and when mobile == true we swap the beta and gamma data. The following code replaces the "var beta =" and "var gamma =" lines in our capture_orientation function.
if (mobile == true) { //swap beta and gamma for mobile browsers
var beta = event.gamma*-1;
var gamma = event.beta;
}
else {
var beta = event.beta;
var gamma = event.gamma;
}
Managing browser nuances: As with any web application, you will have to gather some information about the device and its browser and adjust some variables. The more devices you can test your app against, the more bulletproof you can make the experience. The good thing is you only need to edit a small set of code to manage which code should be turned on or off depending on the device and browser. Here is an example of my config.js, which does this. With it I determine information about the device and browser, then set variables to true or false to tailor the code to that device. For example, if the device is not a PC or tablet, I set the mobile variable to true. If it is not running MSIE (Microsoft Internet Explorer), then the code is running in a browser, so I set the webapp variable to true.
var str2 = navigator.platform;
var str3 = navigator.userAgent;
if (str3.indexOf("MSIE") >= 0) { //IE browser based so Windows 8 APIs
var webapp = false;
var tabletmode = false;
var mobile = false;
}
else { //run as a webapp and use device orientation
var webapp = true;
var tabletmode = true;
}
if (str2.indexOf("Win") >= 0 || str3.indexOf("Opera") >= 0 || str2.indexOf("686") >= 0) {
//If Windows, or Opera we will not reverse the X & Y
var mobile = false;
}
else { // this is likely a phone and we need to reverse the X & Y
var mobile = true;
var tabletmode = true;
}
Try it out. Check out my test game via my public Dropbox link: http://db.tt/4ch0jZJ4. If you have a new PC with orientation sensors, try this in Chrome, then also try it on an Android tablet or phone. Take note if you're running on Android: this is the exact same code I compiled for Windows 8, running in your browser. If you have issues, let me know. The more browsers and devices that test it, the more I can optimize the code to accommodate them, which is a benefit of JavaScript and HTML5.
Hi everyone! This year I have been asked to make a comeback at IDF, the Intel Developer Forum! As my blog readers know, I work on many interesting projects as a one man development team: Meshcentral.com, Manageability Developer Tool Kit (DTK), Intel System Defense Utility (ISDU) and the Intel Developer Tools for UPnP Technologies. Many of these projects make direct use of unique Intel platform technologies like: Intel Active Management Technology (Intel AMT), Intel Remote Wake, Intel Identity Protection Technology (Intel IPT), Digital Random Generator, AES-IN, Wake-on-LAN, etc. So, I am in a pretty good position to share with developers my experiences and help more people use these great platform features.
This year, I am giving one session (1 hour) and one lab (2 hours). The lab is given twice, so the program will show two two-hour blocks. Here is my schedule as currently planned:
In both classes the goal is the same: show that Intel platforms are great at connecting to the cloud. Let's say you want to connect a device to a server in the cloud. We are going to look at how it's usually done, with a regular client-to-server connection. Then we are going to leverage all the Intel platform technologies at our disposal to add many more capabilities to our cloud service. The session will be a quick overview: I will demonstrate the benefits of using platform features and show you how to get started quickly and what code we have already available. The labs are typically smaller, more in-depth and much more interactive; I get to answer questions that can help developers in their day-to-day work, show where to get source code, how to get started and much more. The entire lab will be demos, code & fun!
I look forward to seeing you there. To register or for more information, links below:
Below are the presentations given by Intel during CONSEGI 2013 in Brasília.
Cross-platform app development for mobile devices using HTML5
HTML5 has established itself as a programming language with a high degree of portability and growing compatibility across different browsers, operating systems, and types of computing devices. In this talk we present an overview of what HTML5 can offer app developers, along with tools, libraries, and examples of cross-platform HTML5 applications.
Interactive e-books based on open standards (ePub3 and HTML5)
Using the open standards ePUB version 3 and HTML5, it is possible to develop interactive e-books that can be read on many devices with a high degree of portability. Using the two technologies together enables a new kind of interactive book, embedding HTML5 applications and content inside traditional e-books, with important advantages especially for educational titles. The talk presents both standards, demonstrates the process of creating an interactive book, and shows some examples already developed.
The explosive growth of computing devices and gadgets means that no app development strategy is complete without considering a cross-platform approach, to cover the full spectrum of devices more quickly, effectively, and at lower cost.
That is why IT managers and app developers have embraced open web technologies such as HTML5, CSS, and JavaScript as the third major ecosystem for app development. This is possible because HTML5 is supported by billions of devices, and every major platform has browsers and runtimes with HTML5 support.
HTML5 is open, efficient, and powerfully flexible, but you need to know how to get the most out of it. That's why we created this new discussion series, Inside the Brackets.
Now you'll have the opportunity to hear the opinions of computing industry experts as they discuss and debate the opportunities, challenges, and best practices of cross-platform development with HTML5.
Register now to secure your seat at this table and get a view of HTML5 straight from inside the industry.
Our first episode - HTML5? Why I Oughta …
The first episode of this series airs live on August 27 at noon and features experts from Adobe, Intel, and Evans Data discussing the growth of HTML5 as an app development platform, why HTML5 matters to IT managers and developers, and why you should care too, all followed by a live Q&A session.
Upcoming episodes will cover topics such as HTML5 vs. native development, HTML5 tools and resources, and HTML5 in enterprise environments.
Links and Information
Ready for Inside the Brackets? Register here to participate.
I just added certificate based Intel AMT cloud activation support (TLS-PKI) in Meshcentral.com that works behind NAT’s and HTTP proxies, uses a reusable USB key and makes use of Intel AMT one-time-password (OTP) for improved security.
Ok, let’s back up a little. Computers with Intel AMT need the feature activated before it can be used. Historically it’s been difficult to set up the software, network, certificates and settings to start activating Intel AMT, especially for smaller businesses, in a way that allows administrators to use all of its features. It’s even more difficult if all the computers are mobile. With Mesh, we want to put all of the Intel AMT activation in the cloud, so administrators don’t need to worry about how it all works. Administrators can launch their own instance of Mesh on Amazon AWS, install the mesh agent on each of their machines and, when time permits, create and use a single USB key to touch each machine for Intel AMT activation.
Meshcentral.com will automatically detect when a computer can be activated and do all of the appropriate work in the background, even behind an HTTP proxy or NAT/double-NAT routers. Mesh fully supports Intel AMT Client Initiated Remote Access (CIRA), so once activated, Intel AMT can call back to the Mesh server independent of OS state. Administrators can then use the web site or tools like Manageability Commander Mesh Edition to use Intel AMT features across network obstacles. Mesh will automatically route traffic using direct, relay or CIRA, so administrators never need to worry about how to connect to a machine over the Internet. As an aside, Mesh fully supports Host Based Provisioning, so that is still an available option if you don’t want to touch each machine with a USB key and are ok with the client-mode limitations.
Project Anarchy is a free mobile game engine for iOS, Android (including x86), and Tizen. It includes Havok’s Vision Engine along with Havok Physics, Havok Animation Studio and Havok AI. It has an extensible C++ architecture, optimized mobile rendering, a flexible asset management system, and Lua scripting and debugging. There are also complete game samples included with the SDK along with extensive courseware on the Project Anarchy site that game developers can use to quickly get up to speed with the engine and bring their game ideas to life.
Ship for FREE on iOS, Android (including x86) & Tizen
Includes Havok Vision Engine together with access to Havok’s industry-leading suite of Physics, Animation and AI tools as used in cutting-edge franchises such as The Elder Scrolls®, Halo®, Assassin’s Creed®, Uncharted™ and Skylanders™.
Extensible C++ plugin-based architecture
Comprehensive game samples with full source art and source code
Focus on community with forums for support, Q&A, feedback and hands-on training
NO commercial restrictions on company size or revenue
Upgrades for additional platforms and products, source and support available
Includes FMOD, the industry’s leading audio tool
See the attached Product Document (Havok_Anarchy_2013.pdf)
[Opinion: The Ridiculous Tablet vs. PC Debate that wasn't]
Let me just get this out of the way so you know where I stand. Tablets are another PC form factor. It's just that simple. To claim otherwise comes off as trying to be sensationalistic to sell a story, naive, or possibly disingenuous. Sound too harsh? Well, allow me to explain, as I'd rather not be on the side that's confusing and obfuscating what's really going on inside the complexities of the PC market. Let's dissect what a PC is and why a Tablet is one.
PC = Personal Computer. It's "Personal" in the sense that you consume/produce digital activities on it to either access, store, or produce something that can be unique to you. The "Computer" part is that it's a piece of hardware digitally crunching software code via inputs and outputs. Doesn't matter if it's to a display, by a keyboard, a gesture, a mouse, or voice recognition. It's obviously more complex than this but you get the general idea.
OS = Operating System. The OS is responsible for bridging the gap and communicating between the hardware capabilities and what the software is telling it to do (e.g. Windows, MacOS, Linux (Ubuntu, Red Hat), Android, etc.). This is somewhat chicken-and-egg with the device's form factor, but I list the OS first because without it the hardware is pretty much a brick or boat anchor.
Next we have what I call the FF = Form Factor. PCs ~40-50 years ago used to look drastically different. They used to look more like server farms than today's Tablets, Ultrabooks, iMacs, etc. The point is simply this: a PC can come in almost any conceivable shape and size you can imagine. Obviously the shapes and sizes we see today make the most sense given our lifestyles, the way we work and play, etc. Here are a few things we can bank on in the future.
~40-50 years from now PCs are likely going to look drastically different than they do today. If the past is any indication of the future, then the following assumptions can be made: more powerful, longer battery life, mostly mobile, bigger storage, thinner, lighter, and smaller.
Convergence. Not everything will converge; but let's face it, when you look at what's happened with point and shoot digital cameras, GPS devices, digital music players, the 'dumb' phones, and so forth; there's a strong case for digital devices converging more, and not less. Most of these devices are both getting 1) Smarter, and 2) Connected.
Commoditization. Remember the prices of PCs from say 30, 20, or even 10 years ago? Well... for the most part they're getting cheaper.
Apps = Software Applications. There's a limitless volume of Apps out there as well. This can be anything ranging from: Surfing the Internet via Firefox, Google, IE, to emailing/texting/skyping friends, to playing games, to working, watching a movie/tv, listening to music, and so forth.
In all of these key cases the OS, the Form Factor, and the Apps continue to evolve whenever advancements are made in the coding languages, the components that make up the various form factors, or the software apps that we interact with.
Tablets still allow us as users to do most of the software applications we've all come to know and love. Can you still surf the internet? Play a game? Do email? Chances are yes. More robust and capable Tablets such as the MS Surface Pro allow you to do anything you normally would be able to do at work or for leisure. Lastly, when you crack one open you still see these newer devices being powered by some 'x' processor (e.g. ARM, AMD, Intel, etc.); there's still a motherboard, memory, & typically a drive (solid state or otherwise). All you're seeing is just another evolutionary branch on the tree of PC.
Here's one picture of how I like to illustrate it.
So there you have it. I think the next steps we'll see in the evolution of PCs will be credit-card-sized PCs, perhaps some wearables, and much smarter PC devices that we can interact with more. The future is exciting indeed, and the PC, in all its myriad and evolving forms, is bound to be with us for a very long time.
I'll summarize it like this to the Press, Analysts, Researchers, etc. Please stop confusing the form with the function. The only thing dying right now isn't the PC but rather the single purpose and 'dumb' devices.
I hope you enjoyed this piece. If you disagree or agree I'd love to hear your thoughts.
Now supports Apple OS X* and Microsoft Windows* 7 and 8 host systems
Accelerated development of Android applications for devices based on ARM* and Intel® Atom™ processors
Beacon Mountain provides productivity-oriented design, coding, and debugging tools for native applications targeting ARM and Intel Atom based devices running Android, including smartphones and tablets. The tools are Eclipse-compatible and support popular Android SDKs, including the Android NDK.
Key features:
Quick and simple installation of popular Intel® and third-party development tools for building Android applications
Compatible with the Android SDK and NDK toolkits, and augments them
Supports Apple OS X* and Microsoft Windows* 7 and 8 host systems
Access the latest documentation and support channels to ease your development.
Intel® Graphics Performance Analyzers System Analyzer
Overview of Intel® Integrated Performance Primitives for Android*
Intel® Threading Building Blocks
Intel® Software Manager
Third-party tools for x86 and ARM*:
Google Android SDK (ADT Bundle)
Android NDK
Eclipse integrated development environment
Android Design
Cygwin* (for Windows operating systems)
System tools
Intel tools supporting x86 operating systems: Android* Jelly Bean 4.2
Third-party tools supporting ARM* operating systems: Android* Gingerbread 2.3 and later
About Beacon Mountain for Android*
Export notice
This software is subject to U.S. export regulations and other laws and may not be exported or re-exported to certain countries (Burma, Cuba, Iran, Libya, North Korea, Sudan, and Syria) or to persons or entities prohibited from receiving U.S. exports (including denied persons, specially designated nationals, and entities on the Bureau of Export Administration Entity List, or those involved in missile technology or nuclear, chemical or biological weapons).
HTML5 is the new HTML standard. Recently, Intel Corporation announced a set of HTML5 Tools for developing mobile applications. This paper shows you how to port an Apple iOS* accelerometer app to HTML5 using these tools. Please note: Auto-generated code created by the XDK may contain code licensed under one or more of the licenses detailed in Appendix A of this document. Please refer to the XDK output for details on which libraries are used to enable your application.
Intel® HTML5 App Porter Tool
The first thing we’ll do is take an iOS accelerometer app and convert the Objective-C* source code to HTML5. We’ll do this using the Intel® HTML5 App Porter Tool and the source code found here: [iOS_source.zip] (Note: iOS_source sample code is provided under the Intel Sample Software License detailed in Appendix B). You can download the Intel HTML5 App Porter Tool from the Tools tab here: http://software.intel.com/en-us/html5. After filling in and submitting the form with your e-mail address, you will get links for downloading this tool. The instructions for using this tool can be found here: http://software.intel.com/en-us/articles/tutorial-creating-an-html5-app-from-a-native-ios-project-with-intel-html5-app-porter-tool.
When you are finished performing all the steps, you will get HTML5 source code.
Intel® XDK
You can open the HTML5 code in any IDE. Intel offers a convenient tool for developing HTML5 applications: the Intel® XDK cross-platform development kit (http://html5dev-software.intel.com/). With the Intel XDK, developers can write a single source code base for deployment on many devices. What is particularly nice is that it is not necessary to install it on your computer: you can install it as an extension for Google Chrome*. If you use another browser, you have to download a JavaScript* file and run it. Sometimes it’s necessary to update Java*.
After installing Intel XDK, you will see the main window:
If you want to port existing code, press the big “Start new” button.
If you’re creating a new project, enter the Project Name and check “Create your own from scratch,” as shown in the screen shot below.
Check “Use a blank project.” Wait a bit, and you will see the message “Application Created Successfully!”
Click “Open project folder.”
Remove all files from this folder and copy the ported files. We haven’t quite ported the accelerometer app yet. We still have to write an interface for it. It is possible to remove the hooks created by the Intel HTML5 App Porter tool. Remove these files:
todo_api_application__uiaccelerometerdelegate.js
todo_api_application_uiacceleration.js
todo_api_application_uiaccelerometer.js
todo_api_js_c_global.js
To update the project in Intel XDK, go to the editor window in the Windows emulator.
Open the index.html file and remove the lines left from the included files.
Open the todo_api_application_appdelegate.js file and implement the unmapped “window” property of the delegate.
application.AppDelegate.prototype.setWindow = function(arg1) {
// ================================================================
// REFERENCES TO THIS FUNCTION:
// line(17): C:\Work\Blogging\echuraev\Accelerometer\AccelerometerAppDelegate.m
// In scope: AppDelegate.application_didFinishLaunchingWithOptions
// Actual arguments types: [*js.APT.View]
// Expected return type: [unknown type]
//
//if (APT.Global.THROW_IF_NOT_IMPLEMENTED)
//{
// TODO remove exception handling when implementing this method
// throw "Not implemented function: application.AppDelegate.setWindow";
//}
this._window = arg1;
};
application.AppDelegate.prototype.window = function() {
// ================================================================
// REFERENCES TO THIS FUNCTION:
// line(20): C:\Work\Blogging\echuraev\Accelerometer\AccelerometerAppDelegate.m
// In scope: AppDelegate.application_didFinishLaunchingWithOptions
// Actual arguments types: none
// Expected return type: [unknown type]
//
// line(21): C:\Work\Blogging\echuraev\Accelerometer\AccelerometerAppDelegate.m
// In scope: AppDelegate.application_didFinishLaunchingWithOptions
// Actual arguments types: none
// Expected return type: [unknown type]
//
//if (APT.Global.THROW_IF_NOT_IMPLEMENTED)
//{
// TODO remove exception handling when implementing this method
// throw "Not implemented function: application.AppDelegate.window";
//}
return this._window;
};
Open the viewcontroller.js file. Remove all the functions used for working with the accelerometer in the old iOS app. In the end we get this file:
In the ViewController_View_774585933.css file, we have to change the element color styles from black to white so they are readable on the black background: color: rgba(0,0,0,1); → color: rgba(255,255,255,1);. As a result we get:
To code the accelerometer functions, we need to use the appMobi JavaScript Library. Documentation for this library can be found here. It’s installed when you download Intel XDK.
Open the index.html file and add this line into the list of scripts:
One of the more challenging user experiences in a game is the need to move AND aim a player on the screen. That gets harder on mobile devices, where you have limited controller options. One way to fix this is to allow the user to tap where your character should aim, then have him turn in that direction. Think of a gun turret or a spaceship where all you do is tap on enemies and the turret or spaceship turns and fires in that direction.
To do this you simply need to know what direction your object or "Actor" is currently pointing, and the location of the object you want to point at. With a single line of trigonometry it's a simple thing to code.
Here's is a quick Vine video on this approach (roll over your mouse to view)
Below are the steps to do this with code
Step 1. Figure A: Canvas rotates in radians. So rather than 360 degrees in a circle, think of radians as an expression of PI. Halfway around a circle is 3.14 radians, and all the way around is 6.28 radians, or 2*PI. So at any given point you should know your Actor's radian angle on the Canvas. In my code I simply increment (spin) the canvas by +1 or -1 each frame; it is easier for my "spin" variable to work with 360 degrees for a smooth animation, so I calculate from radians to degrees and back to radians in my code.
shipRadian=(spin * Math.PI/180); // this will turn 0-360 degrees into a radians from 0-6.28
context.rotate(shipRadian); //this will rotate the canvas to a radian
Step 2. Figure B: To point your Actor at an enemy or a touch event on the screen, you need to calculate the radian angle of that enemy in relation to the Actor. You can use a simple JavaScript math expression to do this: "Math.atan2(DeltaY, DeltaX)". In my example, the touch event places a crosshair symbol on the screen, and we then fire at that crosshair. The code to calculate the radian we need looks like this.
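Assuming the ship sits at (shipX, shipY) and the crosshair landed at (xhairX, xhairY) (those names are mine, not from the game's source), the calculation is a single atan2 call:

```javascript
// Angle from the ship to the crosshair, in radians.
// shipX/shipY and xhairX/xhairY are assumed names for the two positions.
function radianToTarget(shipX, shipY, xhairX, xhairY) {
    // Math.atan2 returns -PI..PI, measured from the positive X axis
    return Math.atan2(xhairY - shipY, xhairX - shipX);
}
```

Step 3 below then normalizes the negative half of that range into 0..2*PI so it can be compared with the ship's spin angle.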
Step 3. Figure C & D: The next thing to do is to subtract your Actor's radian from the new radian value. That delta will give you a number either smaller or larger than PI (3.14). For the best animation we want the actor in our scene to turn in the direction of the shortest path. Generally, if the delta radian is smaller than PI, the rotation toward the new radian value will be clockwise; if larger than PI, the rotation toward the new radian value will need to be counterclockwise.
Code for Figure C & Figure D
if(xhairRadian<=0){ // The arctangent math calculates a negative radian for half of the circle. This turns the negative radian into its positive counterpart
xhairRadian=2*Math.PI+xhairRadian;
}
deltaRadian=xhairRadian-shipRadian; // Determine the delta between the ship and the new radian
if (deltaRadian < -Math.PI || deltaRadian > Math.PI){ // determine if the delta is beyond 3.14 or -3.14; if so, the short way around crosses zero
if(xhairRadian<shipRadian){
direction="right";
}
if(xhairRadian>shipRadian){
direction="left";
}
}
else { // else if the difference in angle is positive spin toward the right
if (xhairRadian > shipRadian) {
direction = "right";
}
if(xhairRadian<shipRadian){ // if the difference in angles is negative, spin toward the left
direction="left";
}
}
shotstart=1; // shotstart = 1 means we've finished the calculations and are ready to spin and shoot
}
}
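The branching above can be condensed into a single pure function that normalizes the angle difference into the range (-PI, PI] and reads its sign. This is my own consolidation of the article's logic, not code from the original:

```javascript
// Returns "right" (clockwise in canvas coordinates) or "left" (counterclockwise)
// for the shortest turn from angle fromRad to angle toRad, both in radians.
function shortestTurnDirection(fromRad, toRad) {
  var delta = (toRad - fromRad) % (2 * Math.PI);
  if (delta > Math.PI) delta -= 2 * Math.PI;   // wrapped past PI: shorter to go the other way
  if (delta <= -Math.PI) delta += 2 * Math.PI; // wrapped past -PI: shorter to go the other way
  return delta >= 0 ? "right" : "left";
}
```

In canvas coordinates the y axis points down, so increasing the rotation angle turns the Actor clockwise; "right" here therefore corresponds to incrementing the spin variable.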
Step 4. Figure E. The next thing to do is to start incrementing the canvas rotation in the proper direction. Through some testing I found a static rate of movement creates a problem: either the ship takes too long to come around and the action isn't good, or the ship moves too quickly over short distances and it looks choppy. To fix this, I accelerate the rotation, increasing the speed each frame until it hits a max speed. That creates fast and smooth action.
var speedmax = 20; // our top rate of speed
if (shotstart==1){ // if the shot was made, start to spin the ship
if (direction=="left"){
spinspeed--; // if not at top speed, increase the speed of the ship turning in the negative direction
if (spinspeed < -speedmax){
spinspeed = -speedmax; // if you hit top speed, don't increase the speed anymore
}
}
else {
spinspeed++; // if not at top speed, increase the speed of the ship turning in the positive direction
if (spinspeed > speedmax) { // if you hit top speed, don't increase the speed anymore
spinspeed = speedmax;
}
}
spin += spinspeed; // our spin number changes by the rate of spin
spinspeed *= 1.6; // accelerate the spin rate by 60% each frame (it is capped again on the next frame)
}
Step 5. Figure F. Because our radians and degrees go from 0 to 6.28 and 0 to 360, when you rotate counterclockwise and pass zero you need to adjust the math. Since our "spin" variable is in degrees, when we pass zero we need to shift by plus 360 rather than going negative. Likewise, when going clockwise past 360 you need to wrap back to zero rather than counting up past 360. To manage this you'll need a piece of code that either subtracts or adds 360 to the current spin, depending on the direction.
if (spin >= 360) { //if you've come all the way around, reset the spin by 360
spin = spin - 360;
}
if (spin <= 0) { //if you've come all the way around, reset the spin by 360
spin=spin+360;
}
Step 6. Figure G. Ultimately, when the Actor's radian matches the new radian we want to stop spinning. However, with fractional math it is hard to get one number to exactly equal another. To make this easier we add a buffer amount around our target radian value: if our Actor is close enough to pointing in the right direction, we go ahead and snap it to the target radian.
if (spinRound >= xhairRadianround - 0.5 && spinRound <= xhairRadianround + 0.5 || spinRound > Math.PI*2 || spinRound < 0) {
//if the ships close enough to the proper angle no need to animate just point the ship at the cursor
shipRadian=xhairRadian;
spinspeed = spindefault;
shotstart=0;
}
else
{ //if the angle is far enough off start to spin the ship
shipRadian=(spin * Math.PI/180);
}
Step 7. Figure H. With that, we've completed the task of rotating our Actor in the correct direction, and we can trigger the event, which in our case is firing a laser.
if (shipRadian==xhairRadian){ // we are pointed at the place we tapped, now fire the lasers
drawShot();
shotprogress=true; // flag to say we completed drawing our lasers
}
That's all there is to it. Check it out yourself on my public Dropbox: Launch Example. Open this with any HTML5-compatible device and tap around. View and copy the source to play with your own version.
Developing applications is important for Intel® processor-based mobile platforms to be successful. For platform and application engineers who want to enable as many applications as possible on Intel platforms, there is no source code for applications from third-party ISVs (e.g., Google), which raises a big question: how do you debug these closed-source applications on Intel platforms?
This document describes the debugging experience, detailed methodology, and tool usage for debugging closed-source third-party applications on Intel processor-based platforms.
Debug Tricks
Call Stack
Description: The call stack is important for debugging because it tells you where the bug occurs in the source code. It’s a running history, if you will. There are call stacks for Java* space and native space and different ways to print them as the following paragraphs show.
Print Java Space Call Stack:
A method that will not break the program you are debugging:
#include <utils/CallStack.h>
using namespace android;
namespace android {
void get_backtrace()
{
CallStack stack;
stack.update();
stack.dump("");
}
};
A method that will break the program, so do not use it unless necessary:
int* p = NULL;
*p = 0x8888;
Print Stack from Native Space to Java Space
Apply patch 0001-Dalvik-add-support-of-print-Java-Stack-from-Native-s.patch into Dalvik project.
Make Dalvik project and push libdvm.so into /system/lib on the device.
After reboot, you can use Dalvik's interface in two ways to dump the process's stack from native space to Java space into the /sdcard/logs/javastack file.
By shell command:
kill -31 <pid>
By API Interface:
Add the statement "kill(getpid(), 31);" at the point in the source code where you want to dump the stack from native space to Java space.
Check the Java stack in /sdcard/logs/javastack on the device. You can find the whole call stack from native space to Java space, so you will know which Java function and native library are called.
root@android:/sdcard/logs # cat javastack
----- pid 25653 at 1982-01-01 02:15:14 -----
Cmd line: com.android.providers.calendar
DALVIK THREADS:
(mutexes: tll=0 tsl=0 tscl=0 ghl=0)
"main" prio=5 tid=1 NATIVE
| group="main" sCount=0 dsCount=0 obj=0x417c2550 self=0x417b2af0
| sysTid=25653 nice=0 sched=0/0 cgrp=apps handle=1074057536
| schedstat=( 13633356 12645753 23 ) utm=0 stm=1 core=1
#00 pc 000b01ad /system/lib/libdvm.so
#01 pc 000907ee /system/lib/libdvm.so
#02 pc 00091ad4 /system/lib/libdvm.so
#03 pc 0008a33d /system/lib/libdvm.so
#04 pc 00000400 [vdso]
at android.view.Display.init(Native Method)
at android.view.Display.<init>(Display.java:57)
at android.view.WindowManagerImpl.getDefaultDisplay(WindowManagerImpl.java:630)
at android.app.ActivityThread.getDisplayMetricsLocked(ActivityThread.java:1530)
at android.app.ActivityThread.applyConfigurationToResourcesLocked(ActivityThread.java:3649)
at android.app.ActivityThread.handleBindApplication(ActivityThread.java:3969)
at android.app.ActivityThread.access$1300(ActivityThread.java:130)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1255)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:137)
at android.app.ActivityThread.main(ActivityThread.java:4745)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:511)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553)
at dalvik.system.NativeStart.main(Native Method)
The patch 0001-systemcore-add-Dalvik-Tombstone-call-stack-support.patch for the system/core project is optional; it just adds a tombstone that prints the Java stack into /sdcard/logs/javastack.
procrank: process memory rank
procmem: a specific process's memory
showslab: kernel slab utilization, /proc/slabinfo
latencytop: CONFIG_LATENCYTOP
showmap: process memory mmap address space, /proc/XXX/maps
dumpstate: system information such as memory, CPU, etc.
dumpsys: system service information, etc.
To see all of the "subcommands" of dumpsys, run:
dumpsys | grep DUMP
DUMP OF SERVICE SurfaceFlinger:
DUMP OF SERVICE accessibility:
DUMP OF SERVICE account:
DUMP OF SERVICE activity:
DUMP OF SERVICE alarm:
DUMP OF SERVICE appwidget:
DUMP OF SERVICE audio:
DUMP OF SERVICE backup:
DUMP OF SERVICE battery:
DUMP OF SERVICE batteryinfo:
DUMP OF SERVICE clipboard:
DUMP OF SERVICE connectivity:
DUMP OF SERVICE content:
DUMP OF SERVICE cpuinfo:
DUMP OF SERVICE device_policy:
DUMP OF SERVICE devicestoragemonitor:
DUMP OF SERVICE diskstats:
DUMP OF SERVICE dropbox:
DUMP OF SERVICE entropy:
DUMP OF SERVICE hardware:
DUMP OF SERVICE input_method:
DUMP OF SERVICE iphonesubinfo:
DUMP OF SERVICE isms:
DUMP OF SERVICE location:
DUMP OF SERVICE media.audio_flinger:
DUMP OF SERVICE media.audio_policy:
DUMP OF SERVICE media.player:
DUMP OF SERVICE meminfo:
DUMP OF SERVICE mount:
DUMP OF SERVICE netstat:
DUMP OF SERVICE network_management:
DUMP OF SERVICE notification:
DUMP OF SERVICE package:
Permission [android.permission.DUMP] (49f43060):
perm=Permission{49fc39e0 android.permission.DUMP}
android.permission.DUMP
DUMP OF SERVICE permission:
DUMP OF SERVICE phone:
DUMP OF SERVICE power:
DUMP OF SERVICE reboot:
DUMP OF SERVICE screenshot:
DUMP OF SERVICE search:
DUMP OF SERVICE sensor:
DUMP OF SERVICE simphonebook:
DUMP OF SERVICE statusbar:
DUMP OF SERVICE telephony.registry:
DUMP OF SERVICE throttle:
DUMP OF SERVICE usagestats:
DUMP OF SERVICE vibrator:
DUMP OF SERVICE wallpaper:
DUMP OF SERVICE wifi:
DUMP OF SERVICE window:
dumptcp: TCP/IP information
bugreport
Wakelock
Description: A locked wakelock, depending on its type, prevents the system from entering suspended or other low-power states. When creating a wakelock, you can select its type. If the type is set to WAKE_LOCK_SUSPEND, the wakelock prevents a full system suspend. If the type is WAKE_LOCK_IDLE, low-power states that cause large interrupt latencies, or that disable a set of interrupts, will not be entered from idle until the wakelocks are released. Unless the type is specified, this document refers to wakelocks with the type set to WAKE_LOCK_SUSPEND.
If the suspend operation has already started when locking a wakelock, it will abort the suspend operation as long it has not already reached the suspend_late stage. This means that locking a wakelock from an interrupt handler or a freezeable thread always works, but if you lock a wakelock from a suspend_late handler you must also return an error from that handler to abort suspend.
Debug Method: To check the wakelock status, use cat /proc/wakelocks.
name: the component that holds the wakelock
wake_count: the count of times the wakelock has been held
active_since: the time interval since the wakelock was last held
Tools:
CPUSpy.apk: Use this application to get the device's deep sleep time and to find out whether the device has a problem going into deep sleep.
get_activewakelock.sh: Use this script to get the name and active_since columns from /proc/wakelocks.
Both CPUSpy.apk and get_activewakelock.sh are attached as follows:
get_activewakelock.sh CPUSpy.apk
Miscellaneous Debugging Tricks
MethodTracing
Use MethodTracing to find hot spots and analyze performance. You can also check CPU usage, function call times, etc.
Follow these steps to do a trace:
import android.os.Debug;
……
android.os.Debug.startMethodTracing("/data/tmp/test"); // create /data/tmp first
…… // the program to be traced goes here
android.os.Debug.stopMethodTracing();
After running, there will be a trace file at /data/tmp/test.trace.
MAT link: http://www.eclipse.org/mat/downloads.php
Note:
The tool only shows Java space, not native space, memory usage.
SamplingProfile
SamplingProfiler samples routines at millisecond intervals, then outputs a sample log.
Follow these steps to do a sample profile:
import dalvik.system.SamplingProfiler;
……
SamplingProfiler sp = SamplingProfiler.getInstance();
sp.start(n); // n is sample times
sp.logSnapshot(sp.snapshot());
……
sp.shutDown();
// a sample thread will output the information in logcat
System Signal
Use this tool to send the system signals SIGQUIT and SIGUSR1 to Dalvik, which handles these signals (dalvik/vm/SignalCatcher.c) to print the call stack or memory usage.
Follow these steps to send a system signal and get the call stack:
$ chmod 777 /data/anr -R
$ rm /data/anr/traces.txt
$ ps # find pid
$ kill -3 pid # send SIGQUIT to process to get trace file
$ cat /data/anr/traces.txt
$ chmod 777 /data/misc -R
$ ps # find pid
$ kill -10 pid # send SIGUSR1 to process to get hprof file
$ ls /data/misc/*.hprof
Logcat
Use this tool to get the aplog printed from the Android system.
You can use the following methods to add to or retrieve the aplog.
android.util.Log uses println for Java output with I/V/D….
Dalvik uses a pipe and a thread: dup2 redirects stdout and stderr to the pipe (vm/StdioConverter.c: dvmStdioConverterStartup), a thread reads the pipe (dalvik/vm/StdioConverter.c: stdioconverterThreadStart()), and the LOG tool then outputs the log into /dev/log/* (system/core/liblog/logd_write.c: __android_log_print()).
The parameters for the logcat tool are:
# logcat -b main //show main buffer
# logcat -b radio //show radio buffer
# logcat -b events //show event buffer
JDWP (Java Debug Wire Protocol)
The Java Debug Wire Protocol (JDWP) is the protocol used for communication between a debugger and the Java virtual machine (VM) it debugs. In the Android system, JDWP is the protocol used between adb and a Java application on the device. Developers can use it for many debug purposes.
getLoadedClassCount()
printLoadedClasses() // requires the NDEBUG function to be enabled
Debug Tools
Powerful debug tools help developers root-cause issues quickly and easily. This chapter introduces typical Android debug tools and techniques for using them to root-cause issues.
GDB
Printing logs is one way to debug Android apps, but it is inefficient and difficult to use.
Gdb is a good tool for debugging in single step and looking directly into source code issues. This section explains how to use the gdb tool on Android platforms.
Target Device Side:
gdbserver :<port> --attach <PID>
Host PC Side:
adb forward tcp:<port> tcp:<port>
cd <your code base root directory>, so gdb can find the source code in the current working path.
Run the command: gdb <program to debug> (the program should first be compiled with the -g switch).
Start debugging the program.
Setup library patch with gdb command using these two commands:
#set solib-absolute-prefix <path of symbols> (be careful not to have a special character in the path, e.g., ~)
#set solib-search-path <path of lib under symbols>
To connect to the gdbserver on the target side, run the gdb command target remote :<port>.
Note regarding the program/library with debug symbols: although by default the Android build system uses the -g switch to build native libraries with debug symbols, it strips the debug symbols at the last build stage. So to use a native library with debug symbols, use the one in the out/target/product/symbols directory.
gdb multi-thread debug command
Gdb also provides commands to debug multiple threads in one process; use the following commands:
info threads – print all thread information for the program you are debugging
thread <tid> – switch to debugging this thread with the specified ID.
break <file name>:<line> – set a break point in the source code file at the specified line. This command is very useful for system_server, which has many threads.
For example, the following commands will set a break point in the InputDispatcher thread of the system_server process:
break InputDispatcher.cpp:1280, then continue.
To debug step by step, touch the screen at the point you want; gdb will stop the InputDispatcher thread.
set scheduler-locking off|on|step – When you debug multiple threads, you will find many other threads running at the same time while you use the "step"/"continue" commands on the current thread. With "set scheduler-locking", you can make your current debug thread the only running thread.
off – do not lock any thread, all threads are running, which is the default.
on – only the current debug thread is running.
step – when debugging step by step (except with the "next" command), only the current debug thread runs.
Debug Case: Debug service in system_server process
This debug case shows how to debug a service thread in the system_server process with the gdb tool.
When you build a native library with ndk-build, the library with debug symbols is located in the obj directory (the library under the lib directory is stripped of debug symbols). In our case, it is /home/zwang/app/obj/local/x86, so we need to add this path to gdb's library search path, solib-search-path.
#target remote :1234 to connect with gdbserver
#break zwangjni_test_app.c:12 – set a break point; you will get a message like "No source file named zwangjni_test_app.c. Make breakpoint pending on future shared library load?" Answer Y.
#continue
After stopping at the break point in the native library, you can debug step by step.
Android Core dump file Analysis
When there are program exceptions, a core dump file will be created and located in /mnt/sdcard/data/logs/crashlogxx/xxxx_xxx_xxx.core. Use adb pull to copy the core dump file to the host PC.
To load the core dump file, run command: gdb <ics>/out/target/product/mfld_pr2/symbols/system/bin/app_process xxxx_xxx_xxx.core
Then you can use commands like bt, frame, up, down, and print to check the call stack when the program has exceptions.
Troubleshooting in Eclipse*
Eclipse is a useful integrated development environment for Android application development. Sometimes you will encounter strange errors when using it; this section shows some typical problems and how to resolve them.
If you see a "bad version number in .class file" error message when you use the main menu Eclipse->Preferences->Android, it is because the Eclipse environment variable has the wrong Java runtime version number.
Go to Help->About->Installation Details, to check the Eclipse environment variable and set it correctly.
kprobe
Kprobes is a Linux* kernel debug tool that lets developers print kernel-level debug logs.
How to use kprobes for kernel debugging
Follow the steps below to print a kernel-level log to the dmesg buffer pool.
Copy the kprobes sample code into the Intel driver directory to build the kernel module:
cd ~/aosp/hardware/intel/linux-2.6/drivers/misc
cp -r /AOSP/hardware/intel/linux-2.6/samples/kprobes
Change the makefile to build the kprobe sample kernel module by adding the lines marked with + in the diff below.
wang@~/r4_1_stable/hardware/intel/linux-2.6/drivers/misc >git diff
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index 166a42e..6ef0f1d 100755
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -3,6 +3,7 @@
#
intel_fabric_logging-objs := intel_fw_logging.o intel_fabricerr_status.o
+obj-m += kprobes/
obj-$(CONFIG_IBM_ASM) += ibmasm/
obj-$(CONFIG_AD525X_DPOT) += ad525x_dpot.o
obj-$(CONFIG_AD525X_DPOT_I2C) += ad525x_dpot-i2c.o
diff --git a/samples/kprobes/Makefile b/samples/kprobes/Makefile
index 68739bc..8f253fc 100644
--- a/samples/kprobes/Makefile
+++ b/samples/kprobes/Makefile
@@ -1,5 +1,8 @@
# builds the kprobes example kernel modules;
# then to use one (as root): insmod <module_name.ko>
+CONFIG_SAMPLE_KPROBES=m
+CONFIG_SAMPLE_KRETPROBES=m
+
obj-$(CONFIG_SAMPLE_KPROBES) += kprobe_example.o jprobe_example.o
obj-$(CONFIG_SAMPLE_KRETPROBES) += kretprobe_example.o
Make bootimage to build the kprobe sample kernel module, then you can find it in:
Re-flash the phone images, including boot.bin and system.img, to make the magic number consistent between boot.bin and the kprobe modules; otherwise you will fail to insert the kprobe modules into the kernel.
To find a kprobe kernel message in /proc/kmsg, type insmod kprobe_example.ko.
Performance Tools
Performance issues have always been a headache for developers. Fortunately, there are some tools to help us. Here we introduce Intel® Graphics Performance Analyzers (Intel® GPA), Systrace, Matrix, Wuwatch, SEP, and Kratos.
Intel GPA
The Intel GPA tool can be used to dump much useful information from the device, such as CPU frequency, FPS, memory usage, network usage, OpenGL textures, etc.
Double-click intel-gpa_13.1_m64.deb and complete the installation.
To use Intel GPA:
$ gpa-system-analyzer
Figure 2-1
Connect the target device using a USB connection, and Intel GPA will recognize the device. Click the “Connect” button to connect the device, and the Intel GPA screen like the one shown in Figure 2-2 will display.
Figure 2-2
Figure 2-3
To launch an app on device, click on the name of the app in Intel GPA. The monitored actions include: CPU, Device IO, GPU, Memory, Memory Bandwidth, OpenGL*.
Figure 2-4
To analyze the results:
Figure 2-4 shows the actions being monitored in Intel GPA, including CPU 01 Frequency, CPU 02 Frequency, Disk Write, Target App CPU Load, and CPU 01 Load. The frequency of CPU core 1 is 2.0 GHz, and the load of CPU core 1 is 100%. With this tool, you can also find out if there are exceptions with the CPU, GPU, etc.
Systrace
The systrace tool helps analyze the performance of your application by capturing and displaying execution times of your application's processes.
Google's Systrace tool is supported in Android OS versions Jelly Bean and above. Use the following links to download the Systrace tool, which is in the SDK package.
NOTE: You can set the trace tags for systrace using your device's user interface by navigating to Settings->Developer options->Enable traces. Select the options you want from the list and click ok.
Profile an Android application
To get a systrace log of 10 seconds, run the following command: $ python systrace.py -d -f -i -l -t 10 -o mysystracefile.html
-o <FILE>, specifies the file to write the HTML trace report to.
-t N, traces activity for N seconds. Default value is 5 seconds.
-l, traces the CPU load. This value is a percentage determined by the interactive CPU frequency governor.
-i, traces the CPU idle events.
-f, traces the CPU frequency changes. Only changes to the CPU frequency are logged, so the initial frequency of the CPU when tracing starts is not shown.
-d, traces the disk input and output activity. This option requires root access on the device.
Note: After executing the above command, you have 10 seconds to profile the current android application.
To check the profile results:
Open mysystracefile.html (refer to Figure 3-1). Use the following keys to navigate the trace diagram:
“w” key : Zoom into the trace timeline
“s” key: Zoom out of the trace timeline
“a” key: Pan left on the trace timeline
“d” key: Pan right on the trace timeline
Figure 3-1
To analyze the results:
The time range: 4520 ms ~ 4820 ms.
The CPU frequency of the thread 6803(UnityMain) is about 800 Mhz.
The event marked in the black in Figure 3-2 takes about 18 ms. By comparing the value with other devices, you can find out if there is a difference when dealing with the same event.
The thread was running in different CPU cores and switched at least twice: CPU core 1 -> CPU core 2 -> CPU core 1… If a thread switches CPU cores frequently, it will affect the performance of the device.
Matrix
Matrix is a tool to measure power and performance (PnP) on Intel processor-based mobile platforms. The data capture methodology and information on the internal counters are Intel property and shouldn't be distributed externally. Tool download link: http://mcgwiki.intel.com/wiki/?title=PnP_Matrix_Tool_Setup
Unzip the Matrix 3.4.3.zip, which contains three files:
Driver: For Android versions 4.0+, this is not useful so you can ignore it.
matrix: This is the tool we will push to the target device we are testing.
MatrixUserGuide-3.4.4: the User Guide for matrix.
To push Matrix to the target device, type these commands:
$ adb root
$ adb remount
$ adb shell
# cd data
# mkdir Matrix
$ adb push <dir>/matrix /data/Matrix
Run matrix to get data from the platform. Matrix requires a time and at least one feature as mandatory arguments. Usage:
./matrix -f <feature> -t <time-in-seconds>
Here ./matrix is the matrix tool, -f specifies the features, and -t is the time. The duration is in seconds; the minimum is 1 second and the maximum is 3600 seconds. For example, -t 20 means 20 seconds. This will create a default output file named MatrixOutput.csv.
./matrix -f cstate -t 120 -o filename
This command stores the output in filename.csv (a user-specified name). Note: when using -o, give only the file name without any extension; Matrix automatically appends .csv after post-processing.
Case study
This case shows how to use matrix to capture feature values from the target device.
To see all the features matrix supports, use the command ./matrix -h.
To capture multiple features at the same time, you can use a command like this:
You will get a report document showing the capture results after this command.
Wuwatch
Wuwatch is a command line tool for both tracing and monitoring system power states. It traces C-state (processor power), S-states (S0ix and S3 system states), D0ix (device or IP block) states, both user and kernel wakelocks, and P-state (processor frequency) activity. While tracing C-states, it attempts to determine the cause of every C-state wakeup, which is a transition to a higher power state.
There are 9 files in Matrix 3.4.3.zip; we need to pay attention to two of them.
Summary_data_v3_1.py: a Python* script that generates summary data from the wuwatch raw text trace output.
WakeUpWatchForAndroid: the Wuwatch User Guide.
Integration with Android Distributions.
The driver and binary file of wuwatch are now integrated into many Android distributions. Before using the tool to get raw data, you must do some initialization.
$ adb root
$ adb remount
$ adb shell
# cd /lib/modules/
# insmod apwr3_1.ko
# lsmod (Check the result. See Figure 5-1 for an example.)
Figure 5-1
$ mkdir /data/wuwatch
$ cp /system/bin/wuwatch /data/wuwatch
$ cp /system/bin/wuwatch_config.txt /data/wuwatch (Check the result. See Figure 5-2 for an example.)
Figure 5-2
Get the raw data from DUT.
Use the following steps to quickly collect C-state, P-state, and wakelock data for 60s on an Android-based system.
$adb root
$adb remount
$adb shell
#cd /data/wuwatch/
#./wuwatch -cs -ps -wl -t 60 -o ./results/test
#cd results (After 60s, check the results. An example is shown in Figure 5-3.)
Figure 5-3
# exit
$adb pull /data/wuwatch/results/ <pc-local-dir>/ (Check the result. See Figure 5-4 for an example.)
Figure 5-4
Summarize the results.
Before summarizing the results, confirm that Python 2.7 is installed on your PC (Windows* or Linux*).
Copy summary_data_v3.1.py to the same directory as "test.txt" and "test.ww1".
--txt -o <local-dir>\test-summary.txt (Check the results.)
test-summary.txt.txt
SEP (Sampling Enabling Product)
Sampling Enabling Product (SEP) is a performance tool used for analyzing performance and tuning software on all Intel processor-based platforms. The SEP tool supports event-based sampling and counting through CPU performance counters. The tool consists of a collection portion (sep) and an output portion (sfdump5).
SEP collection overhead is extremely low (< 2% at default sampling frequencies).
NOTE: After the command executes, complete your operations on the program to be profiled within 20 seconds. After 20 seconds of profiling, the real_prof.tb6 file will be generated. To see all the features SEP supports, use the command ./sep -help.
The file real_prof.tb6 is the SEP profile result:
real_prof.tb6
Analyze the results:
(1). Use the SFDUMP5 tool to analyze SEP profile results.
# sfdump5 real_prof.tb6 -modules | less
To see all the features sfdump5 supports, use the ./sfdump5 command.
(2). Use the VTune™ Analyzer to analyze profile results.
Kratos
Kratos is an Intel-developed tool that monitors Android application system resource utilization, broadcast system messages (aka Android intents), battery activity, and platform thermals.
Kratos uses the collected data to measure power consumption of the entire device or to estimate power consumption of different platform components, displayed as run-time and post-processed graphs and as averages or totals in a table. System broadcast messages are overlaid on the graphs to provide workload context, enabling you to draw conclusions about a specific workload's power consumption profile.
Kratos is integrated by default into the main (R4) userdebug and eng branches of the JB PSI Android build.
How to use
Launch Kratos from the Android Launcher application.
Click the "Start Manual Profiling" button (see Figure 7-1).
Figure 7-1
Select the options you need to monitor in the "DATA" table (e.g., Figure 7-2).
Figure 7-2
Set the duration of data collection (e.g., Figure 7-3).
Figure 7-3
Click the Start button to get data from the target device (e.g., Figure 7-4).
Figure 7-4
If you did not enter a value for “DURATION”, you must Stop Profiling manually by clicking the Stop Profiling button, shown in Figure 7-4.
When profiling stops either by the setting or manually, you must confirm the action and save the results by clicking Yes (see Figure7-5 and Figure7-6).
Figure 7-5
Figure 7-6
Click the Load Session button to load the data of testing as shown in Figure 7-7.
Figure 7-7
Select the data that you want to analyze (see Figure 7-8), then click the Load button.
Figure 7-8
Check the results in the graph, like the one shown in Figure 7-9
Figure 7-9
Check the results with Stats as shown in Figure 7-10
Intel® Beacon Mountain is a powerful tool for all software developers who want to build apps for mobile devices based on the Android operating system. In just a few simple steps, it installs the entire suite of tools needed to develop applications natively and cross-compile them both for devices based on ARM* processors and for devices based on Intel® architectures (e.g., Intel® Atom). The Intel® Beacon Mountain tool is available in the "Tools and Downloads" section of the Android area of the Intel® Software portal.
Intel® Beacon Mountain also provides an emulation environment for deploying applications and testing their performance and correct operation on devices of various kinds, with different screen resolutions, various versions of the Android OS, different processors, and different amounts of installed RAM. During the first stages of installation, the Beacon Mountain installer will test whether it can also install a component called Intel® HAXM (in full, the Intel® Hardware Accelerated Execution Manager): this software is a hardware-assisted virtualization engine (also called a hypervisor) that uses Intel® Virtualization Technology (Intel® VT) to accelerate the execution of the Android emulator on a host machine at the hardware level.
However, different active hypervisor engines cannot coexist on the same machine. For this reason, some developers have had difficulty installing the Intel® HAXM component of Intel® Beacon Mountain when another hypervisor was already installed and enabled on their PC, such as Microsoft* Hyper-V, a fundamental tool for developing, for example, Windows* Phone applications.
É comunque importante sottolineare che l'installazione di Intel® Beacon Mountain può essere portata a termine anche senza necessariamente installare Intel® HAXM, viene solo notificato da parte dell'installer l'impossibilità di installare questo particolare componente, senza per questo precludere il portare a termine con successo l'installazione degli altri strumenti contenuti nella suite: questo però potrenbbe implicare una scarsa resa nelle performance di responsività dell'emulatore Android che Intel® Beacon Mountain andrà ad installare.
Ma allora com'è possibile risolvere il conflitto tra questi hypervisor, nel particolare tra Intel® HAXM e Microsoft* Hyper-V? Vediamo!
RESOLVING THE CONFLICT
A quick and easy way to resolve the conflict consists of a simple but very delicate operation. You will need to use BCDEDIT to create two different Windows* 8 boot options: the first will be the traditional option that enables Hyper-V by default (if present/installed); the second will let you boot Windows* 8 in a mode where the Hyper-V service is disabled, allowing you to install and use the entire suite of tools in Intel® Beacon Mountain (including the hardware acceleration provided by Intel® HAXM) without conflicts.
WARNING: a mistake in the procedure described below could prevent your Windows operating system from booting correctly. Neither Intel® nor the author of this guide can be held responsible for any resulting malfunction. BE CAREFUL!
Step 1.
Open a command prompt with administrator privileges.
Step 2.
Type:
C:\>bcdedit /copy {current} /d "Windows 8.1 Without Hyper-V"
A message like the following will be returned:
The entry was successfully copied to {08e28906-0ab9-11e3-9b2f-402cf41953d5}.
where {08e28906-0ab9-11e3-9b2f-402cf41953d5} is a GUID unique to your system.
Step 3.
We can now modify the newly created secondary boot entry so that Microsoft Hyper-V is disabled, simply by typing:
bcdedit /set {your-GUID} hypervisorlaunchtype off
Continuing the earlier example with the resulting GUID, the command to type becomes:
bcdedit /set {08e28906-0ab9-11e3-9b2f-402cf41953d5} hypervisorlaunchtype off
Step 4.
We can now close the prompt and reboot the system.
After the first boot phase, i.e., the BIOS load, the Windows 8 boot manager will offer two boot options: the first is our old Windows 8 system with Hyper-V enabled; the second entry is the same Windows 8 system, but with the Hyper-V service prevented from starting.
Many thanks to Microsoft* MVP (Virtual Machine Expert) Francesco Valerio Buccoli for his valuable technical advice.
The earliest attempt I know of porting a 3D engine to a real phone was that of Superscape, back in the very early 2000s. They were working with a number of OEMs to try to make their Swerve engine run on an ARM7. Those phones’ CPUs ran at about 40 MHz and included no cache. The content they could run on those devices was a maximum of 40 polygons, flat-shaded, with no texture and no z-buffer. It was a challenge for any artist! By comparison, early smartphones like the Nokia 7650 were super-fast, with an ARM9 running at 100 MHz, and cache. But that was more than ten years ago.
The evolution of mobile platforms since then has been spectacular. The first 3D games on phones had very little in common with what we now see on Android devices. One of the triggers of this giant leap was certainly the integration of dedicated graphics hardware into mobile SoCs (System-on-Chip). Along with many other architecture improvements, it powered a huge boost in triangle throughput, from a few hundred to hundreds of thousands, and an increase of two orders of magnitude in the pixel count. This has more recently allowed developers to finally create console-quality games for mobile devices.
Yet, game creators are hungry consumers of resources and have the bad habit of pushing the technology to its limits. That is why many challenges nowadays are very similar to those of the past. In many ways, mobile platforms are almost on par with the current generation of consoles, but they are still way behind modern gaming PCs, and they also have some particularities that one should know about before diving into developing mobile games.
Energy efficiency is still the main constraint that limits the overall processing power of mobile devices, and will continue to be so in the foreseeable future. Memory is also limited—although this has improved enormously in the past few years—and shared with other processes running in the background. Bandwidth is, as always, a very precious resource in a unified architecture and must be used wisely or it could lead to a dramatic drop in performance. In addition, the variety of devices, processing power, display sizes, input methods, and flavors in general is something that mobile developers have to deal with on a daily basis.
Here comes Anarchy!
At Havok we have been trying to make life a bit easier for Android developers by handling most of these challenges ourselves with Project Anarchy.
We have recently announced the release of this toolset made up of Havok’s Vision Engine, Physics, AI, and Animation Studio; components of which have been used to build multiple games like Modern Combat 4, Halo* 4, Skyrim*, Orcs Must Die, and Guild Wars 2 to name a few. Project Anarchy optimizes these technologies for mobile platforms, bundles them together along with exporters for Autodesk’s 3ds Max* and Maya* and a full WYSIWYG editor, and allows users to download a complete toolkit for development on iOS*, Android (ARM and x86), and Tizen*.
Figure 1. "A screenshot of the RPG demo included in Project Anarchy, an example of content that runs on current Android platforms."
Vision goes mobile
As one would expect, the tool that required the most work to be ported to Android was our 3D game engine. The Vision Engine is a scalable and efficient multi-platform runtime technology, suited for all types of games, and capable of rendering complex scenes at smooth frame rates on PCs and consoles. Now the Vision Engine had to perform at similar standards on mobile platforms. And as important as that, we wanted to provide the same toolset as for any other platform, but streamlined specifically to address the challenges associated with development on mobile platforms.
Having worked with consoles such as Xbox 360*, PlayStation* 3, and PlayStation Vita*, we were already familiar with low memory environments, and we had optimized our engine and libraries for those kinds of constrained environments. But moving to mobile meant having to make other optimizations, and the specifics of mobile platforms required us to think of some new tricks to make things run nicely with limited resources. Several optimizations had to be made to reduce the number of drawcalls, the bandwidth usage, the shader complexity, etc.
A few rendering tricks
For example, additional rendering passes and translucency are expensive. That is why we had to simplify our dynamic lighting techniques. The optimization we used here was to collapse one dynamic light—the one that affects the scene the most and would thus have produced the highest overdraw—into one single pass with the static lights. As there is often one dominant dynamic light source in a scene, this greatly helped performance by reducing drawcall count and bandwidth requirements. In addition, we also offer vertex lighting as a cheap alternative, but pixel lighting will still be required for normal maps.
Vision also supports pre-baked local and global illumination, which is stored in lightmaps (for static geometry) and what we call a lightgrid (used for applying pre-computed lighting contributions to dynamic objects). In a lightgrid, you have a 3D grid laid out in the scene that stores the incoming light from six directions in each cell. On mobile devices, we can optionally use a simpler representation to improve performance. This representation only stores light from one primary direction along with an ambient value. The lighting results do not achieve the same visual fidelity, but they are usually good enough and very fast.
Figure 2. "The difference in the lighting results when using a normal lightgrid versus a simple lightgrid."
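To make the difference concrete, here is a minimal, hypothetical sketch of the two lightgrid cell representations described above. It assumes simple Lambertian diffuse lighting with scalar intensities and unit-length directions; the names and layout are illustrative, not Vision's actual data structures.

```python
# Hypothetical sketch of the two lightgrid cell representations.
# Assumes Lambertian diffuse lighting with scalar intensities;
# not Havok Vision's actual implementation.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Full lightgrid cell: incoming light from six axis-aligned directions.
AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def full_cell(normal, six_intensities):
    """Accumulate all six directional contributions, weighted by N.L."""
    return sum(max(0.0, dot(normal, axis)) * i
               for axis, i in zip(AXES, six_intensities))

def simple_cell(normal, primary_dir, primary_intensity, ambient):
    """Mobile path: one primary direction plus a flat ambient term."""
    return max(0.0, dot(normal, primary_dir)) * primary_intensity + ambient
```

The simple path trades the six weighted lookups for a single dot product and an add, which is why it is usually good enough and very fast on mobile GPUs.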
As mobile GPUs often have limited resources for complex arithmetic operations, evaluating exponential functions for specular lighting could also become a serious bottleneck in terms of frame rate. To avoid this, we pre-bake cubemaps in our scene editor that accumulate lighting information from all surrounding light sources. While diffuse lighting is computed as usual, we approximate specular highlights by sampling from the generated cubemap and adjusting the intensity to account for local occlusion. This allows us to approximate an arbitrary number of specular highlights at the cost of a single texture lookup, while still getting a very convincing effect.
Shadow mapping was another feature that needed some tweaking. Instead of using a deferred shadow mask as we do on PCs (i.e., performing the depth comparison in a full-screen post-processing pass and then using the resulting texture to modulate the dynamic lighting), we fetch the shadow map directly during the lighting pass to save memory bandwidth. Furthermore, as texture sampling is relatively expensive on mobile devices, we limited our shadow maps to a single sample comparison instead of percentage-closer filtering. As a result, the shadows have hard edges, which is generally acceptable if shadow casting is restricted to a relatively small area. We currently support shadow maps for directional and spot lights, but we chose not to support shadow maps for point lights on mobile platforms for now, as the tetrahedron shadow mapping technique we use on PCs and consoles would be prohibitively expensive. Shadow mapping on mobile is also recommended to be used only in small areas, and to have few objects casting shadows, like the players and maybe a few enemies for example.
We also spent some time in making volumetric effects (volumetric lights, fog volumes, sun shafts) run smoothly on mobile. These techniques typically require rendering multiple transparent passes, performing multiple texture sampling operations per pixel, or computing integrals—each of which is prohibitively expensive on mobiles. As a result, we ended up going down a different route. On mobile platforms, our volumes are actually made of a low-poly mesh consisting of a few layers, like an onion, which a shader will fade out as the camera approaches. The trick here consists of collapsing the geometry to lines as soon as the transparency is so low that you can’t actually see the geometry anymore. These degenerated triangles will not be rasterized and so the pixel fill-rate is significantly decreased and reasonable performance is achieved.
Figure 3. "An example of shadow maps and volumetric effects running on Android*"
Terrains also required some modifications for mobile. On PCs and consoles we use height-field based terrains with dynamic geometry mipmapping, along with runtime texture blending, and three-way mapping to avoid texture stretching on steep slopes. As a result, the vertex counts are relatively high, and the bandwidth requirements resulting from mixing multiple detail textures are substantial. To make Vision terrains work on mobile platforms, we allow generating optimized static meshes from heightmaps and baking down the textures into a single map per terrain sector. As a consequence, we can’t render truly huge worlds with runtime-modifiable terrain, but this limitation is typically acceptable on mobile.
Another convenient feature that we added to Vision to improve performance of pixel-heavy scenes on devices with very high resolution displays is an option for upscaling. This is done by rendering the scene into a low resolution off-screen target and upscaling it to the display resolution in a separate step. On the other hand, to maintain high visual quality, UI elements and text are still rendered at the display full resolution. This works quite well on devices with resolutions higher than 300 dpi, and can yield substantial performance gains.
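As a rough illustration of why this pays off, the following back-of-envelope sketch computes the pixel-shading savings; the resolution and scale factor are made-up examples, not Vision defaults.

```python
# Back-of-envelope sketch of rendering into a low-resolution off-screen
# target and upscaling; the numbers are illustrative, not Vision defaults.

def offscreen_target(display_w, display_h, render_scale):
    """Size of the off-screen target the 3D scene is rendered into."""
    return round(display_w * render_scale), round(display_h * render_scale)

def pixel_savings(display_w, display_h, render_scale):
    """Fraction of per-pixel shading work saved vs. native resolution."""
    low_w, low_h = offscreen_target(display_w, display_h, render_scale)
    return 1.0 - (low_w * low_h) / (display_w * display_h)

# Example: a 2560x1600 tablet rendered at 70% scale shades roughly half
# as many pixels, while UI and text are still drawn at full resolution.
```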
Shader authoring considering mobile GPU oddities
All our existing shaders in the Vision Engine are written in HLSL. So, the first obvious problem when targeting OpenGL* ES platforms is that shaders require GLSL. To make cross-platform development as easy as possible, we designed a system in which shaders only need to be written once, in HLSL/Cg, and they are automatically translated to GLSL by vForge, our scene editor, when they are compiled.
The second concern when writing shaders for mobile is how different the hardware architecture is from other more traditional platforms. For a start, to save space and power, all mobile SoCs have unified memory. System RAM is shared between the CPU and GPU; it is limited, and typically slower. Therefore, our aim is to avoid touching RAM as much as possible. For example, minimizing the vertex size and the number of texture fetches is generally a good idea.
Another big difference is that most mobile GPUs, such as the PowerVR* GPUs used in Intel® Atom™ systems, use tile-based deferred rendering. The GPU divides the framebuffer into tiles (16x16, 32x32), defers the rendering until the end, and then processes all drawcalls for each tile—one tile fits entirely inside one GPU core. This technique is very efficient because pixel values are computed using on-chip memory, requiring less memory bandwidth and less power than traditional rendering techniques, which is ideal for mobile devices. An additional benefit of this approach is that, as it just involves comparing some GPU registers, depth and stencil testing is very cheap. Also, as only the resolved data is copied to RAM, there is no bandwidth cost for alpha blending, and MSAA is cheap and uses less memory.
In tile-based architecture, color/depth/stencil buffers are copied from RAM to tile memory at the beginning of the scene (restore) and copied back to RAM at the end of the scene (resolve). These buffers are kept in memory so that their contents can be used again in the future. In many applications, these buffers are cleared at the start of the rendering process. If so, the effort to load or store them is wasted. That is why in Vision we use the EXT_discard_framebuffers extension to discard buffer contents that will not be used in subsequent operations. For the same reason, it is also a good idea to minimize switching between render targets.
We also want to avoid dependent texture reads in the pixel shader, as they make texture prefetching useless. When dependent texture reads are performed by the shader execution units, the thread will be suspended and a new texture fetch task will be issued. To prevent this, we do not do any mathematical operations on texture coordinates in the pixel shader.
Dynamic branching in our shaders is also something that we want to avoid, as it causes a pipeline flush that ruins performance. Our solution for this is a shader provider that will select the particular shader permutation for a specific material depending on its properties and thus avoid branching. Also, to reduce the runtime memory consumption we store these shaders in a compressed format and only decompress them when they are actually needed.
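The idea can be sketched as follows. This is a hypothetical Python illustration of the approach, not Vision's C++ API; the feature names and the decompression callback are made up.

```python
# Hypothetical sketch of a shader provider that selects a precompiled
# permutation from material properties instead of branching in the shader.
# Names are illustrative, not Havok Vision's actual API.

def permutation_key(material):
    """Build a deterministic key from the material's feature flags."""
    features = ("normal_map", "specular_cubemap", "vertex_lighting")
    return tuple(bool(material.get(f)) for f in features)

class ShaderProvider:
    def __init__(self):
        self._cache = {}  # key -> decompressed shader

    def get_shader(self, material, decompress):
        """Decompress a permutation lazily, only when first requested."""
        key = permutation_key(material)
        if key not in self._cache:
            self._cache[key] = decompress(key)
        return self._cache[key]
```

The lazy decompression mirrors the memory-saving behavior described above: permutations stay compressed until a material actually needs them.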
It is also important to take into account the precision used in mathematical operations in shaders, as reducing the precision can substantially improve performance. Therefore, it is recommended to always use the minimum acceptable precision to achieve any particular effect.
Figure 4. "An example of usage of lightweight mobile shaders in Vision: a glowing emissive texture and a specular cubemap that gives a shiny effect to the rocks."
These are just general optimizations that should work on all Android platforms, but keep in mind that every device and every GPU has its oddities. So, a good piece of advice would be to always read the vendor-specific developer guidelines before targeting any platform.
A Lifetime headache
With incoming calls and messages and a thousand different events popping up at the most inappropriate time, application lifetime management on Android devices becomes a serious matter. The operating system can require applications to free up resources, for instance, when another application is launched and requires system resources. Similarly, the operating system can require your application to terminate at any time.
In Vision we handle unloading and restoring graphics resources (textures, GPU buffers, shaders) when the mobile app goes to the background. This is mandatory for Android because all OpenGL ES handles are invalidated as soon as the app goes to the background, but on other platforms it is also generally a good idea to free some memory to reduce the risk of the app being terminated by the operating system due to a low memory situation.
Also on Android, handling the OS events can be a tricky job, because the order in which they happen is not the same for different devices and/or manufacturers. So this requires implementing a robust internal state handler that depends on the exact order of events as little as possible. This means monitoring the running state of an app, checking if it has a window handle, and whether it is focused.
Figure 5. "Application lifetime management on Android devices becomes a serious matter."
Havok Physics, AI, and Animation Studio
The other products included in Project Anarchy—Havok Physics, AI, and Animation Studio—do not have any graphical parts in them. So, when we ported them to mobile, it was purely about CPU and memory optimization.
We already supported Linux*-based systems, so when we started on mobile, and since they have broadly similar compilers and system APIs to Linux environments, getting the code to work was relatively straightforward. The main effort after that was to make them fast. We worked closely with Intel to make sure our code was optimized for Intel® Streaming SIMD Extensions (Intel® SSE). The compiler can make a large difference in some areas of code, and we see on-going increases in performance from newer compiler revisions as the platform SDKs mature.
The second prong of attack was multithreading. Given that most mobile CPUs are now multicore, we took our code, already well optimized for multithreaded environments on PCs and consoles, and thoroughly profiled it on mobile platforms to ensure that it was efficiently multithreaded on our target systems.
Finally, we had to make sure our code stayed cache efficient, given that memory speeds on mobile are relatively low. This is not a problem specific to mobile, so our existing optimizations to reduce cache misses ported over well.
From painful to painless workflow
The development workflow on mobile platforms has always been known to be somewhat painful, especially when developing for multiple platforms and having to port assets to different formats to match the requirements of each device (i.e., different texture sizes, file formats, compression methods). On top of this, files usually have to be bundled together with the application package, which means that for each asset change—textures, sounds, models—the package has to be rebuilt and uploaded to the device. For larger projects the build time of the packages, and the upload and install times, can become prohibitively long and slow down development due to lengthy iteration cycles.
Figure 6. "Screenshot of the RPG demo content in the scene editor vForge during development"
Managing and previewing assets
To make this process easier and faster, we decided to implement a few custom tools. The first one is an asset management system that has an easy to use asset browser integrated with our scene editor vForge. The asset management system provides automatic asset transformation capabilities and can convert textures from their source format (i.e., PNG, TGA) to a platform-specific format (i.e., DDS, PVR, ETC). As a result, developers do not have to think about which texture formats are supported on which platform. The actual conversion is automatically performed in vForge, but developers can also configure each asset individually to allow precise tweaking if needed, or even hook in their own external tool to do custom transformations on any type of asset (i.e., reducing the number of vertices of models).
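To illustrate the kind of mapping such a pipeline performs, here is a small hypothetical sketch; the platform names, format table, and function are made up for illustration and are not part of vForge.

```python
# Illustrative sketch of per-platform texture-format selection of the kind
# an asset pipeline performs automatically. Not vForge's actual logic.

PLATFORM_TEXTURE_FORMAT = {
    "pc":      "DDS",   # DirectX-friendly container
    "ios":     "PVR",   # PowerVR-compressed textures
    "android": "ETC",   # Ericsson Texture Compression, GLES baseline
}

def converted_name(asset_path, platform):
    """Pick the output file name for a source texture on a given platform."""
    stem, _, _ = asset_path.rpartition(".")
    return "%s.%s" % (stem, PLATFORM_TEXTURE_FORMAT[platform].lower())
```

With a table like this, developers do not have to remember which compressed format each target supports; the pipeline picks it per platform.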
We also added a material template editor in vForge that allows specifying platform-dependent shader assignments. This makes it possible to have different shaders, optimized for different platforms, configure them once and use them on every material that should use the same configuration.
All scenes can be previewed in vForge using device-specific resources and shaders instead of the source assets, thus allowing the artists to quickly see how the scene will look on the target device without having to deploy it.
Figure 7. "The asset management system includes an easy to use asset browser integrated with the scene editor, with automatic asset transformation capabilities."
The magically mutating assets
The second tool we implemented to enable faster turnaround times is an HTTP-based file serving system that allows an application running on a mobile device to stream in data from a host PC. This is extremely useful during development cycles because—together with the vForge preview—it completely removes the need for re-packaging and re-deploying the application every time an asset is modified.
Behind the scenes, the file server will cache downloaded files on the device and only re-download them when they have changed on the host PC, allowing for very fast iteration times, as only changed scenes, textures, etc. are transferred. In most cases it isn't even necessary to restart the application on the device to update resources, as almost all resource types can be dynamically reloaded inside a running application.
As a side effect, creating and deploying application packages is usually much faster when using this tool, as packages will only have to contain the compiled executable code—even scripts can be transferred over the file server connection. This allows for much faster iteration times, given that executables are typically very small in comparison with the associated scene data.
Handling the input remotely
Another tool we created to shorten turnaround times is what we’ve called “Remote Input.” It is actually a very simple idea, consisting of an HTML5-based web app that forwards inputs from a mobile device to the game running on a PC. Touch input events, as well as device acceleration and orientation data, are simply forwarded from the web browser on your mobile to the PC version of your application, or even to a scene running inside vForge. It can be used to rapidly prototype and test multi-touch input in your game without having to deploy it to a mobile device.
OpenGL ES 3.0 and the future
Some of the limitations in the techniques explained in this article may not be necessary in the near future. As smartphones and tablets get more and more powerful, the restrictions will be relaxed. But game features will advance and continue to push mobile hardware to its limits, as they have been doing for the past fifteen years.
New devices will offer more CPU and GPU cores, making it even more necessary to exploit the wonders of multithreaded computing. Longer term, we will probably get closer in performance and capabilities to current generation PCs, but there will still be some gotchas and caveats to watch out for on mobile, like the limited memory bandwidth.
The new APIs that are right there on your doorstep also offer a broad range of new, exciting, and challenging possibilities. We already have a few devices out in the wild with cores and drivers fully conformant with OpenGL ES 3.0 (supported from Android 4.3 Jelly Bean). Some of the new features include occlusion queries (already in use on PCs and consoles), transform feedback (enabling features like GPU skinning with very high bone counts), instancing (extremely useful to reduce drawcall count and therefore CPU load), multiple render targets (to facilitate deferred rendering and post-processing effects), a bunch of new texture formats, and many other cool features. On the other hand, we will also be able to start moving some of the computational work over to the GPU thanks to OpenCL*, which is just emerging on mobile. We already have full GPU-driven physics simulations on the PlayStation 4, but this is an open R&D area for us in the mobile arena and will certainly be very exciting to explore.
About the author
Carla is a Developer Relations Engineer at Havok, responsible for helping developers to make better games with the Vision Engine. She has been working in the mobile 3D graphics arena since 2004. She started at RTZ interactive, a small company in Barcelona, developing 3D games for Java and Brew phones. A few years later, she moved over to developing games for the iPhone. Prior to joining Havok, she spent a couple of years at ARM working on the OpenGL ES drivers for the Mali-T600 series of GPUs.
Havok's Project Anarchy is a free mobile game engine for iOS, Android (x86 included), and Tizen. It includes Havok's Vision Engine together with Havok Physics, Havok Animation Studio, and Havok AI. It features an extensible C++ architecture, mobile-optimized rendering, a flexible asset management system, and Lua scripting and debugging.
Several complete game samples are included in the SDK, and a number of tutorials are posted on the official Project Anarchy website to help developers get started with the engine.
The engine, which lets you publish your applications for free on several operating systems, has been used in games you will certainly know: The Elder Scrolls®, Halo®, Assassin's Creed®, Uncharted™ and Skylanders™.
Key points:
Extensible, plugin-based C++ architecture
Complete game samples with full source code
Large community: forum discussions, updates, training,…
NO commercial restrictions, FREE to publish games for iOS, Android, and Tizen
Includes FMOD, the audio library used in games and applications for sound management
For more details on the provided tools, see this article (in French) on Développez:
The Intel USB driver for Android devices lets you connect your Windows*-based machine to your Android device containing an Intel Atom processor.
Note: version 1.1.5 of the driver is designed for Android application developers using Microsoft Windows* 8. For device support, contact your device manufacturer. Below is the link to download version 1.1.4:
Microsoft Windows* 8 (32/64-bit), Windows 7 (32/64-bit), Windows Vista (32/64-bit), Windows XP (32-bit only)
There are many third-party vendor applications available on the Android market, and making them run on a mobile platform is very important for that platform's success in the market. The problem is that there is no source code available for these third-party applications, and sometimes they do not run well on some mobile platforms. How can we identify the issue?
Debugging Java applications
There are many tools available for debugging Android Java applications; they can help developers debug Android applications quickly and easily.
Below is a list of Android debug tools and links for how to download them:
baksmali: disassembles an odex/dex file into smali files. You need to put the files under /system/framework in the same working directory.
With smali code being so difficult to write, how can we write smali code quickly?
The answer is Eclipse. You can create an Android project in Eclipse, write the code you want in Java, and build an apk file containing the dex. Finally, you can get the smali code for your function with apktool and paste that smali code wherever you need it.
For example: the Log.x API in Java code. You can decompile it into smali, then paste it into the application's smali code to print debug messages.
After debugging and changing your smali code, you need to resolve any run-time errors. To see smali run-time errors, you can use the following commands:
adb logcat | grep dalvikvm
adb logcat | grep VFY
The VFY output will show the failing smali file, the routine, and the cause of the error. The dalvikvm output will show call stacks, context, etc.
Typical runtime errors:
1. The variable list isn’t consistent with the declaration
2. The routine call type is incorrect
For example: use invoke-virtual for public/package routines and invoke-direct for private routines.
3. The apk is not signed properly
4. Use adb logcat | grep mismatch to check which package's signature is incorrect
Smali Debug Troubleshooting
After changing the smali code, you need to package it by running “apktool b.” Typically, there will be some error messages, such as the following:
res/values/styles.xml:166: No resource found that matches the given name '@*android:style/Widget.Button'. You need to change '@*android:style/Widget.Button' on line 166 of styles.xml into '@android:style/Widget.Button'.
Many error lines like :\apktool\apk\res\values\public.xml:3847: error: Public symbol xxxxx is not defined.
All these errors are actually due to the first error line:
res/values/strings.xml:242: error: Multiple substitutions specified in non-positional format. Did you mean to add the formatted="false" attribute?
Check line 242 in the strings.xml file to find the problem string and fix it.
Function calls (invoke-virtual and similar instructions) can only use registers v0~v15 as parameters; there will be errors if you use v16 and above. There are two ways to fix this:
Use invoke-virtual/range {p1 .. p1} instruction
Add move-object/from16 v0, v18 instruction
pN is equal to the local variable count + N. For example, if “.local 16” is declared, you can use registers v0~v15; p0 is then equal to v16, and p1 is equal to v17.
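The register numbering rules above can be captured in a tiny hypothetical helper (illustrative only, not part of any smali toolchain):

```python
# Illustrative helpers for the smali register rules described above:
# with ".local N" declared, locals occupy v0..v(N-1) and parameter
# register pK is an alias for v(N + K). Not part of any real toolchain.

def p_to_v(local_count, k):
    """Map parameter register pK to its absolute vN name."""
    return "v%d" % (local_count + k)

def usable_as_invoke_arg(reg_number):
    """Non-range invoke instructions can only address registers v0~v15."""
    return reg_number <= 15
```

So with .local 16, p0 aliases v16, which is exactly why it cannot be passed directly to a non-range invoke and needs the /range or move-object/from16 workaround.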
Jump label conflict
You will get this message if the same jump label appears twice. For example, two cond_11 labels will make the compile fail. You can rename one of them to something unique, like ABCD_XXXX, to resolve the conflict.
Using an undeclared variable
Only the registers declared by the “.local” instruction can be used. For example, .local 30 means this routine can only use v0~v29; using v39 will cause an error.
Debugging an application's x86 native library
For example, let’s say you have an apk with the x86 native library libcmplayer_14.so. When the application runs on an Intel processor-based platform, a tombstone shows a crash in libcmplayer_14.so. The following paragraphs show how to narrow down potential problems by checking the API calls libcmplayer_14.so makes into the Intel processor-based platform.
Step 1: Use readelf to check which Intel processor-based platform libraries libcmplayer_14.so depends on. This will give you some sense of which component the issue may be related to.
readelf -d libcmplayer_14.so
Dynamic section at offset 0xd8b8 contains 33 entries:
Tag        Type            Name/Value
0x00000001 (NEEDED)        Shared library: [libdl.so]
0x00000001 (NEEDED)        Shared library: [liblog.so]
0x00000001 (NEEDED)        Shared library: [libz.so]
0x00000001 (NEEDED)        Shared library: [libui.so]
0x00000001 (NEEDED)        Shared library: [libmedia.so]
0x00000001 (NEEDED)        Shared library: [libbinder.so]
0x00000001 (NEEDED)        Shared library: [libutils.so]
0x00000001 (NEEDED)        Shared library: [libstdc++.so]
0x00000001 (NEEDED)        Shared library: [libgui.so]
0x00000001 (NEEDED)        Shared library: [libandroid.so]
0x00000001 (NEEDED)        Shared library: [libsurfaceflinger_client.so]
0x00000001 (NEEDED)        Shared library: [libm.so]
0x00000001 (NEEDED)        Shared library: [libc.so]
0x0000000e (SONAME)        Library soname: [libcmplayer.so]
0x00000010 (SYMBOLIC)      0x0
0x00000019 (INIT_ARRAY)    0xe89c
0x0000001b (INIT_ARRAYSZ)  16 (bytes)
0x0000001a (FINI_ARRAY)    0xe8ac
0x0000001c (FINI_ARRAYSZ)  12 (bytes)
0x00000004 (HASH)          0xd4
0x00000005 (STRTAB)        0x8f0
0x00000006 (SYMTAB)        0x350
0x0000000a (STRSZ)         2409 (bytes)
0x0000000b (SYMENT)        16 (bytes)
0x00000003 (PLTGOT)        0xe9f4
0x00000002 (PLTRELSZ)      496 (bytes)
0x00000014 (PLTREL)        REL
0x00000017 (JMPREL)        0x12a4
0x00000011 (REL)           0x125c
0x00000012 (RELSZ)         72 (bytes)
0x00000013 (RELENT)        8 (bytes)
0x6ffffffa (RELCOUNT)      6
0x00000000 (NULL)          0x0
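To repeat this check across many libraries, the NEEDED entries can be extracted from the readelf output programmatically; a small sketch assuming the GNU readelf output format shown above:

```python
import re

def needed_libraries(readelf_dynamic_output):
    # collect the shared libraries a binary links against, from lines like:
    #  0x00000001 (NEEDED)  Shared library: [libdl.so]
    return re.findall(r"\(NEEDED\)\s+Shared library: \[([^\]]+)\]",
                      readelf_dynamic_output)

sample = """
 0x00000001 (NEEDED)      Shared library: [libdl.so]
 0x00000001 (NEEDED)      Shared library: [liblog.so]
 0x0000000e (SONAME)      Library soname: [libcmplayer.so]
"""
print(needed_libraries(sample))  # ['libdl.so', 'liblog.so']
```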
Step 2: Use the objdump tool to find the undefined (UND) API symbols used by libcmplayer_14.so. These give you the API names libcmplayer_14.so may use that could be the problem calls.
Step 3: Add a log, or use another debug method, in the Intel processor-based platform libraries and APIs used by libcmplayer_14.so to find the crash position.
Now you can use any of the gdb commands to analyze the core dump file. For example, use the “bt” command to get the backtrace.
Using objdump to check a native library
Using the same example as above:
Get the code base related to the bug.
Build the code base for the bug.
cd <aosp>/out/target/product/<platform>/symbols/system/lib. The symbol files for the libraries are here.
Run objdump -d libjni_latinime.so > tmp.log to disassemble the library.
Open tmp.log and search for the eip offset 880b. You can find the faulting code position and the routine name in the disassembly in tmp.log.
For example, you can use c++filt to turn _ZN8latinimeL30latinime_BinaryDictionary_openEP7_JNIEnvP8_jobjectP8_jstringxxiiii into the readable function name latinime::latinime_BinaryDictionary_open(_JNIEnv*, _jobject*, _jstring*, long long, long long, int, int, int, int).
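If c++filt is not at hand, the scope and function name can still be recovered by reading the length-prefixed components of the mangled symbol; a minimal sketch of Itanium-ABI nested-name parsing that ignores the parameter encoding and handles only simple cases like this one:

```python
import re

def demangle_scope(symbol):
    # _ZN <components> E ... : each component is <len><chars>,
    # optionally prefixed by 'L' (internal linkage)
    assert symbol.startswith("_ZN")
    i, parts = 3, []
    while i < len(symbol) and symbol[i] != "E":
        if symbol[i] == "L":
            i += 1
        m = re.match(r"\d+", symbol[i:])
        if not m:
            break
        n = int(m.group())
        i += m.end()
        parts.append(symbol[i:i + n])
        i += n
    return "::".join(parts)

sym = "_ZN8latinimeL30latinime_BinaryDictionary_openEP7_JNIEnvP8_jobjectP8_jstringxxiiii"
print(demangle_scope(sym))  # latinime::latinime_BinaryDictionary_open
```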
/storage/sdcard0/Android/data/com.gameloft.android.ANMP.GloftBPHM.ML: the data in this directory is most likely created by the application itself when launched.
x86 Debug Case
Antutu tombstone issue [BZ 107342]
Run the 3D bench test after the DUT is encrypted; a tombstone occurs. The x86 emulator also shows the problem, so the app is likely responsible for the issue.
Run cat /logs/his* to find the tombstone core dump file. Copy the Antutu libraries 3drating.5 and libabenchmark.so into out/target/platform/redhookbay/symbols/system/lib.
Use the “GDB check core dump” method to open the core dump file. Use the bt command to show the call stack as follows:
#0 0x5e361ef5 in native_window_set_buffers_format (format=4, window=0x0) at system/core/include/system/window.h:749
#1 ANativeWindow_setBuffersGeometry (window=0x0, width=0, height=0, format=4) at frameworks/base/native/android/native_window.cpp:63
#2 0x60f4be20 in Ogre::AndroidEGLWindow::_createInternalResources(ANativeWindow*, AConfiguration*) () from /home/zwang/r4_2_stable/out/target/product/redhookbay/symbols/system/lib/3drating.5
#7 0x60ecaf05 in OgreAndroidBaseFramework::initRenderWindow(unsigned int, unsigned int, unsigned int) () from /home/zwang/r4_2_stable/out/target/product/redhookbay/symbols/system/lib/3drating.5
#8 0x60ec2b71 in ogre3d_initWindow () from /home/zwang/r4_2_stable/out/target/product/redhookbay/symbols/system/lib/3drating.5
#9 0x5f09e271 in Java_com_antutu_ABenchMark_Test3D_OgreActivity_initWindow () from /home/zwang/r4_2_stable/out/target/product/redhookbay/symbols/system/lib/libabenchmark.so
#10 0x40dce170 in dvmPlatformInvoke () at dalvik/vm/arch/x86/Call386ABI.S:128
#11 0x40e27a68 in dvmCallJNIMethod (args=0x57941df8, pResult=0x8000cf10, method=0x57c11e10, self=0x8000cf00) at dalvik/vm/Jni.cpp:1174
#12 0x40df197b in dvmCheckCallJNIMethod (args=0x57941df8, pResult=0x8000cf10, method=0x57c11e10, self=0x8000cf00) at dalvik/vm/CheckJni.cpp:145
#13 0x40e2da5d in dvmResolveNativeMethod (args=0x57941df8, pResult=0x8000cf10, method=0x57c11e10, self=0x8000cf00) at dalvik/vm/Native.cpp:135
#14 0x40f2ec8d in common_invokeMethodNoRange () from /home/zwang/r4_2_stable/out/target/product/redhookbay/symbols/system/lib/libdvm.so
#15 0x57941df8 in ?? ()
#16 0x40de1626 in dvmMterpStd (self=0x8000cf00) at dalvik/vm/mterp/Mterp.cpp:105
#17 0x40ddefc4 in dvmInterpret (self=0x8000cf00, method=0x579e0b68, pResult=0xbffff604) at dalvik/vm/interp/Interp.cpp:1954
#18 0x40e590ec in dvmInvokeMethod (obj=0x0, method=0x579e0b68, argList=0x4207c260, params=0x4207c170, returnType=0x417e42d0, noAccessCheck=false) at dalvik/vm/interp/Stack.cpp:737
#19 0x40e6cf67 in Dalvik_java_lang_reflect_Method_invokeNative (args=0x57941f00, pResult=0x8000cf10) at dalvik/vm/native/java_lang_reflect_Method.cpp:101
#20 0x40f2ec8d in common_invokeMethodNoRange () from /home/zwang/r4_2_stable/out/target/product/redhookbay/symbols/system/lib/libdvm.so
#21 0x57941f00 in ?? ()
#22 0x40de1626 in dvmMterpStd (self=0x8000cf00) at dalvik/vm/mterp/Mterp.cpp:105
#23 0x40ddefc4 in dvmInterpret (self=0x8000cf00, method=0x579d63c0, pResult=0xbffff8c8) at dalvik/vm/interp/Interp.cpp:1954
#24 0x40e57e1c in dvmCallMethodV (self=0x8000cf00, method=0x579d63c0, obj=0x0, fromJni=true, pResult=0xbffff8c8, args=<optimized out>) at dalvik/vm/interp/Stack.cpp:526
#25 0x40e1ba6e in CallStaticVoidMethodV (env=0x8000a020, jclazz=0x1d400015, methodID=0x579d63c0, args=0xbffff97c "\t") at dalvik/vm/Jni.cpp:2111
#26 0x40dfb440 in Check_CallStaticVoidMethodV (env=0x8000a020, clazz=0x1d400015, methodID=0x579d63c0, args=0xbffff97c "\t") at dalvik/vm/CheckJni.cpp:1679
#27 0x402685ba in _JNIEnv::CallStaticVoidMethod (this=0x8000a020, clazz=0x1d400015, methodID=0x579d63c0) at libnativehelper/include/nativehelper/jni.h:793
#28 0x40269e71 in android::AndroidRuntime::start (this=0xbffffa50, className=0x80001208 "com.android.internal.os.ZygoteInit", options=<optimized out>) at frameworks/base/core/jni/AndroidRuntime.cpp:1005
#29 0x80000fd0 in main (argc=4, argv=0xbffffaf8) at frameworks/base/cmds/app_process/app_main.cpp:190
Debugging from the framework APIs, the following is found:
android_view_Surface_getNativeWindow: get surface = 0x802fbbc8, returning a valid ANativeWindow*.
ANativeWindow_setBuffersGeometry: the input parameter is ANativeWindow* = null.
So something that happens between the above two APIs results in the tombstone.
Antutu uses the OGRE render engine in the call stack. We can find the OGRE render engine source code at this link:
It was found that the ANativeWindow* pointer value is changed in void AndroidEGLWindow::create. This results in mWindow being NULL and the tombstone occurring.
On Intel processor-based platforms, pointer values such as ANativeWindow* can be higher than 0x80000000 (on ARM platforms they are lower than 0x80000000), which causes a problem in the unsigned int -> int -> string -> int conversion chain. This problem won’t happen on ARM platforms.
In the Antutu issue, the ANativeWindow* stored in opt->second should be treated as an unsigned int; parseInt instead returns 0, producing the Antutu tombstone.
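The 32-bit sign issue above can be illustrated in Python by emulating Java-style Integer.parseInt semantics; the helper names and the fall-back-to-zero wrapper are illustrative assumptions, not Antutu’s actual code:

```python
def parse_int32(s):
    # emulate Java Integer.parseInt: the value must fit a 32-bit signed int
    v = int(s, 0)
    if not (-0x80000000 <= v <= 0x7FFFFFFF):
        raise ValueError("out of int range")
    return v

def load_window_ptr(s):
    # app-style defensive wrapper that falls back to 0 on failure,
    # which is how a valid x86 pointer can silently become NULL
    try:
        return parse_int32(s)
    except ValueError:
        return 0

# an ARM-range pointer survives the round trip, an x86-range pointer does not
print(load_window_ptr("0x402685ba"))  # below 0x80000000: kept
print(load_window_ptr("0x802fbbc8"))  # above 0x7FFFFFFF: becomes 0
```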
App Issues—Highlights
Application Usage Pre-condition
Error Symptom:
When many applications appear to fail, they haven’t really failed: some usage pre-condition was ignored by the tester or end user and later causes the failure. Some typical pre-conditions are as follows:
SIM card region differences (SIM cards in the U.S., China, and France have different 3G configurations).
Wi-Fi*/3G Internet connection differences. Some apps have specific location or connectivity requirements for Wi-Fi/3G.
Poor Wi-Fi/3G connection environment. A poor connection causes some apps to retry the Internet connection many times, which results in poor power consumption and crashes.
Screen resolution/DPI constraints. Many applications have requirements on screen resolution/DPI to work.
GMS services turned off (Google Play services, Google Services Framework, etc.). Many GMS apps depend on the underlying GMS services to work.
Solution:
Make sure you use apps that meet all of the above pre-conditions.
Failure to install an app
Error Symptom:
Applications downloaded from the Play Store fail to install on the device, or the apk fails to install via adb install.
Solution:
Make sure USB/SD card write works. Make sure the houdini hook in PMS works well.
App has a hard-coded dependence on the ARM abi/arch property, etc.
Error Symptom:
Dalvik cannot resolve the link to the ARM library, or cannot find the native method implementation because the native library failed to be copied onto the device when the app runs.
A major app function check fails, etc.
Solution:
Remove the app’s hard-coded check by changing its smali code, or ask the app ISV to fix it.
App depends on an OEM framework (e.g., Samsung changes its framework)
Error Symptom:
Failure to find some field, method, or class.
Solution:
Change smali code to mask related field/method/class usage.
Copy related framework class into your device.
App depends on a native library that is missing on the Intel processor-based platform
Error Symptom:
UnsatisfiedLinkError exception naming the missing native library.
Solution:
Check the app’s library dependencies and copy the related library onto the device.
App doesn’t have permission
Error Symptom:
no permission
Solution:
Add the appropriate <uses-permission /> element to AndroidManifest.xml
Database structure difference
Error Symptom:
Missing field or type mismatch.
Solution:
Decompile the apk, check the smali code to find the SQL-related string (e.g., create table …), edit the SQL string to fit the database, and repackage the apk.
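Locating the SQL string in the decompiled smali can be automated by scanning for const-string instructions whose payload looks like DDL; an illustrative sketch, not a real smali parser:

```python
import re

def find_sql_strings(smali_text):
    # smali string literals appear as: const-string vX, "..."
    pattern = re.compile(r'const-string(?:/jumbo)? [vp]\d+, "([^"]*)"')
    return [s for s in pattern.findall(smali_text)
            if re.search(r"(?i)\bcreate\s+table\b", s)]

smali = '''
    const-string v1, "create table scores (id integer primary key)"
    const-string v2, "not sql"
'''
print(find_sql_strings(smali))  # ['create table scores (id integer primary key)']
```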
App depends on an ISV feature package, like <uses-library android:name="xxx_feature" /> in AndroidManifest.xml
Error Symptom:
No permission to access some feature when launching the app.
Solution:
Change AndroidManifest.xml to remove these <uses-library> tags, or copy the feature jar package from another mobile platform onto your target device to try.
com.google.android.vending.licensing.LicenseValidator.verify issue related to paid apps
If you see this issue, you can reproduce it using the following steps:
E/AndroidRuntime(27088): at com.google.android.vending.licensing.LicenseValidator.verify(LicenseValidator.java:99)
E/AndroidRuntime(27088): at com.google.android.vending.licensing.LicenseChecker$ResultListener$2.run(LicenseChecker.java:228)
Repro steps:
(a) Install apk
(b) Remove Google account on the device
(c) Launch this app
How to add resources to the apk file
Example: to add one string resource, add the following to values/strings.xml:
<string name="newstring">content</string>
public.xml under the values directory records all resource ids. Find the latest <public type="string" ...> element, then add <public type="string" name="newstring" id="0x7f0700a0" />
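Resource ids such as 0x7f0700a0 pack three fields as 0xPPTTEEEE (package byte, 0x7f for an app; type byte; 16-bit entry). Choosing the next free id for public.xml can be sketched as follows (an illustrative helper, not part of any Android tool):

```python
def next_resource_id(last_id):
    # ids pack as 0xPPTTEEEE: package byte, type byte, 16-bit entry
    package = last_id >> 24
    rtype = (last_id >> 16) & 0xFF
    entry = last_id & 0xFFFF
    return (package << 24) | (rtype << 16) | (entry + 1)

print(hex(next_resource_id(0x7f0700a0)))  # 0x7f0700a1
```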
Then change the smali file to reference the new string resource by its resource id.
Google standard DPI buckets: 320 (XHDPI), 240 (HDPI), 160 (MDPI), and possibly 120 (LDPI)
If your device uses another DPI, many of the applications in the Play Store will be unavailable to it.
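Which standard bucket a non-standard density lands in can be sketched as follows (an illustrative nearest-bucket helper, not Android’s actual density-selection logic):

```python
def nearest_dpi_bucket(dpi):
    # snap an arbitrary density to the closest standard bucket
    buckets = {120: "ldpi", 160: "mdpi", 240: "hdpi", 320: "xhdpi"}
    return buckets[min(buckets, key=lambda b: abs(b - dpi))]

print(nearest_dpi_bucket(233))  # hdpi
```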
Device Feature Filter
The Google Play Store calls the APIs PackageManager.getSystemAvailableFeatures and PackageManager.hasSystemFeature to get all the features on the device.
It filters applications according to the features available on the device.
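The filtering described above amounts to a subset check between the features an app requires (its <uses-feature> entries) and the features the device reports; a minimal sketch with made-up data:

```python
def visible_apps(device_features, apps):
    # an app is listed only if every feature it requires is on the device
    return sorted(name for name, required in apps.items()
                  if required <= device_features)

device = {"android.hardware.wifi", "android.hardware.touchscreen"}
apps = {
    "wechat": {"android.hardware.camera", "android.hardware.wifi"},
    "notepad": {"android.hardware.touchscreen"},
}
print(visible_apps(device, apps))  # ['notepad']
```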
Take the example of an MFLD PR phone running JB 4.1:
If camera-related features are not present in the return values of PackageManager.getSystemAvailableFeatures and hasSystemFeature, the MFLD PR phone won’t find apps that need camera features, such as Smart Compass, WeChat, etc.
If camera-related features are present in the return values of PackageManager.getSystemAvailableFeatures and hasSystemFeature, the MFLD PR phone can find apps that need camera features, such as Smart Compass, WeChat, etc.
Notes:
You need to clear the data and cache of Google Play Services and the Google Play Store for the feature change to take effect on the target device.
Features available on MFLD PR phone are as follows:
This document will guide you through installing the Intel® USB Driver package for Android so you can connect your Windows* machine to your Android™ device containing an Intel Atom processor.
System requirements:
Hardware requirements: an Android mobile device with an Intel Atom Z2460 processor, and a micro-USB/USB cable (the same one you use to charge your device).
Supported host operating systems:
Windows 7 (32/64-bit), Windows Vista (32/64-bit), Windows XP (32-bit only)
Supported Android versions: Android 2.3.7 – Gingerbread (GB), Android 4.0.x – Ice Cream Sandwich (ICS)
Installation
Caution: do not connect your Android device to your computer during the installation.
You will see the following screen. Click "Next". (If the installer detects an older version, it will ask for your permission to uninstall it.)
You will see the following screen. Read and accept the Intel Android driver license agreement.
You will be prompted to select components, as seen on the screen below. Click the "Next" button to continue.
Choose the installation location and click "Install".
The program will proceed to install the Android USB drivers. This may take a few minutes.
At the end of the driver installation, click OK in the window that appears, then click "Finish" to close the installer.
Supported USB profiles
After installing the USB drivers, connect your Android device to your computer using your micro-USB/USB cable.
Below are the USB profiles supported by the Intel USB Driver package for Android:
ADB (Android Debug Bridge): Android's debugging interface, required for re-flashing and debugging. To enable or disable it:
ICS: Settings > Developer options > USB Debugging
GB: Settings > Applications > Development > USB Debugging
MTP (Media Transfer Protocol): the Windows protocol for transferring media files. To enable or disable it:
ICS: Settings > Storage > Click context menu > USB Computer connection > MTP
PTP (Picture Transfer Protocol): the Windows protocol for transferring images from digital cameras to computers. To enable or disable it:
ICS: Settings > Storage > Click context menu > USB Computer connection > PTP
RNDIS: this feature provides a virtual Ethernet link over the phone's network. To enable or disable it:
ICS: Settings > More… > Tethering and portable hotspot > USB tethering
GB: Settings > Wireless & Network > Tethering and portable hotspot > USB tethering
CDC Serial (Modem AT Proxy): this feature provides a link to the modem so AT commands can be issued through a virtual COM port.
If the device is not recognized as an Android device, try the following steps to resolve the problem.
Check Windows Device Manager. Open Windows Device Manager:
You may see USB Mass Storage instead of ADB devices. If so, right-click the mass storage device icon and select "Uninstall", then disconnect your device from the computer and redo the installation.
You may see Android devices with a yellow warning icon. If so, right-click the Android Device icon and select "Uninstall", then disconnect your device from the computer and redo the installation.
Enable USB debugging. Make sure USB debugging is enabled on the device you are trying to connect.
On Android 2.3 (Gingerbread) devices, the option is under Settings > Applications > Development.
On Android 4.0.x (Ice Cream Sandwich) devices, the option is under Settings > Developer Options.
Further Questions & Support: if you have other problems, feel free to mention them in the comments below.
The technology industry is undergoing an amazing time of creativity and change. The world hasn’t seen the likes of this in years, maybe decades, and the pace of change is only accelerating.
Mobile is everything and everywhere. Consumers want the internet and computing capability with them at all times and places. The advent of smartphones, tablets and Ultrabooks means that every device is now thinner, lighter and with longer battery life. New tablets with incredible performance are on the horizon and the marriage of Ultrabooks and tablets can be seen in exciting new 2 in 1 devices offering the best of both worlds with both laptop and tablet capabilities.
Over the years, the Intel Developer Forum (IDF) has emerged as one of the key industry events reflecting and defining where technology is headed. This year's IDF is no different, reflecting the trend toward mobility. The event, Sept. 10 to 12 in San Francisco, offers at least two significant pieces of news to watch.
First, with Intel’s recent leadership transition now complete, the company’s new CEO, Brian Krzanich, and new president, Renee James, are well underway resetting the course of the company – with a clear emphasis on mobile computing leadership.
IDF marks the first major speeches by Brian and Renee in their new roles. They’ll set the tone for the conference – delivering the opening keynote on the morning of September 10. Brian and Renee will discuss the path they have set for the company and how the focus on all things mobile – from the data center to the device – is designed to energize the existing ecosystem of Intel hardware and software developers and attract new developers.
The second big news is the official introduction of Bay Trail, Intel’s first 22nm “system on a chip” (SoC) for mobile devices. Bay Trail is based on the company’s much-lauded Silvermont microarchitecture, and the chip’s low-power/high-performance 3-D transistors are expected to power a wide range of innovative designs.
We think Bay Trail will be a winner in mobile and are excited to introduce it to the world. Designed for both Android and Windows, Bay Trail out-smarts the competition in tablets, 2 in 1s, value laptops and desktops. Don’t take my word for it, though: A recent financial analyst report said that “Bay Trail/Silvermont will have a performance and performance/power advantage over competing ARM-based processors.”
In addition to the CEO keynote and Bay Trail announcement, IDF highlights are expected to include:
A keynote on the future of mobility by Intel anthropologist Genevieve Bell on Thursday, Sept 12.
Keynotes on Wednesday, Sept. 11 from Herman Eul on always on, always connected personal mobility devices, including those powered by Bay Trail; Kirk Skaugen on the innovation happening in mobile computing for both consumers and business; and Doug Fisher discussing Intel’s software and services strategy.
A “mega briefing” for the media from Diane Bryant, general manager of the Data Center and Connected Systems Group, on how mobile devices are putting tremendous pressure on servers and related equipment, and how Intel is responding by re-architecting datacenters.
Overall, Intel is on a roll. In just the past four months…
The Silvermont chip architecture, unveiled in May, is aimed squarely at low-power requirements in market segments from smartphones to datacenters. Industry observer Anand Shimpi said that Silvermont “…is the first mobile architecture where Intel really prioritized smartphones and tablets, and on paper, it looks very good…”
4th gen Intel Core (code-named Haswell), introduced in June, is inspiring dozens of innovative devices including Ultrabooks, 2 in 1s, all-in-ones, laptops, and desktops at a range of prices. Pundits used to say that Intel Architecture fundamentally couldn’t run at low power. 4th gen Intel Core proves that wrong, running on as little as 4.5 watts and, even more impressively, scaling up to power the highest-performing supercomputers and data centers. No other chip architecture does this.
Intel’s CEO is aggressively aiming the company to excel in mobility, including tablets, smartphones and 2 in 1s that are in the market today, and also new device areas, some of which are still on the drawing board.
From phones to the data center, Intel is on the front foot, moving aggressively in mobile markets and beyond. See you at IDF!
When you try to install version 1.1.4 of the Intel® USB Driver for Android on a Windows 8 machine, the installer detects an error condition, stops, and displays the message below.
Once the file is downloaded, go to the compatibility settings. Right-click > Properties > Compatibility tab.
- Choose Windows XP (Service Pack 3) in the Compatibility Mode section and click OK.
- Run the executable as administrator. The system will install the driver.
- Open Eclipse* and connect your Intel phone. You should be able to see the phone.
Notices
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
Any software source code reprinted in this document is furnished under a software license and may only be used or copied in accordance with the terms of that license.
Intel and the Intel logo are trademarks of Intel Corporation in the US and/or other countries.
This article is an introduction to building native Android* applications (developed with the NDK, the Native Development Kit) for Intel® Architecture (IA)-based devices. We will also discuss porting Android NDK applications built for devices with other architectures to IA-based devices. We will walk through two scenarios: one showing the process of building a basic Android* NDK application from start to finish, and the other showing a simple process for porting an existing NDK-based Android application for use on IA-based devices.
Android applications can incorporate native code using the Native Development Kit (NDK) toolset. It lets developers reuse legacy code, program low-level hardware, or differentiate their applications by taking advantage of features that would otherwise not be optimal or possible.
This article is a basic introduction to building NDK-based applications for IA from start to finish, and to simple use cases for porting existing NDK-based applications to IA-based devices. We walk step by step through a simple development scenario to demonstrate the process.
We assume the Android development environment is already installed, including the Android SDK and Android NDK, and that the x86 emulator is configured for testing applications. Refer to the Android Community section of the Intel website for more information. To keep our development environment simple, we will mostly use Linux* command-line tools.
Building an NDK-based Android application for IA-based devices: a walkthrough of a simple app
Suppose we have some legacy code that uses C and assembly language to parse CPUID (see http://en.wikipedia.org/wiki/CPUID* to learn more about CPUID). Our example file cpuid.c implements this parsing (for demonstration purposes only).
We would like to call cpuid_parse from our Android application (for demonstration purposes only; the cpuid_parse function expects a preallocated buffer) and display the output inside the app.
Below is a step-by-step walkthrough of building the Android application from start to finish and using the legacy native code above.
1. Creating a default Android project
The Android SDK provides command-line tools to generate a default project structure for a typical “Hello World” app. We first create a default project and then modify the Java sources to add the JNI calls and native code.
In the screenshot above, we created a directory named labs/lab2 and used the “android” command-line tool to generate the default project. We specified android-15 as the API level and named our application “CPUIdApp” with the package com.example.cpuid.
We then used the “ant” command-line tool to build the project in debug mode and install it with “adb” (or reinstall it if it already exists on the emulator or target). We assume an emulator or device is already attached and that it is the only device listed in the output of the “adb devices” command.
Below is a screenshot of the Android x86 ICS emulator after completing the process above.
Clicking the app shows the app’s default “Hello World” message. We now modify the app to use native code.
2. Invoking native code from Java sources
The default Android project generates Java code for a typical “Hello World” project with the namespace given by the package (e.g., com.example.cpuid). The screenshot below shows the generated source code of the main Java file.
To use C/C++ native code in our Java source file, we first need to declare the JNI call and load the native library, as highlighted in yellow in the screenshot below.
As the declaration shows, the native call returns a Java string that we can use anywhere in our Java code. As the screenshot above shows, we modified the TextView to display the string obtained from our native call; this is highlighted in red in the box.
This is a very simple case of declaring and using native JNI calls in the Java source code of an Android application. Next, we use the “javah” tool to generate the JNI header stubs for the native code, and add or modify the native code to conform to the JNI native headers.
3. Using “javah” to generate JNI stub headers for native code
We now need to modify our native code to comply with the JNI call specification. “javah” helps us automatically generate the appropriate JNI stub headers based on the Java source files. The “javah” tool requires the compiled Java class files to generate the headers, so we use the “ant” tool to quickly generate them, as shown in the screenshot below (“ant debug”).
Use “javah” to generate the JNI header as shown in the screenshot (second yellow highlight). It creates a “jni” directory and the stub header based on the Java class. The screenshot below shows the generated JNI native stub header.
Create the corresponding C source file (“com_example_cpuid_CPUIdApp.c”) for the header generated above.
It calls the native code cpuid_parse and returns the parsed buffer as a JNI string. We are now ready to compile the native code using the x86 NDK toolset.
4. Building the native code with the NDK for x86
Refer to the Android Community section of the Intel website (/es-es/articles/ndk-for-ia) for more information on installing and using the NDK for IA.
The Android NDK toolset uses a build system that requires a make file named “Android.mk” in the project’s “jni” folder to compile the native code. Android.mk specifies all the native C/C++ source files to be compiled, the headers, and the build type (e.g., shared_library).
The native-code make file for our project is “jni/Android.mk”.
This is a simple scenario, with C source files and a directive to build a shared library.
We can now run “ndk-build APP_ABI=x86” to compile our native code and generate the shared library. The Android build system also provides a supplementary make file, “Application.mk”, that we can use to specify additional configuration options. For example, we can list all supported ABIs in Application.mk, and the NDK build will generate native shared libraries for all the architectures.
The screenshot above shows the successful compilation of the native code for x86, with the shared library being generated and installed. We are now ready to rebuild our Android application and install and run it on the x86 emulator or target device.
5. Rebuilding, installing, and running the NDK Android app for IA
We can use “ant debug clean” to remove the old build files and run “ant debug” again to start a full build of the Android project. Use “adb” to reinstall the app on the target device or x86 emulator, as shown in the screenshot below.
[Screenshot: reinstalling the application on the device/emulator with adb]
The next screenshot shows the application icon inside the x86 emulator, along with the result of running the application there.
[Screenshot: application icon and run output inside the x86 emulator]
We have successfully built the NDK-based Android application.
Using the x86 NDK Toolset to Port Existing NDK Applications to IA-Based Devices
Android applications with native code typically have a standard project structure, with a "jni" folder containing the native code and the corresponding Android.mk/Application.mk build files. In the previous section, we saw a simple native-code example along with its Android.mk file.
The Android NDK lets us specify all the target ABIs in Application.mk at once and automatically generate native shared libraries for every target. The Android build system automatically packages all the required native libraries inside the APK, and at install time the Android package manager installs only the appropriate native library for the target architecture.
We can invoke "ndk-build" directly, or specify the target ABIs in Application.mk.
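As a sketch of the multi-ABI setup described above (the exact ABI list depends on the targets you support):

```makefile
# jni/Application.mk -- build native libraries for several ABIs at once
APP_ABI := armeabi-v7a x86    # one shared library is built per listed ABI

# Alternatively, pass the list on the ndk-build command line:
#   ndk-build APP_ABI="armeabi-v7a x86"
```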
When porting an existing Android application with native code that was not originally targeted at x86, modifying the application to work on IA is straightforward in most cases (as discussed above), unless the application uses architecture-specific assembly language or algorithms. There can be other issues, such as memory alignment or the use of platform-specific instructions. See /es-es/articles/ndk-android-application-porting-methodologies for more information.
Summary
This article discussed creating and porting NDK-based Android applications for IA. We walked through the process of creating an NDK-based application targeting IA step by step, from start to finish. We also discussed the straightforward process, made possible by the NDK tools, of porting existing NDK-based Android applications to IA.
Notices
Intel is a registered trademark of Intel Corporation in the U.S. and other countries.
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT, NOR DOES IT CREATE ANY BINDING OBLIGATION. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO THE SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.
A "Mission Critical Application" is any application in which failure of the Intel product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS, DAMAGES, AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTORS WERE NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. This information is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications before placing your product order.
Copies of documents that have an order number and are referenced in this document, or in other Intel literature, may be obtained by calling 1-800-548-4725 or at: http://www.intel.com/design/literature.htm * Other names and brands may be claimed as the property of others.