Commit log
Change-Id: I1ff304d540c75c4d27c5fddeb7315e177d366fb3
* commit 'f39c1088f08e194a212f55a246bac85036f62a42':
"debug" in those modules is discouraged
Because we need those modules only for emulator builds.
If you mark them as "debug", you'll get them even if you are doing a
real device userdebug/eng build.
Instead, we should add their module names to the emulator product config
in build/target/product/emulator.mk.
Bug: 8276818
Change-Id: I58988ce49804583b06e7d93380c44ba800448216
Change-Id: I1008d779a9d333dc8968812d22e387b804e3c570
Missing project ready to go.
This reverts commit 9642da51023d5dadaa3b544f100e84dfdbcabea3
Change-Id: I08a43e9cdcb06bd6b74fd08809bbaf801c2eb44f
More dependent projects than I realized
This reverts commit a84522d2f373aeb859490ec4015f548956a4818d
Change-Id: Ifb50c94a2a5dfa91573f07695d8f3bfcadc79742
This is needed for passing buffers to the camera HAL for reprocessing.
Bug: 6243944
Change-Id: Ibf8d15aead571ddb3b62674cf7afe0d508ca24e7
Stop using CAMERA2_HAL_PIXEL_FORMAT_OPAQUE.
Bug: 6243944
Change-Id: I96ea30228b126b4eed560a760269cb50bbbb62f7
Allow RAW_SENSOR to be used for any combination of CPU read/write and
camera read/write, instead of only camera->cpu or cpu->camera.
Change-Id: I032b9531e9069a202c1a3767b77975c808703285
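
A minimal sketch of the relaxed check this implies, assuming
gralloc_alloc() validates RAW_SENSOR usage bits; the helper name is
hypothetical, and the GRALLOC_USAGE_* flags come from hardware/gralloc.h:

    #include <hardware/gralloc.h>

    // Hypothetical helper: RAW_SENSOR buffers may now combine CPU (SW)
    // and camera usage bits freely, instead of exactly camera->cpu or
    // cpu->camera.
    static bool rawSensorUsageValid(int usage) {
        const int kAllowed = GRALLOC_USAGE_SW_READ_MASK |
                             GRALLOC_USAGE_SW_WRITE_MASK |
                             GRALLOC_USAGE_HW_CAMERA_READ |
                             GRALLOC_USAGE_HW_CAMERA_WRITE;
        // Any subset of the allowed read/write bits is accepted.
        return (usage & ~kAllowed) == 0;
    }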
Bug: 6243944
Change-Id: I5f416ab0ae15143df422c0f79d91841984b4fabe
Have gralloc_alloc select the appropriate pixel format for the given
endpoints, triggered by the new GRALLOC_EMULATOR_PIXEL_FORMAT_AUTO
format.
Currently supports camera->screen and camera->video encoder.
Bug: 6243944
Change-Id: Ib1bf8da8d9184ac99e7f50aad09212c146c32809
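
A hedged sketch of how the selection could work inside gralloc_alloc();
the helper name and the concrete formats chosen here (RGBA_8888 for the
screen path, YV12 for the encoder path) are illustrative assumptions,
not taken from this change:

    #include <errno.h>
    #include <hardware/gralloc.h>
    #include <system/graphics.h>

    // Illustrative only: map GRALLOC_EMULATOR_PIXEL_FORMAT_AUTO to a
    // concrete format based on the producer/consumer usage bits.
    static int resolveAutoFormat(int usage) {
        if (usage & GRALLOC_USAGE_HW_CAMERA_WRITE) {
            if (usage & GRALLOC_USAGE_HW_VIDEO_ENCODER)
                return HAL_PIXEL_FORMAT_YV12;       // camera -> video encoder
            if (usage & GRALLOC_USAGE_HW_TEXTURE)
                return HAL_PIXEL_FORMAT_RGBA_8888;  // camera -> screen
        }
        return -EINVAL;  // endpoint combination not supported
    }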
This is needed for Camera HAL2 video recording.
Bug: 6243944
Change-Id: I47a3e65117881612fb95068a80f811cc8378fbc6
A few ANativeWindow methods were updated to take a Sync HAL file
descriptor, and the existing methods were renamed with a _DEPRECATED
suffix. Since the emulator graphics acceleration doesn't yet support
the sync HAL, this change continues calling the deprecated functions
via their new names.
Change-Id: I5b1760811fafb6723ede887e32e63f94cbaeffe5
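
For illustration, the calling pattern this describes, using the
*_DEPRECATED entry points from system/window.h; the wrapper function
itself is hypothetical:

    #include <system/window.h>

    // Hypothetical wrapper: keep the old fence-less semantics by calling
    // the renamed legacy hook until the emulator supports the sync HAL.
    static int queueBufferNoSync(ANativeWindow* win,
                                 ANativeWindowBuffer* buf) {
        // queueBuffer() now takes a sync fd; the _DEPRECATED variant
        // keeps the original signature.
        return win->queueBuffer_DEPRECATED(win, buf);
    }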
Bug: 6243944
Change-Id: I2864bc59be9df8741639a291c71e2f65dde5bc0b
Because of the way the SDK and Android system images are branched,
host code that goes into the SDK tools can't live in the same
repository as code that goes into the system image. This change keeps
the emugl host code in sdk.git/emulator/opengl while moving the emugl
system code to development.git/tools/emulator/opengl.
A few changes were made beyond simply cloning the directories:
(a) Makefiles were modified to only build the relevant components. Not
doing so would break the build due to having multiple rule
definitions.
(b) Protocol spec files were moved from the guest encoder directories
to the host decoder directories. The decoder must support older
versions of the protocol, but not newer versions, so it makes
sense to keep the latest version of the protocol spec with the
decoder.
(c) Along with that, the encoder is now built from checked-in
generated encoder source rather than directly from the protocol
spec. The generated code must be updated manually. This makes it
possible to freeze the system encoder version without freezing the
host decoder version, and also makes it very obvious when a
protocol change is happening that will require special
backwards-compatibility support in the decoder/renderer.
(d) Host-only and system-only code were removed from the repositories
where they aren't used.
(e) README and DESIGN documents were updated to reflect this split.
No actual source code was changed due to the above.
Change-Id: I2c936101ea0405b372750d36ba0f01e84d719c43
The emulator GLES support has two interfaces: a host shared library
interface used by QEMU, and a protocol between the platform and the
host. The host library interface is not versioned; QEMU and the GLES
renderer must match. The protocol, on the other hand, must be backwards
compatible: a new GLES renderer must support an older platform image.
Thus for branching purposes it makes more sense to put the GLES
renderer in sdk.git, which is branched along with qemu.git for SDK
releases. Platform images will be built against the protocol version
in the platform branch of sdk.git.
Change-Id: Ie73fce12815c9740e27d0f56caa53c6ceb3d30cc
Also move some atree copies to sdk.git, where they belong.
Change-Id: Iab62343806917f24f47d15b9dea75e44422d8764
* commit '71aa2fcac1cd1e5d59c210b5dd332ca4aefba530':
EmuGL: Deliver every frame to a callback
To enable multi-touch on a tethered device, allow a callback to be
registered with the OpenGL renderer. On every frame, the framebuffer
is read into system memory and provided to the callback, so it can be
mirrored to the device.
This change is co-dependent on Idae3b026d52ed8dd666cbcdc3f3af80175c90ad3
in external/qemu.
Change-Id: I03c49bc55ed9e66ffb59462333181f77e7e46035
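
A sketch of what the per-frame hook could look like; the OnPostFn shape
below is modeled on the description above and should be treated as an
assumption, not a confirmed signature:

    // Assumed callback type: invoked once per frame, after the renderer
    // has read the framebuffer back into system memory.
    typedef void (*OnPostFn)(void* context, int width, int height,
                             int ydir, int format, int type,
                             unsigned char* pixels);

    // On each frame the renderer would do, roughly:
    //   glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buf);
    //   onPost(context, width, height, -1, GL_RGBA, GL_UNSIGNED_BYTE, buf);
    // letting the registered callback mirror the image to the device.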
* commit '76780669f9867587693563358ccdc903e9cdcbba':
Added rules to build 64-bit libraries for 64-bit emulator
All ten libraries can now be built as 64-bit binaries named "lib64*" (*)
in addition to the original 32-bit form named "lib*".
Also, dlopen "lib64*.so" when running in 64-bit.
(*) e.g., on Ubuntu, all can be built with the following command:
make out/host/linux-x86/lib/lib64OpenglRender.so \
out/host/linux-x86/lib/lib64EGL_translator.so \
out/host/linux-x86/lib/lib64GLES_CM_translator.so \
out/host/linux-x86/lib/lib64GLES_V2_translator.so
Rules to build the static libraries lib64log.a, lib64cutils.a, and
lib64utils.a on which they depend were added in other CLs.
Change-Id: I3afb64de6dda1d55dbd1b4443d2dbc78a683b19f
Change-Id: I41f097b5b96c4d000b1748b9e0411497d323556a
1. "emugen" generates four *dec.cpp files containing code like this
to decode offset to pointer in stream
tmp = *(T *)(ptr + 8 + 4 + 4 + 4 + *(size_t *)(ptr +8 + 4 + 4));
If *dec.cpp are compiled in 64-bit, size_t is 8-byte and dereferencing of
it is likley to get wild offset for dereferencing of *(T *) to crash the
code. Solution is to define tsize_t for "target size_t" instead
of using host size_t.
2. Cast pointer to "uintptr_t" instead of "unsigned int" for 2nd param of
ShareGroup::getGlobalName(NamedObjectType, ObjectLocalName/*64bit*/).
3. Instance of EGLSurface, EGLContext and EGLImageKHR are used as 32-bit
key for std::map< unsigned int, * > SurfacesHndlMap, ContextsHndlMap,
and ImagesHndlMap, respectively. Cast pointer to uintptr_t and assert
upper 32-bit is zero before passing to map::find().
4. Instance of GLeglImageOES is used to eglAttachEGLImage() which expect
"unsigned int". Cast it to uintptr_t and assert upper 32-bit is zero.
5. The 5th param to GLEScontext::setPointer is GLvoid* but contains 32-bit
offset to vbo if bufferName exists. Cast it to uintptr_t and assert
upper 32-bit is zero.
6. Use %zu instead of %d to print size_t
7. Cast pointer to (uintptr_t) in many other places
Change-Id: Iba6e5bda08c43376db5b011e9d781481ee1f5a12
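
To make item 1 concrete, a minimal sketch assuming the wire protocol
packs size/offset fields as 32-bit values:

    #include <stdint.h>

    // The 32-bit guest encodes size fields as 4 bytes, so the decoder
    // must not read them with the host's size_t (8 bytes on a 64-bit
    // host).
    typedef uint32_t tsize_t;  // "target size_t"

    // before (wrong on a 64-bit host; reads 8 bytes, yielding a wild
    // offset):
    //   tmp = *(T *)(ptr + 8 + 4 + 4 + 4 + *(size_t *)(ptr + 8 + 4 + 4));
    // after (always reads the 4 bytes the guest actually wrote):
    //   tmp = *(T *)(ptr + 8 + 4 + 4 + 4 + *(tsize_t *)(ptr + 8 + 4 + 4));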
* commit '9322c5cb2524c4f35408768ee3d1b8030f0360f9':
Work around a y-invert bug on Macs w/ Intel GPU
On Macs running OS X 10.6 and 10.7 with Intel HD Graphics 3000, some
screens or parts of the screen are displayed upside down. The exact
conditions/sequence that trigger this aren't known yet; I haven't
been able to reproduce it in a standalone test. This also means I
don't know whether it is a driver bug, or a bug in the OpenglRender or
Translator code that just happens to work elsewhere.
Thanks to zhiyuan.li@intel.com for a patch this change is based on.
Change-Id: I04823773818d3b587a6951be48e70b03804b33d0
* commit '767d08948790527e9df951a752703938ff517d30':
Delete dead code.
Change-Id: I5b87fac4e2140a903221a1f68b16fa6a96e5effc
* commit '7ef79e49c7e8f1ecf19a92114f41de39d102a3e8':
EmuGL: don't [de]queue buffers in eglMakeCurrent
Whenever a surface was attached to a context, it was dequeuing a new
buffer, and enqueuing it when detached. This has the effect of doing a
SwapBuffers on every detach/attach cycle, which is just wrong and
occasionally caused visible glitches (e.g. animations going backwards
for one frame). It also broke some SurfaceTexture tests which
(validly) depend on specific buffer production/consumption counts.
Change-Id: Ibd4761e8842871b79fd9edf52272900193cb672d
* commit 'cbc7300cb209a4fd3b250546c72c35b7bb0aa8a1':
EmuGL: enable SurfaceTexture async mode
Pass the swap interval from eglSwapInterval to the native window so it
can enable/disable SurfaceTexture's async mode. Fixes the deadlock in
SurfaceTextureGLToGLTest.EglDestroySurfaceUnrefsBuffers.
Change-Id: I19bf69247341f5617223722df63d6c7f8cf389c6
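
A hedged sketch of the forwarding; the surface-to-window lookup is
hypothetical, but setSwapInterval() is a real ANativeWindow hook:

    #include <EGL/egl.h>
    #include <system/window.h>

    // Hypothetical lookup of the ANativeWindow backing the current draw
    // surface.
    extern ANativeWindow* currentDrawableWindow(EGLDisplay dpy);

    EGLBoolean eglSwapInterval(EGLDisplay dpy, EGLint interval) {
        ANativeWindow* win = currentDrawableWindow(dpy);
        if (win)
            win->setSwapInterval(win, interval);  // 0 => async SurfaceTexture
        return EGL_TRUE;
    }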
ics-mr1
* commit '8bd39ae17caac6d275d785ce9420d7e643f15eb9':
EmuGL: GLESv2 support for OES_EGL_image_external
Change-Id: I8911328d5dcccdf4731bd2d8fd953c12fdec5f1b
* commit '00e61338b8374de090e81537047846ca06f88280':
EmuGL: refinements to GLESv1 image_external
* EGLImageTargetRenderbufferStorageOES was incorrectly accepting
TEXTURE_EXTERNAL_OES as a target. Revert that; the host GL will
correctly reject it with INVALID_ENUM.
* Handle the REQUIRED_TEXTURE_IMAGE_UNITS_OES texparameter query.
* Validate texture parameters set on TEXTURE_EXTERNAL textures;
otherwise invalid parameters would work on the emulator but not on a
real device.
Change-Id: I49a088608d58a9822f33e5916bd354eee3709127
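
A sketch of the validation described in the last item, following the
OES_EGL_image_external spec (the minification filter must be NEAREST or
LINEAR, and the wrap modes must be CLAMP_TO_EDGE); the helper name is
made up:

    #include <GLES/gl.h>

    // Hypothetical helper: reject parameter values the extension
    // disallows for TEXTURE_EXTERNAL_OES textures, matching real-device
    // behavior.
    static bool externalTexParamValid(GLenum pname, GLint param) {
        switch (pname) {
        case GL_TEXTURE_MIN_FILTER:
            return param == GL_NEAREST || param == GL_LINEAR;
        case GL_TEXTURE_WRAP_S:
        case GL_TEXTURE_WRAP_T:
            return param == GL_CLAMP_TO_EDGE;
        default:
            return true;  // other parameters are validated elsewhere
        }
    }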
* commit 'ac018fe3f65e18083a2dd317f73e4139bfa5fee6':
EmuGL: refcount ColorBuffers on the host
The gralloc API assumes system-wide reference counting of gralloc
buffers. The host-GL accelerated gralloc maps buffers to host-side
ColorBuffer objects, but was destroying them unconditionally in
gralloc_free(), ignoring any additional references from
gralloc_register_buffer().
This affected the SurfaceTexture gralloc buffers used by the
Browser/WebView. For some reason these buffers are actually allocated
by SurfaceFlinger and passed back to the WebView through Binder. But
since SurfaceFlinger doesn't actually need the buffer for anything,
sometime after the WebView has called gralloc_register_buffer()
SurfaceFlinger calls gralloc_free() on it. This caused the host
ColorBuffer to be destroyed long before the WebView was done using it.
Change-Id: I33dbee887a48a6907041cf19e9f38a1f6c983eff
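
A minimal sketch of the refcounting this implies, with illustrative
names: both gralloc_alloc() and gralloc_register_buffer() take a
reference on the host ColorBuffer, which is destroyed only when the
count drops to zero:

    #include <stdint.h>
    #include <map>

    // Illustrative host-side bookkeeping, not the actual emugl code.
    static std::map<uint32_t, int> sColorBufferRefs;

    void openColorBuffer(uint32_t handle) {   // alloc / register_buffer
        ++sColorBufferRefs[handle];
    }

    void closeColorBuffer(uint32_t handle) {  // free / unregister_buffer
        if (--sColorBufferRefs[handle] == 0) {
            sColorBufferRefs.erase(handle);
            // ...destroy the host GL resources for this ColorBuffer...
        }
    }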
* commit '91d4e8e195592dbc812441597118452f887ea07d':
GLESv2 translator: don't delete EGLImage textures
Copy changes faaf1553cfa39c23ceb198ba7edbd46ff3a11f7a and
f37a7ed6c5c609a3afc33f81bf50893362917ae6 from the GLESv1 translator to
the GLESv2 translator. After this, both translators use the same logic
for glEGLImageTargetTexture2DOES().
Change-Id: I0a95bf2301df7b7428abc593f38170edf4cbda30
* commit '0e981c83041878e6a05b0a996879160fd0f320cf':
EmuGL: Fix heap corruption
An off-by-two bug when removing textures from the tracking array could
overwrite malloc's memory-chunk data structure, usually resulting in a
heap corruption abort on a later malloc/realloc/free.
Bug: 5951738
Change-Id: I11056bb62883373c2a3403f53899347ff8cdabf2
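
The log doesn't show the actual code, but the general shape of such a
bug when compacting a tracking array is:

    #include <string.h>

    // Illustration only: remove element i from an array of n elements.
    // The tail copy must move exactly (n - i - 1) elements; an off-by-two
    // count like (n - i + 1) writes two elements past the allocation and
    // corrupts the heap allocator's chunk metadata.
    static void removeAt(unsigned* arr, size_t n, size_t i) {
        memmove(&arr[i], &arr[i + 1], (n - i - 1) * sizeof(arr[0]));
    }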
* commit '4f66a14d5311ee94d5a498c8e0049f6b95d4d0d6':
EmuGL: handle NULL data in glBufferData
The data pointer argument to glBufferData can be NULL; this
[re]allocates the buffer while leaving the contents undefined.
Bug: 5833436
Change-Id: Ia1ddf62e2cd2c59d3d631e01d23d7c557ca5a52e
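
A minimal sketch of the required behavior; the struct and member names
are placeholders, not the translator's real ones:

    #include <cstddef>
    #include <cstring>
    #include <vector>

    // Placeholder buffer store: glBufferData always (re)allocates, but
    // only copies when the caller actually supplied data.
    struct BufferStore {
        std::vector<unsigned char> bytes;
        void setData(std::size_t size, const void* data) {
            bytes.assign(size, 0);  // GL leaves contents undefined; zeroed here
            if (data)
                std::memcpy(bytes.data(), data, size);
        }
    };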
* commit 'f0baef2fed555e87a0910e5aab6b8b763487b350':
EmuGL: misc small cleanups
* Disable verbose debug spam.
* Add missing GL enum to utility function. The default case was
returning the correct size, so this doesn't fix any bugs, just
removes some logcat spam.
* Comment and whitespace corrections.
Change-Id: I83fb8644331ae1072d6a8dae9c041da92073089f
* commit '409c49c526508b5fa36f8bc6edc1fc70cba5a3e1':
EmuGL: fix GL view position in window on OS X
The code that creates the GL-accelerated screen view wasn't converting
the upper-left-relative coordinates used within the emulator to the
lower-left coordinates used by the Cocoa APIs on OS X. Since most
skins have the screen view centered vertically, this often just
happened to work.
Bug: 5782118
Change-Id: I2f96ee181e850df5676d10a82d86c94421149b40
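
The conversion itself is a one-liner; a sketch with illustrative
variable names (Cocoa's origin is the lower-left corner of the parent
view, the emulator skin's is the upper-left):

    // Illustrative conversion from the emulator's top-left-origin y to
    // Cocoa's bottom-left origin: parentHeight is the height of the
    // containing view, height the height of the GL subview. The subview
    // is then positioned at (x, toCocoaY(...)) in Cocoa coordinates.
    static int toCocoaY(int y, int height, int parentHeight) {
        return parentHeight - (y + height);
    }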
Ensure the dynamic library gets linked in with SDL
to fix compilation errors.
Change-Id: I32e6929088eaf73d707e89d10392c658b58ec465
* commit 'b0a30e43889415a9a40b9519392ad3be295b9465':
EmuGL: remove broken EGL buffer refcounting