The attachInfo from ViewRootImpl is not reliable
under DisplayArea manipulation or for windowless windows.
To fix this, we use the transform matrix of
InputWindowHandle, which can transform the bounds
from window coordinates to screen coordinates. We also
transform the bounds to logical display coordinates
with the associated display matrix.
We also record the magnification spec of the window,
which lets us compute the bounds before magnification. We use
this value to decide the 'visibleToUser' property.
Bug: 200797785
Test: atest android.accessibilityservice.cts WindowInfoTest
atest com.android.server.accessibility
Change-Id: I0917b04fe8b027fb2bd932a6f0604ba1449ebc66

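For illustration, a minimal sketch of the two-step mapping this commit describes, using android.graphics.Matrix; the helper and both matrix parameters are stand-ins for state the window manager would supply, not actual framework API:

```java
import android.graphics.Matrix;
import android.graphics.Rect;
import android.graphics.RectF;

final class BoundsMapper {
    /**
     * Maps window-local bounds into screen coordinates using the window's
     * transform matrix, then into logical display coordinates using the
     * display matrix. Both matrices are assumed to be supplied by the
     * caller (e.g. derived from InputWindowHandle and the display state).
     */
    static Rect toLogicalDisplayBounds(Rect inWindow, Matrix windowToScreen,
            Matrix displayMatrix) {
        RectF f = new RectF(inWindow);
        windowToScreen.mapRect(f);   // window coordinates -> screen coordinates
        displayMatrix.mapRect(f);    // screen -> logical display coordinates
        Rect out = new Rect();
        f.round(out);
        return out;
    }
}
```
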
Bug: 185399833
Test: atest android.view.WindowInfoTest
Change-Id: I416f0330966e4a1880a2b6bdd6f24e6077cbe1cc

To support non-rectangular visible areas for windows, add new
RegionInScreen APIs to A11yWindowInfo to represent the actual
interactable area of a window.
Bug: 132146558
Test: a11y CTS & unit tests
Change-Id: I86bb6bc8c567e09f01a3f853a3cffd896ce934c8

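Assuming the new API surfaces as AccessibilityWindowInfo#getRegionInScreen (as it does in current SDKs), a minimal sketch of how a service reads the possibly non-rectangular region; the helper itself is illustrative:

```java
import android.accessibilityservice.AccessibilityService;
import android.graphics.Region;
import android.util.Log;
import android.view.accessibility.AccessibilityWindowInfo;

final class WindowRegionExample {
    /** Logs the interactable region of each window; unlike the older
     *  getBoundsInScreen(), the region need not be a single rectangle. */
    static void dumpWindowRegions(AccessibilityService service) {
        for (AccessibilityWindowInfo window : service.getWindows()) {
            Region region = new Region();
            window.getRegionInScreen(region); // fills the actual visible region
            Log.d("WindowRegion", "window " + window.getId() + " region=" + region);
        }
    }
}
```
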
To support multi-display, we need to retrieve the display id from
AccessibilityWindowInfo.
Test: atest ctsAccessibilityWindoInfoTest
Bug: 132851274
Change-Id: Ia59bd83e223ecf5c9af7e4b0a52150e0353192c7

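Assuming the API lands as AccessibilityWindowInfo#getDisplayId (as in current SDKs), a small sketch of filtering windows by display; the helper is illustrative:

```java
import android.accessibilityservice.AccessibilityService;
import android.view.accessibility.AccessibilityWindowInfo;
import java.util.ArrayList;
import java.util.List;

final class DisplayIdExample {
    /** Returns the service's windows that live on the given display. */
    static List<AccessibilityWindowInfo> windowsOnDisplay(
            AccessibilityService service, int displayId) {
        List<AccessibilityWindowInfo> result = new ArrayList<>();
        for (AccessibilityWindowInfo window : service.getWindows()) {
            if (window.getDisplayId() == displayId) {
                result.add(window);
            }
        }
        return result;
    }
}
```
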
If they were null, then the Parcelable would fail to work.
Bug: 126726802
Test: manual
Change-Id: I7929ffa2f20e5de1c8e68e8263cca99496e9d014
Exempt-From-Owner-Approval: Trivial API annotations

Some accessibility actions (e.g., click) aren't currently counted as
touches. Hence, windows that monitor touches outside of
themselves aren't notified when an accessibility action
takes place on another window.
WindowInfo currently doesn't record whether a window monitors touches
outside itself. To find this kind of window, add a field
for it.
Bug: 76228634
Bug: 62725890
Test: check that dialogs (e.g., the volume panel or a11y menu) dismiss
after performing the a11y click action.
Test: atest CtsAccessibilityServiceTestCases
Test: atest CtsAccessibilityTestCases
Test: atest WindowStateTests
Change-Id: I4fd84848b8a445e6df22251d68449ae8b502c601

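For context, a sketch of the kind of window this field identifies: one that opts into FLAG_WATCH_OUTSIDE_TOUCH and dismisses itself on ACTION_OUTSIDE. The dialog subclass is illustrative:

```java
import android.app.Dialog;
import android.content.Context;
import android.os.Bundle;
import android.view.MotionEvent;
import android.view.WindowManager;

/** A dialog that dismisses itself when a touch lands outside its window --
 *  the kind of window the new WindowInfo field lets the system find. */
class DismissOnOutsideTouchDialog extends Dialog {
    DismissOnOutsideTouchDialog(Context context) {
        super(context);
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Ask to be told about touches that start outside this window.
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_WATCH_OUTSIDE_TOUCH);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_OUTSIDE) {
            dismiss(); // previously not triggered by an a11y click action
            return true;
        }
        return super.onTouchEvent(event);
    }
}
```
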
Adding titles for the status bar, nav bar, and global actions dialog.
Also removing some extra code from the global actions dialog
that populated window state changes. Apps in general don't
need this extra information, so we don't need to maintain
it in SysUI either.
While verifying the fix, I noticed that all windows were
considered anchored because of a mismatch between long and
int. Fixing that too.
Bug: 73131182
Test: With the testback a11y service, verified that these
titles do indeed appear in the window information provided
to accessibility services. Also noted that windows no
longer report themselves as anchored.
Change-Id: Ie09fbb88250b3c9663d6c28001e0ce9f70c67954

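For reference, a small sketch of how a service observes these titles through AccessibilityWindowInfo#getTitle (a real API in current SDKs); the logging helper is illustrative:

```java
import android.accessibilityservice.AccessibilityService;
import android.util.Log;
import android.view.accessibility.AccessibilityWindowInfo;

final class WindowTitleExample {
    /** Logs the title the system reports for each window, including
     *  system windows such as the status bar and nav bar. */
    static void logWindowTitles(AccessibilityService service) {
        for (AccessibilityWindowInfo window : service.getWindows()) {
            Log.d("WindowTitles", "type=" + window.getType()
                    + " title=" + window.getTitle());
        }
    }
}
```
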
Bug: 62344706
Test: Now able to bring up the keyboard in split-screen mode.
Also a11y CTS and unit tests pass.
Change-Id: Ic4340425680c89e8fc5e586aa1d363b01fd69763

Exposing actions from the PIP InputConsumer to accessibility,
stripping all actions from a covered PIP app, and adding the
InputConsumer's actions to the PIP app's root view.
We were also using an "undefined" accessibility ID to mean
three different things: a root view, a host view of a virtual
view hierarchy, and a truly undefined view. I've introduced
new values for the cases where the id can be defined.
Also gathering all window IDs into one place to reduce the
chance of collisions.
Bug: 34773134
Test: In progress. Current CTS passes.
Change-Id: I97269741a292cf406272bf02359c76c396f84640

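A hypothetical sketch of the sentinel split described above: one overloaded "undefined" value becomes three distinct values that can be told apart. The constant names and values are illustrative, not the actual framework constants:

```java
/** Illustrative only: distinct sentinels instead of a single -1. */
final class ViewIds {
    static final int UNDEFINED_ID = -1; // truly unknown view
    static final int ROOT_ID = -2;      // the root view of a window
    static final int HOST_ID = -3;      // host view of a virtual view hierarchy

    static String describe(int id) {
        switch (id) {
            case ROOT_ID: return "root view";
            case HOST_ID: return "virtual hierarchy host";
            case UNDEFINED_ID: return "undefined";
            default: return "view " + id;
        }
    }
}
```
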
Plumbing through window titles to support multi-window
accessibility.
Adding the ability to determine the anchor of a pop-up window so the
pop-up can be traversed as part of its anchor.
Bug: 27687627
Bug: 8449376
Change-Id: I59e98a29fb90029407a26de5bf3d900fed5dd627

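A small sketch of reading the anchor relationship via AccessibilityWindowInfo#getAnchor (a real API in current SDKs); the helper is illustrative:

```java
import android.util.Log;
import android.view.accessibility.AccessibilityNodeInfo;
import android.view.accessibility.AccessibilityWindowInfo;

final class AnchorExample {
    /** If a pop-up window reports an anchor node, a service can traverse the
     *  pop-up's content as logically following that node. */
    static void describeAnchor(AccessibilityWindowInfo popup) {
        AccessibilityNodeInfo anchor = popup.getAnchor();
        if (anchor != null) {
            Log.d("Anchor", popup.getTitle() + " is anchored to "
                    + anchor.getViewIdResourceName());
        } else {
            Log.d("Anchor", popup.getTitle() + " is not anchored");
        }
    }
}
```
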
1. The APIs for introspecting interactive windows were reporting only
the touchable windows but were missing the focused window. The user
can interact with the latter by typing, hence it should always be
reported. This was also breaking backwards compatibility: if the
focused window is covered by a modal one, the focused window was not
reported, which put the active window in a bad state, as the latter
is either the focused window or the one the user is touching.
2. Window change events are too frequent, as things change a lot
during window transitions. Now we throttle the windows-changed events
at the standard recurring accessibility event interval.
3. Fixed a wrong flag comparison and removed some unneeded code.
bug:15434666
bug:15432989
Change-Id: I825b33067e8cbf26396a4d38642bde4907b6427a

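A generic sketch of the throttling pattern point 2 describes: a burst of window-change notifications collapses into at most one dispatch per interval. The class and the 100ms value are illustrative stand-ins for the framework's recurring accessibility event interval:

```java
import android.os.Handler;
import android.os.Looper;

final class WindowsChangedThrottle {
    private static final long INTERVAL_MILLIS = 100; // illustrative interval
    private final Handler handler = new Handler(Looper.getMainLooper());
    private final Runnable dispatch;
    private boolean scheduled;

    WindowsChangedThrottle(Runnable dispatchWindowsChanged) {
        this.dispatch = dispatchWindowsChanged;
    }

    /** Called on every underlying window change; dispatches at most once
     *  per interval, no matter how many changes arrive in a burst. */
    void onWindowsChanged() {
        if (!scheduled) {
            scheduled = true;
            handler.postDelayed(() -> {
                scheduled = false;
                dispatch.run();
            }, INTERVAL_MILLIS);
        }
    }
}
```
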
1. The old introspection model allowed querying only the active window,
which is the one the user is touching, or the focused one if no window
is touched. This was limiting: auto-completion drop-downs were not
inspectable; there was no way to know when the IME toggles;
non-focusable windows were not inspectable when the user taps them,
since by the time a screen reader starts introspecting, the user's
finger is up; accessibility focus was limited to only one window, so
the user couldn't use gestures to visit the whole UI; and other things
I can't remember right now.
The new APIs allow getting all interactive windows, i.e. the ones that
a sighted user can interact with. This prevents an accessibility
service from interacting with content a sighted user cannot. The list
of windows can be obtained from an accessibility service, or the host
window from an accessibility node info. Introspecting windows obeys
the same rules as introspecting nodes, i.e. the service has to declare
this capability in its manifest.
When some windows change, accessibility services receive a new type
of event. Initially the set of reported window types is very limited.
We provide the bounds in screen, the layer, and some other properties,
which are enough for a client to determine the spatial and
hierarchical relationship of the windows.
2. Update the documentation in AccessibilityService for the newer
event types.
3. LongArray was not removing elements properly.
4. Composite accessibility node ids were not properly constructed:
they are composed of two ints, each taking 32 bits, but the undefined
value for each was -1, so composing a 64-bit long from -1, -1
prevented getting these values back when unpacking.
5. Some apps were generating inconsistent AccessibilityNodeInfo trees.
Added a check that enforces such trees to be well formed on dev builds.
6. Removed unnecessary code for piping the touch exploration state to
the policy, as it should just use the AccessibilityManager from context.
7. When a view's visibility changed, it was not firing an event to
notify clients that it disappeared/appeared. Also, ViewGroup was
sending accessibility events for changes only if the view is important
for accessibility, but this is wrong, as there may be a service that
wants all nodes, and hence events from them. The accessibility manager
service takes care of delivering events from
not-important-for-accessibility nodes only to services that want them.
8. Several places were asking for prefetching of sibling but not
predecessor nodes, which resulted in prefetching of unconnected subtrees.
9. The local AccessibilityManager implementation relied on the backing
service being ready when it is created, but it can be fetched from a
context before that. If that happened, the local manager was in a
broken state forever. Now it is more robust and starts working
properly once the backing service is up. Several places were also
lacking locking.
bug:13331285
Change-Id: Ie51166d4875d5f3def8d29d77973da4b9251f5c8

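A sketch of the composite-id scheme point 4 refers to: the high 32 bits hold the view id, the low 32 bits the virtual descendant id. It also shows why masking matters: OR-ing a negative int such as the old -1 UNDEFINED value without the mask would sign-extend and overwrite the high 32 bits. Names are illustrative:

```java
final class NodeIds {
    /** Packs two 32-bit ids into one 64-bit composite id. */
    static long makeNodeId(int accessibilityViewId, int virtualDescendantId) {
        // The mask is essential: without it, a negative low int (e.g. -1)
        // sign-extends to 64 bits and clobbers the high word.
        return ((long) accessibilityViewId << 32)
                | (virtualDescendantId & 0xFFFFFFFFL);
    }

    static int getAccessibilityViewId(long nodeId) {
        return (int) (nodeId >>> 32); // unsigned shift keeps the high word
    }

    static int getVirtualDescendantId(long nodeId) {
        return (int) nodeId; // truncates to the low 32 bits
    }
}
```
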
1. The screen magnification feature was implemented entirely as a part of the accessibility
manager. To achieve that, the window manager had to implement a bunch of hooks for an
external client to observe its internal state. This was problematic since it dilutes
the window manager interface and allows code that is deeply coupled with the window
manager to reside outside of it. Also, the observer callbacks were IPCs, which cannot
be called with the window manager's lock held. To avoid that, the window manager had
to post messages requesting notification of interested parties, which makes the code
consuming the callbacks run asynchronously from the window manager. This caused timing
issues and added unnecessary complexity.
Now the magnification logic is split into two halves. The first half is responsible
for tracking the magnified portion of the screen and serving as a policy for which
windows can be magnified, and it is part of the window manager. This part exposes
higher-level APIs allowing interested parties with the right permissions to control
the magnification of a given display. The APIs also allow a client to register for
callbacks on interesting changes, such as a resize of the magnified region. This part
serves as a mediator between magnification controllers and the window manager.
The second half is a controller that is responsible for driving the magnification
state based on touch interactions. It also presents a highlight when magnified to
indicate the magnified portion of the screen. The controller is responsible for auto
zooming out in case the user context changes - rotation, a new activity. The controller
also auto pans if a dialog appears that does not intersect the magnified frame.
bug:7410464
2. By design, screen magnification and touch exploration work separately and together. If
magnification is enabled, the user sees a larger version of the widgets and a subsection
of the screen content. Accessibility services use the introspection APIs to "see" what
is on the screen so they can speak it, navigate to the next item in response to a
gesture, etc. Hence, the information returned to accessibility services has to reflect
what a sighted user would see on the screen. Therefore, if the screen is magnified,
we need to adjust the bounds and position of the infos describing views in a magnified
window such that the info bounds are equivalent to what the user sees.
To improve performance we keep accessibility node info caches in the client process.
However, when the magnification state changes we have to clear these caches, since the
bounds of the cached infos no longer reflect the screen content, which just got smaller
or larger.
This patch propagates not only the window scale, as before, but also the X/Y pan and the
bounds of the magnified portion of the screen to the introspected app. This information
is used to adjust the bounds of the node infos coming from this window such that the
reported bounds are the same as the user sees, not as the app thinks they are. Note that
if magnification is enabled we zoom the content and pan it along the X and Y axes. Also
recomputed is the isVisibleToUser property of the reported infos, since in a magnified
state the user sees a subset of the window content, and the views not in the magnified
viewport should be reported as not visible to the user.
bug:7344059
Change-Id: I6f7832c7a6a65c5368b390eb1f1518d0c7afd7d2

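A minimal sketch of the bounds adjustment this describes, assuming the scale and X/Y pan are propagated to the client as plain floats; the helper and field names are illustrative, not the actual framework code:

```java
import android.graphics.Rect;
import android.graphics.RectF;

final class MagnifiedBounds {
    /** Scales the app-reported bounds and applies the X/Y pan so they match
     *  what the user actually sees on the magnified screen. */
    static Rect adjust(Rect boundsInScreen, float scale, float offsetX, float offsetY) {
        RectF f = new RectF(boundsInScreen);
        f.left = f.left * scale + offsetX;
        f.top = f.top * scale + offsetY;
        f.right = f.right * scale + offsetX;
        f.bottom = f.bottom * scale + offsetY;
        Rect out = new Rect();
        f.round(out);
        return out;
    }

    /** A view whose adjusted bounds miss the magnified viewport should be
     *  reported as not visible to the user. */
    static boolean isVisibleToUser(Rect adjustedBounds, Rect magnifiedViewport) {
        return Rect.intersects(adjustedBounds, magnifiedViewport);
    }
}
```
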
1. The way the magnified region was computed was simplistic and
incorrect: it ignored window layering, resulting in broken
behavior. For example, if the IME is up, everything else
is magnified and the IME is not. If the keyguard then appears and
covers the IME, the magnified region should expand, since the keyguard
completely covers the unmagnified IME window, but it did not.
bug:7138937
Change-Id: I21414635aefab700ce75d40f3e913c1472cba202

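A simplified sketch of layering-aware region computation, under the assumption that windows are walked from topmost to bottommost and only the still-uncovered part of each window contributes; a keyguard covering the IME thus removes the IME's "don't magnify" contribution. WindowLayer is a hypothetical stand-in for the window manager's per-window state:

```java
import android.graphics.Region;
import java.util.List;

final class MagnifiedRegionComputer {
    static final class WindowLayer {
        final Region touchableRegion;
        final boolean magnifiable;
        WindowLayer(Region touchableRegion, boolean magnifiable) {
            this.touchableRegion = touchableRegion;
            this.magnifiable = magnifiable;
        }
    }

    /** windowsTopToBottom must be ordered from topmost to bottommost. */
    static Region computeMagnifiedRegion(List<WindowLayer> windowsTopToBottom,
            Region screen) {
        Region available = new Region(screen); // not yet covered from above
        Region magnified = new Region();
        for (WindowLayer w : windowsTopToBottom) {
            Region visible = new Region(w.touchableRegion);
            visible.op(available, Region.Op.INTERSECT); // only uncovered parts count
            if (w.magnifiable) {
                magnified.op(visible, Region.Op.UNION);
            }
            available.op(w.touchableRegion, Region.Op.DIFFERENCE);
        }
        return magnified;
    }
}
```
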
This change is the initial check-in of the screen magnification
feature. This feature enables magnification of the screen via
global gestures (assuming it has been enabled from settings)
to allow a low-vision user to efficiently use an Android device.
Interaction model:
1. Triple tap toggles permanent screen magnification, magnifying
the area around the location of the triple tap. One can think of the
location of the triple tap as the center of the magnified viewport.
For example, a triple tap when not magnified would magnify the screen
and leave it in a magnified state. Triple tapping when magnified would
clear magnification and leave the screen in a non-magnified state.
2. Triple tap and hold magnifies the screen if not magnified and enables
viewport dragging mode until the finger goes up. One can think of this
mode as a way to move the magnified viewport, since the area around the
moving finger will be magnified to fit the screen. For example, if the
screen was not magnified and the user triple taps and holds, the screen
would magnify and the viewport would follow the user's finger. When the
finger goes up, the screen zooms back out. If the same interaction
is performed when the screen is magnified, the viewport movement is
the same, but when the finger goes up the screen stays magnified.
In other words, the initial magnified state is sticky.
3. Pinching with any number of additional fingers while viewport dragging
is enabled, i.e. the user triple tapped and holds, adjusts the
magnification scale, which becomes the current default magnification
scale. The next time the user magnifies, the same magnification scale
is used.
4. When in a permanent magnified state, the user can use two or more fingers
to pan the viewport. Note that in this mode the content is panned, as
opposed to the viewport dragging mode in which the viewport is moved.
5. When in a permanent magnified state, the user can use three or more
fingers to change the magnification scale, which becomes the current
default magnification scale. The next time the user magnifies, the same
magnification scale is used.
6. The magnification scale is persisted in settings and in the cloud.
Note: Since two fingers are used to pan the content in a permanently magnified
state, no other two-finger gestures in touch exploration or applications
will work unless the user zooms out to the normal state, where all gestures
work as expected. This is an intentional tradeoff to allow efficient
panning, since in a permanently magnified state this is the dominant
action to be performed.
Design:
1. The window manager exposes APIs for setting an accessibility transformation,
which is a scale and offsets for the X and Y axes. The window manager queries
the window policy for which windows will not be magnified. For example,
the IME windows and the navigation bar are not magnified, including windows
that are attached to them.
2. The accessibility features such as screen magnification and touch
exploration are now implemented as a sequence of transformations on the
event stream. The accessibility manager service may request each
of these features or both. The behavior of each feature does not change
based on whether another one is enabled.
3. The screen magnifier keeps a viewport of the content that is magnified,
which is surrounded by a glow in a magnified state. Interactions outside
of the viewport are delegated directly to the application without
interpretation. For example, a triple tap on the letter 'a' of the IME
would type three letters instead of toggling the magnified state. The viewport
is updated on screen rotation and on window transitions. For example,
when the IME pops up, the viewport shrinks.
4. The glow around the viewport is implemented as a special type of window
that does not take input focus, cannot be touched, and is laid out in
screen coordinates with width and height matching those of the screen.
When the magnified region changes, the root view of the window draws the
highlight, but the size of the window does not change - unless a rotation
happens. All changes in the viewport size, and showing or hiding it, are
animated.
5. The viewport is encapsulated in a class that knows how to show,
hide, and resize the viewport - potentially animating that.
This class uses the new animation framework for animations.
6. The magnification is handled by a magnification controller that
keeps track of the current transformation to be applied to the screen
content and the desired one. If these two are not the same, it is the
responsibility of the magnification controller to reconcile them by
potentially animating the transition from one to the other.
7. A display content observer watches for window transitions, screen
rotations, and requests for a rectangle on the screen to be made visible.
This class is responsible for handling interesting state changes, such
as changing the viewport bounds on IME pop-up or screen rotation,
panning the content to make a requested rectangle visible on the
screen, etc.
8. To implement viewport updates, the window manager was updated with APIs
to watch for window transitions and for when a rectangle has been requested
on the screen. These APIs are protected by a signature-level permission.
Also, a parcelable and poolable window info class has been added, with
APIs for getting the window info given the window token. This enables
getting some useful information about a window. These APIs are also
signature protected.
bug:6795382
Change-Id: Iec93da8bf6376beebbd4f5167ab7723dc7d9bd00

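A hypothetical sketch of the reconciliation loop in design point 6: the controller keeps the currently applied transform and the desired one and animates between them when they diverge. Everything here, including the commented-out window manager call, is an illustrative placeholder, not the actual framework API:

```java
import android.animation.ValueAnimator;

final class MagnificationControllerSketch {
    private float currentScale = 1f;
    private float desiredScale = 1f;

    /** Applies one step of the transform; a real controller would hand the
     *  scale and X/Y offsets to the window manager (hypothetical call below). */
    private void apply(float scale) {
        currentScale = scale;
        // windowManager.setAccessibilityTransformation(scale, offsetX, offsetY);
    }

    /** Sets the desired scale, reconciling it with the current one by
     *  animating when requested. */
    void setScale(float scale, boolean animate) {
        desiredScale = scale;
        if (!animate) {
            apply(desiredScale);
            return;
        }
        ValueAnimator animator = ValueAnimator.ofFloat(currentScale, desiredScale);
        animator.addUpdateListener(a -> apply((float) a.getAnimatedValue()));
        animator.start();
    }
}
```
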