Trilinear filtering can currently only be enabled for OpenGL renderers.
Do not print a warning if the renderer is not OpenGL, as it may
confuse users when nothing is actually wrong.
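For illustration only (not the actual scrcpy code), the renderer
backend can be detected from its info, so the OpenGL-only path is
silently skipped on other backends:

    #include <stdbool.h>
    #include <string.h>
    #include <SDL2/SDL.h>

    // Only enable the OpenGL-specific filtering path when the active
    // renderer is an OpenGL backend; otherwise do nothing and do not warn.
    static bool
    renderer_is_opengl(SDL_Renderer *renderer) {
        SDL_RendererInfo info;
        if (SDL_GetRendererInfo(renderer, &info)) {
            return false;
        }
        // matches "opengl", "opengles", "opengles2", ...
        return !strncmp(info.name, "opengl", 6);
    }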
On macOS with renderer "metal", HiDPI scaling may be incorrect on
initialization when several displays are connected.
Resetting the window size fixes the problem.
Refs #15 <https://github.com/Genymobile/scrcpy/issues/15>
Position and scale the content "manually" instead of relying on the
renderer "logical size".
This avoids possible rounding differences between the computed window
size and the content size, which could cause one row or column of
black pixels at the bottom or on the right.
This also avoids HiDPI scale issues, by computing the scaling manually.
This will also make it possible to draw items at their expected
(unscaled) size on the screen.
Fixes #15 <https://github.com/Genymobile/scrcpy/issues/15>
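A minimal sketch of the idea, with illustrative names (not the actual
scrcpy implementation): compute the destination rectangle in output
pixels from the renderer output size and the content size, then draw
into it, instead of letting SDL scale a logical size.

    #include <stdint.h>
    #include <SDL2/SDL.h>

    // Compute where to draw the content inside the window, in output
    // (pixel) coordinates, preserving the content aspect ratio and
    // centering it. SDL_GetRendererOutputSize() returns the size in
    // pixels, so HiDPI scaling is handled explicitly here.
    static SDL_Rect
    compute_dest_rect(SDL_Renderer *renderer, int content_w, int content_h) {
        int out_w, out_h;
        SDL_GetRendererOutputSize(renderer, &out_w, &out_h);

        SDL_Rect rect;
        if ((int64_t) out_w * content_h > (int64_t) content_w * out_h) {
            // output wider than the content: black borders left and right
            rect.h = out_h;
            rect.w = out_h * content_w / content_h;
        } else {
            // output taller than the content: black borders top and bottom
            rect.w = out_w;
            rect.h = out_w * content_h / content_w;
        }
        rect.x = (out_w - rect.w) / 2;
        rect.y = (out_h - rect.h) / 2;
        return rect;
    }

The frame would then be drawn with
SDL_RenderCopy(renderer, texture, NULL, &rect).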
In maximized state (but not fullscreen), it was possible to resize to
fit the device screen (with Ctrl+x or double-clicking on black borders).
This caused problems on macOS with the "expand to fullscreen" feature,
which behaves like a fullscreen mode but is seen as maximized by SDL.
In that state, resizing to fit causes unexpected results.
To keep the behavior consistent on all platforms, just disable "resize
to fit" when the window is maximized.
On Windows, in maximized+fullscreen state, disabling fullscreen mode
unexpectedly triggers the "restored" then "maximized" events, leaving
the window in a weird state (maximized according to the events, but not
maximized visually).
Moreover, apply_pending_resize() asserts that fullscreen is disabled.
To avoid the issue, if fullscreen is set, just ignore the "restored"
event.
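For illustration, with hypothetical state fields, the event handling
could look like:

    #include <stdbool.h>
    #include <SDL2/SDL.h>

    struct screen_state {
        bool fullscreen;
        bool maximized;
    };

    // On Windows, leaving fullscreen from a maximized window triggers
    // "restored" then "maximized"; ignore "restored" while fullscreen is
    // still set to avoid ending up in an inconsistent state.
    static void
    handle_window_event(struct screen_state *state,
                        const SDL_WindowEvent *event) {
        switch (event->event) {
            case SDL_WINDOWEVENT_MAXIMIZED:
                state->maximized = true;
                break;
            case SDL_WINDOWEVENT_RESTORED:
                if (state->fullscreen) {
                    // spurious event on Windows, see above
                    break;
                }
                state->maximized = false;
                // ...apply any pending resize here...
                break;
        }
    }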
If the content size changes (due to rotation for example) while the
window is maximized or fullscreen, the resize must be applied once
fullscreen and maximized are disabled.
The previous strategy was to store the windowed size, compute the
target size on rotation, and apply it on window restoration. But
tracking the windowed size (while ignoring the non-windowed size) was
tricky, due to the unspecified order of SDL events (e.g. size changes
may be notified before "maximized" events), race conditions when
reading window flags, and different behaviors across platforms.
To simplify the whole resize management, store the old content size (the
frame size, possibly rotated) when it changes while the window is
maximized or fullscreen, so that the new optimal size can be computed on
window restoration.
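A simplified sketch of this strategy, with illustrative structures
(not the actual scrcpy code):

    #include <stdbool.h>
    #include <SDL2/SDL.h>

    struct content_size { int width, height; };

    struct screen {
        SDL_Window *window;
        bool fullscreen;
        bool maximized;
        struct content_size content_size;     // current (possibly rotated)
        struct content_size old_content_size; // set while resize is deferred
    };

    // Hypothetical helper: resize the window to fit the given content size.
    static void
    resize_for_content(struct screen *screen, struct content_size size) {
        SDL_SetWindowSize(screen->window, size.width, size.height);
    }

    // Called when the frame size (or rotation) changes. If the window is
    // windowed, resize immediately; otherwise only remember the old content
    // size, so the optimal window size can be recomputed on restoration.
    static void
    set_content_size(struct screen *screen, struct content_size new_size) {
        if (!screen->fullscreen && !screen->maximized) {
            resize_for_content(screen, new_size);
        } else if (!screen->old_content_size.width) {
            // defer: remember the content size the window was sized for
            screen->old_content_size = screen->content_size;
        }
        screen->content_size = new_size;
    }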
The window dimensions are integers, so resizing to fit the content may
not be exact.
When computing the optimal size, rounding could alternately reduce
the width and the height by a few pixels, making the "optimal size"
unstable.
To avoid this problem, check whether the current size is already
optimal by keeping either the width or the height.
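For illustration, with a hypothetical helper and integer sizes:

    #include <stdbool.h>

    struct size { int width, height; };

    // The current size is considered already optimal if keeping either its
    // width or its height, and deriving the other dimension from the
    // content aspect ratio (with integer rounding), gives back the current
    // size. This prevents repeated "resize to fit" from shrinking the
    // window by one pixel alternately in each direction.
    static bool
    is_optimal_size(struct size current, struct size content) {
        return current.height == current.width * content.height / content.width
            || current.width == current.height * content.width / content.height;
    }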
Move the window-to-frame coordinates conversion from the input manager
to the screen.
This will make it possible to apply more screen-related
transformations without impacting the input manager.
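An illustrative signature for such a conversion, assuming the screen
knows the rectangle where the content is drawn (names are
hypothetical):

    struct point { int x, y; };
    struct rect { int x, y, w, h; };

    // Map a point in window coordinates to frame coordinates, so the input
    // manager does not need to know how the content is positioned/scaled.
    static struct point
    screen_convert_window_to_frame_coords(struct rect content_rect,
                                          struct point window_point,
                                          int frame_w, int frame_h) {
        struct point result = {
            .x = (window_point.x - content_rect.x) * frame_w / content_rect.w,
            .y = (window_point.y - content_rect.y) * frame_h / content_rect.h,
        };
        return result;
    }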
Add Ctrl+Left and Ctrl+Right shortcuts to rotate the display (the
content of the scrcpy window).
Contrary to --lock-video-orientation, the rotation has no impact on
recording, and can be changed dynamically (and immediately).
Fixes #218 <https://github.com/Genymobile/scrcpy/issues/218>
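For illustration, the coordinate mapping that such a client-side
rotation implies; the direction convention and names here are
assumptions, not the actual implementation:

    // Map a point expressed in the rotated display back to the original
    // frame, for a rotation of `rotation` quarter-turns counter-clockwise.
    // Coordinates are normalized to [0, 1], with the y axis pointing down.
    struct norm_point { float x, y; };

    static struct norm_point
    unrotate_point(struct norm_point p, unsigned rotation) {
        struct norm_point r = p;
        switch (rotation % 4) {
            case 1: r.x = 1 - p.y; r.y = p.x;     break;
            case 2: r.x = 1 - p.x; r.y = 1 - p.y; break;
            case 3: r.x = p.y;     r.y = 1 - p.x; break;
        }
        return r;
    }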
Now, get_window_size() returns the current window size (fullscreen or
not), while get_windowed_window_size() always returns the windowed
size (the size when fullscreen is disabled).
The SDL video subsystem is not necessary if we don't display the video.
Move the sdl_init_and_configure() function from screen.c to scrcpy.c,
because it is not only related to the screen display.
Limit source code to 80 characters, and declare function return types
and modifiers on a separate line.
This avoids very long lines, and keeps all function names aligned.
(We do this on VLC, and I like it.)
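For example (illustrative function):

    // Return type and modifiers on their own line: long signatures fit in
    // 80 columns, and function names all start at the same column.
    static inline int
    compute_scaled_height(int width, int frame_width, int frame_height) {
        return width * frame_height / frame_width;
    }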
It is very convenient when I play a mobile game and watch a video at
the same time.
Tested on Linux Mint Cinnamon as well as on Windows 10.
Signed-off-by: Yu-Chen Lin <npes87184@gmail.com>
SDL_CreateTexture() is called both during initialization and on frame
size change.
To avoid inconsistent changes to the argument values, factor the
calls into a single function, create_texture().
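A sketch of such a helper, assuming a YV12 streaming texture for the
decoded video frames:

    #include <SDL2/SDL.h>

    // Single place where the texture creation arguments are defined, used
    // both at initialization and when the frame size changes.
    static SDL_Texture *
    create_texture(SDL_Renderer *renderer, int frame_w, int frame_h) {
        return SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12,
                                 SDL_TEXTUREACCESS_STREAMING,
                                 frame_w, frame_h);
    }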
High DPI support is enabled by default, so that the renderer uses the
full resolution of High DPI screens.
However, there are still mouse coordinate problems on some macOS
systems with High DPI support (but not all), so expose a way to
disable it.
Use high DPI if available.
Note that on Mac OS X, setting this flag is not sufficient:
> On Apple's OS X you must set the NSHighResolutionCapable Info.plist
> property to YES, otherwise you will not receive a High DPI OpenGL
> display.
<https://wiki.libsdl.org/SDL_CreateWindow#flags>
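A minimal illustration of requesting the flag (the window title, size
and option handling are placeholders):

    #include <stdbool.h>
    #include <SDL2/SDL.h>

    // Create the window, optionally requesting High DPI support. The flag
    // has no effect on platforms without High DPI support; on macOS, the
    // NSHighResolutionCapable Info.plist property is also required.
    static SDL_Window *
    create_window(int width, int height, bool allow_highdpi) {
        Uint32 flags = SDL_WINDOW_HIDDEN | SDL_WINDOW_RESIZABLE;
        if (allow_highdpi) {
            flags |= SDL_WINDOW_ALLOW_HIGHDPI;
        }
        return SDL_CreateWindow("scrcpy",
                                SDL_WINDOWPOS_UNDEFINED,
                                SDL_WINDOWPOS_UNDEFINED,
                                width, height, flags);
    }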
screen_render() should not be called on initialization:
1. it is useless, since the window is hidden until the first frame;
2. it writes an empty texture (probably green) to the renderer.
The SDL cleanup no longer crashes on exit, probably since the
memory corruption caused by calling SDLNet_TCP_Close() too early has
been resolved.
On startup, the client has to:
1. listen on a port
2. push and start the server to the device
3. wait for the server to connect (accept)
4. read device name and size
5. initialize SDL
6. initialize the window and renderer
7. show the window
From the execution of the app_process command to start the server on the
device, to the execution of the java main method, it takes ~800ms. As a
consequence, step 3 also takes ~800ms on the client.
Once this step completes, the client initializes SDL, which takes
~500ms.
These two expensive actions are executed sequentially:
           HOST                         DEVICE
listen on port        |                    |
push/start the server |------------------->|| app_process loads the jar
accept the connection .         ^          ||
                      .         |          ||
                      .         | WASTE    ||
                      .         | OF       ||
                      .         | TIME     ||
                      .         |          ||
                      .         |          ||
                      .         v           X execution of our java main
connection accepted   |<-------------------|  connect to the host
init SDL             ||                    |
                     || ,------------------|  send frames
                     || |,-----------------|
                     || ||,----------------|
                     || |||,---------------|
                     || ||||,--------------|
init window/renderer  | |||||,-------------|
display frames        |<++++++-------------|
                        (many frames skipped)
The rationale for step 3 occurring before step 5 is that initializing
SDL replaces the SIGTERM handler so that Ctrl+C is delivered via the
event loop; if SDL were initialized first, pressing Ctrl+C during
step 3 would not work (since the blocking accept prevents the event
loop from running).
But this is not so important; let's parallelize the SDL initialization
with the app_process execution (we'll just add a timeout to the
connection):
           HOST                         DEVICE
listen on port        |                    |
push/start the server |------------------->|| app_process loads the jar
init SDL             ||                    ||
                     ||                    ||
                     ||                    ||
                     ||                    ||
                     ||                    ||
                     ||                    ||
accept the connection .                    ||
                      .                     X execution of our java main
connection accepted   |<-------------------|  connect to the host
init window/renderer  |                    |
display frames        |<-------------------|  send frames
                      |<-------------------|
In addition, show the window only once the first frame is available to
avoid flickering (opening a black window for 100~200ms).
Note: the window and renderer are initialized after the connection is
accepted because they use the device information received from the
device.
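For illustration, an accept with a timeout could be implemented with
POSIX select(); scrcpy's actual network layer may differ:

    #include <sys/select.h>
    #include <sys/socket.h>

    // Since SDL is now initialized while app_process loads the server, the
    // accept() is bounded by a timeout so a failed server start does not
    // block the client forever.
    static int
    accept_with_timeout(int server_socket, unsigned timeout_ms) {
        fd_set read_fds;
        FD_ZERO(&read_fds);
        FD_SET(server_socket, &read_fds);

        struct timeval timeout = {
            .tv_sec = timeout_ms / 1000,
            .tv_usec = (timeout_ms % 1000) * 1000,
        };

        int r = select(server_socket + 1, &read_fds, NULL, NULL, &timeout);
        if (r <= 0) {
            return -1; // timeout or error
        }
        return accept(server_socket, NULL, NULL);
    }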
Replace screen_update() by a higher-level screen_update_frame() handling
the whole frame updating, so that scrcpy.c just calls it without managing
implementation details.
The video screen size on the client may differ from the real device
screen size (e.g. the video stream may be scaled down). As a
consequence, mouse events must be scaled to match the real device
coordinates.
For this purpose, make the client send the video screen size along with
the absolute pointer location, and the server scale the location to
match the real device size before injecting mouse events.
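A sketch of the scaling performed on the device side (types and names
are illustrative):

    struct screen_size { int width, height; };
    struct position {
        struct screen_size screen_size; // video size, sent by the client
        int x, y;                       // location in video coordinates
    };

    // Rescale the pointer location from video coordinates to real device
    // coordinates before injecting the mouse event.
    static void
    scale_position(struct position *pos, struct screen_size device_size) {
        pos->x = pos->x * device_size.width / pos->screen_size.width;
        pos->y = pos->y * device_size.height / pos->screen_size.height;
        pos->screen_size = device_size;
    }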
To control the device from the computer:
- retrieve mouse and keyboard SDL events;
- convert them to Android events;
- serialize them (see the sketch after this list);
- send them on the same socket used by the video stream (but in the
opposite direction);
- deserialize the events on the Android side;
- inject them using the InputManager.
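For illustration, the serialization step could use a format like the
following; the field layout and type values here are hypothetical, the
real protocol is defined by the client and server sources:

    #include <stddef.h>
    #include <stdint.h>

    // Write a 16-bit value in big-endian order, as the server reads fields
    // in network byte order.
    static size_t
    write_u16_be(uint8_t *buf, uint16_t value) {
        buf[0] = value >> 8;
        buf[1] = value & 0xff;
        return 2;
    }

    // Serialize a mouse event: a type byte, the action, the location and
    // the video screen size (so the server can rescale the location).
    static size_t
    serialize_mouse_event(uint8_t *buf, uint8_t action,
                          uint16_t x, uint16_t y,
                          uint16_t screen_width, uint16_t screen_height) {
        size_t i = 0;
        buf[i++] = 2; // hypothetical "mouse event" type
        buf[i++] = action;
        i += write_u16_be(&buf[i], x);
        i += write_u16_be(&buf[i], y);
        i += write_u16_be(&buf[i], screen_width);
        i += write_u16_be(&buf[i], screen_height);
        return i;
    }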