The options --no-display and --no-control are independent.
The controller was not initialized when no display was requested,
because it was assumed that no control could occur without display. But
that's not true (anymore): for example, it is possible to pass
--turn-screen-off.
Fixes #2426 <https://github.com/Genymobile/scrcpy/issues/2426>
The input manager was partially initialized statically, but a call to
input_manager_init() was needed anyway, so initialize all the fields
from the "constructor".
This is consistent with the initialization of the other structs.
The screen must not be destroyed immediately on close, because it may
still receive events from the decoder (which would cause undefined
behavior). But the visual window must still be closed immediately.
The video buffer is now an internal detail of the screen component.
Since the screen is plugged into the decoder via the frame sink trait,
the decoder no longer accesses the video buffer.
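As a rough illustration, such a frame sink trait may look like the
following sketch in C (names approximate, not necessarily the exact
API); the decoder only pushes decoded frames to this interface, without
knowing that a video buffer sits behind it:

    #include <stdbool.h>
    #include <libavutil/frame.h>

    struct sc_frame_sink;

    struct sc_frame_sink_ops {
        bool (*open)(struct sc_frame_sink *sink);
        void (*close)(struct sc_frame_sink *sink);
        // called by the decoder for each decoded frame; the screen
        // implementation forwards it to its internal video buffer
        bool (*push)(struct sc_frame_sink *sink, const AVFrame *frame);
    };

    struct sc_frame_sink {
        const struct sc_frame_sink_ops *ops;
    };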
This flag forced the decoder to wait for the previous frame to be
consumed by the display.
It was initially implemented as a compilation flag for testing, not
intended to be exposed at runtime. But to remove ifdefs and to allow
users to test this flag easily, it was finally exposed by commit
ebccb9f6cc.
In practice, it turned out to be useless: it had no practical impact,
and it did not solve or mitigate any performance issues causing frame
skipping.
But it added some complexity to the codebase: it required an additional
condition variable, and made video buffer calls potentially blocking,
which in turn required code to interrupt them on exit.
To prepare support for multiple sinks plugged into the decoder (for
example display and v4l2), the blocking call used for pacing the
decoder output becomes unacceptable, so just remove this useless
"feature".
The screen receives callbacks from the decoder, fed by the stream.
The decoder is run from the stream thread, so waiting for the end of
stream is sufficient to avoid possible use-after-destroy.
When --no-display was passed, screen_destroy() was called while
screen_init() was never called.
In practice, it did not crash because it just freed NULL pointers, but
it was still incorrect.
A skipped frame is detected when the producer offers a frame while the
current pending frame has not been consumed.
However, the producer (in practice the decoder) is not interested in the
fact that a frame has been skipped, only the consumer (the renderer) is.
Therefore, notify frame skips via a consumer callback. This makes it
possible to manage the skipped and rendered frame counts in the same
place, and to remove fps_counter from the decoder.
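As an illustration, here is a minimal sketch of what the consumer
callbacks and the non-blocking producer path may look like (all names
are hypothetical):

    #include <stdbool.h>

    struct video_buffer;

    // callbacks registered by the consumer (the renderer)
    struct video_buffer_callbacks {
        void (*on_frame_available)(struct video_buffer *vb, void *userdata);
        void (*on_frame_skipped)(struct video_buffer *vb, void *userdata);
    };

    struct video_buffer {
        const struct video_buffer_callbacks *cbs;
        void *cbs_userdata;
        bool pending_consumed;
        // frames, mutex, etc. omitted
    };

    // producer side: offering a frame never blocks; if the pending frame
    // has not been consumed yet, report a skip to the consumer instead
    static void
    video_buffer_notify_offered(struct video_buffer *vb) {
        bool skipped = !vb->pending_consumed;
        vb->pending_consumed = false;
        if (skipped) {
            vb->cbs->on_frame_skipped(vb, vb->cbs_userdata);
        } else {
            vb->cbs->on_frame_available(vb, vb->cbs_userdata);
        }
    }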
As soon as the stream is started, the video buffer may notify that a
new frame is available.
In order to pass this event to the screen without a race condition, the
screen must be initialized before the stream is started.
The functions SDL_malloc(), SDL_free() and SDL_strdup() were used only
because strdup() was not available everywhere.
Now that it is available, use the native versions of these functions.
The size, point and position structs were defined in common.h. Move
them to coords.h so that common.h can be used for generic code to be
included in all source files.
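For reference, the moved structs are essentially plain value types
along these lines (sketch; exact field types may differ):

    // coords.h (sketch)
    #include <stdint.h>

    struct size {
        uint16_t width;
        uint16_t height;
    };

    struct point {
        int32_t x;
        int32_t y;
    };

    struct position {
        // the video size may differ from the real device screen size,
        // so store the size the point coordinates are relative to
        struct size screen_size;
        struct point point;
    };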
This avoids passing specific option values individually. Since these
functions are static (internal to the file), it is not a problem to
make them depend on scrcpy_options.
Refs #1623 <https://github.com/Genymobile/scrcpy/pull/1623>
Signed-off-by: Romain Vimont <rom@rom1v.com>
The header scrcpy.h is intended to be the "public" API. It should not
depend on other internal headers.
Therefore, declare all required structs in this header and adapt
internal code.
Add a command-line option to force "adb forward", without attempting
"adb reverse" first.
This is especially useful for using SSH tunnels without enabling remote
port forwarding.
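For example, assuming the new option is exposed as --force-adb-forward,
a session over SSH could look like this (local port forwarding only):

    ssh -CN -L5037:localhost:5037 -L27183:localhost:27183 remote_host
    # then, from another terminal
    scrcpy --force-adb-forward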
The verbosity was set either to info (in release mode) or debug (in
debug mode).
Add a command-line argument to change it, so that users can enable
debug logs with the release build:
scrcpy -Vdebug
Position and scale the content "manually" instead of relying on the
renderer "logical size".
This avoids possible rounding differences between the computed window
size and the content size, causing one row or column of black pixels on
the bottom or on the right.
This also avoids HiDPI scale issues, by computing the scaling manually.
This will also make it possible to draw items at their expected size on
the screen (unscaled).
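A minimal sketch of the manual computation (hypothetical helper, not
the actual implementation), centering the content and preserving its
aspect ratio without SDL_RenderSetLogicalSize():

    #include <stdint.h>
    #include <SDL2/SDL.h>

    static SDL_Rect
    get_content_rect(int win_w, int win_h, int content_w, int content_h) {
        SDL_Rect rect;
        if ((int64_t) win_w * content_h > (int64_t) win_h * content_w) {
            // window too wide: the content height fills the window height
            rect.h = win_h;
            rect.w = win_h * content_w / content_h;
        } else {
            // window too tall (or exact): the content width fills the width
            rect.w = win_w;
            rect.h = win_w * content_h / content_w;
        }
        rect.x = (win_w - rect.w) / 2;
        rect.y = (win_h - rect.h) / 2;
        return rect;
    }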
Fixes #15 <https://github.com/Genymobile/scrcpy/issues/15>
By default, Ctrl+C just kills the process on Windows. This caused
corrupted video files when recording.
Handle Ctrl+C so that cleanup code is executed.
Fixes #818 <https://github.com/Genymobile/scrcpy/issues/818>
Now that the server can access the Android settings and clean up
properly, handle the "show touches" option from the server.
The initial state is now correctly restored, even on device
disconnection.
Accept a range of ports to listen to, so that it does not fail if
another instance of scrcpy is currently starting.
The range can be passed via the command line:
scrcpy -p 27183:27186
scrcpy -p 27183 # implicitly 27183:27183, as before
The default is 27183:27199.
Closes #951 <https://github.com/Genymobile/scrcpy/issues/951>
Expose an option to configure how key/text events are forwarded to the
Android device.
Enabling the option avoids issues when combining multiple keys to enter
special characters, but breaks the expected behavior of alpha keys in
games (typically WASD).
Fixes <https://github.com/Genymobile/scrcpy/issues/650>
To packetize the H.264 raw stream, av_parser_parse2() (called by
av_read_frame()) knows that it has received a full frame only after it
has received some data for the next frame. As a consequence, the client
always waited until the next frame before sending the current frame to
the decoder!
On the device side, we know the packet boundaries. To reduce latency,
make the device always transmit the "frame meta" so that the stream can
be packetized manually (this was already implemented to send the PTS,
but only enabled on recording).
On the client side, replace av_read_frame() by manual packetizing and
parsing.
<https://stackoverflow.com/questions/50682518/replacing-av-read-frame-to-reduce-delay>
<https://trac.ffmpeg.org/ticket/3354>
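Roughly, the client-side parsing then looks like this (simplified
sketch: buffer management, the frame meta header and error handling are
omitted; codec_ctx stands for the decoder's AVCodecContext and
net_recv() for the socket read):

    #include <libavcodec/avcodec.h>

    AVCodecParserContext *parser = av_parser_init(AV_CODEC_ID_H264);

    uint8_t buf[0x10000];
    ssize_t r = net_recv(socket, buf, sizeof(buf));

    uint8_t *out_data = NULL;
    int out_len = 0;
    av_parser_parse2(parser, codec_ctx, &out_data, &out_len,
                     buf, r, AV_NOPTS_VALUE, AV_NOPTS_VALUE, -1);
    if (out_len) {
        // a complete frame is available: send it to the decoder now,
        // without waiting for data belonging to the next frame
        AVPacket packet;
        av_init_packet(&packet);
        packet.data = out_data;
        packet.size = out_len;
        avcodec_send_packet(codec_ctx, &packet);
    }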
The FPS counter was called only on new frames, so it could not print
values regularly, especially when the framerate is very low (when the
device surface does not change).
In the extreme case, it was never able to display 0 fps.
Add a separate thread to print framerate every second.
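A rough sketch of such a thread using SDL primitives (the struct and
its fields are illustrative; the real implementation would also track
the next print deadline):

    #include <stdbool.h>
    #include <SDL2/SDL.h>

    struct fps_counter {
        SDL_mutex *mutex;
        SDL_cond *state_cond;
        bool interrupted;
        unsigned rendered;
        unsigned skipped;
    };

    static int
    run_fps_counter(void *data) {
        struct fps_counter *counter = data;
        SDL_LockMutex(counter->mutex);
        while (!counter->interrupted) {
            // wake up at least once per second, even if no frame arrived
            SDL_CondWaitTimeout(counter->state_cond, counter->mutex, 1000);
            SDL_Log("%u fps (%u frames skipped)", counter->rendered,
                    counter->skipped);
            counter->rendered = 0;
            counter->skipped = 0;
        }
        SDL_UnlockMutex(counter->mutex);
        return 0;
    }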
The socket used the device-to-computer direction to stream the video and
the computer-to-device direction to send control events.
Some features, like copy-paste from device to computer, require sending
non-video data from the device to the computer.
To make them possible, use two sockets:
- one for streaming the video from the device to the client;
- one for control/events in both directions.
The cleanup is not linear: for example, the server must be stopped and
its sockets must be shut down after the stream and controller are
stopped (so that they don't continue processing garbage), but before
they are joined, to avoid a deadlock if they are blocked on a socket
read.
Simplify the spaghetti-cleanup by keeping track of initialization at
runtime.
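One possible shape for this (illustrative sketch, not the actual
implementation): record at runtime what has been initialized or
started, and run each cleanup step conditionally, in the required
order.

    #include <stdbool.h>

    struct cleanup_state {
        bool server_started;
        bool stream_started;
        bool controller_started;
    };

    // on exit, each step runs only if its flag is set, e.g.:
    //   if (state.controller_started) controller_stop(&controller);
    //   if (state.server_started)     server_stop(&server);  // unblock reads
    //   if (state.stream_started)     stream_join(&stream);  // then join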
Compositor bypass is meant for fullscreen games consuming lots of GPU
resources. For a light app that will usually be windowed, it only
causes unnecessary compositor suspends, which are especially visible
(and annoying) with complying window managers like KWin.
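Concretely, this amounts to setting the corresponding SDL hint; a
sketch, assuming the X11 bypass hint is the relevant one:

    // request SDL not to bypass the compositor for the scrcpy window
    SDL_SetHint(SDL_HINT_VIDEO_X11_NET_WM_BYPASS_COMPOSITOR, "0");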
Signed-off-by: Romain Vimont <rom@rom1v.com>
Replace the "global" control flag in the input_manager by a function
parameter to make explicit that the behavior depends on whether
--no-control has been set.
The SDL video subsystem is not necessary if we don't display the video.
Move the sdl_init_and_configure() function from screen.c to scrcpy.c,
because it is not only related to the screen display.
The description of scrcpy is "Display and control your Android device".
We want an option to disable display, another one to disable control.
For naming consistency, name it --no-display.
Also change the shortname to -N, so that we can use -n for --no-control
later.
Limit source code to 80 chars, and declare function return types and
modifiers on a separate line.
This helps avoid very long lines, and keeps all function names aligned.
(We do this on VLC, and I like it.)
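For example, a function written in this style looks like (illustrative
code):

    static void
    input_manager_process_text_input(struct input_manager *input_manager,
                                     const SDL_TextInputEvent *event) {
        // ...
    }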
The decoder initially read from the socket, decoded the video and sent
the decoded frames to the screen:
                 +----------+      +----------+
     socket ---> | decoder  | ---> |  screen  |
                 +----------+      +----------+
The design was simple, but the decoder had several responsibilities.
Then we added the recording feature, so we added a recorder, which
reused the packets received from the socket managed by the decoder:
                                     +----------+
                                ---> |  screen  |
                 +----------+ /      +----------+
     socket ---> | decoder  | ----
                 +----------+ \      +----------+
                                ---> | recorder |
                                     +----------+
This lack of separation of concerns now has concrete implications: we
could not (properly) disable the decoder/display to only record the
video.
Therefore, split the decoder to extract the stream:
                                     +----------+      +----------+
                                ---> | decoder  | ---> |  screen  |
                 +----------+ /      +----------+      +----------+
     socket ---> |  stream  | ----
                 +----------+ \      +----------+
                                ---> | recorder |
                                     +----------+
This will make it possible to record the stream without decoding the
video.
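A sketch of the resulting dispatch (illustrative names, simplified):
the stream reads packets from the socket and pushes each of them to its
optional sinks.

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    struct decoder;
    struct recorder;

    // hypothetical per-sink entry points
    bool decoder_push(struct decoder *decoder, const AVPacket *packet);
    bool recorder_push(struct recorder *recorder, const AVPacket *packet);

    struct stream {
        struct decoder *decoder;    // NULL if display is disabled
        struct recorder *recorder;  // NULL if recording is disabled
    };

    static bool
    stream_push_packet(struct stream *stream, const AVPacket *packet) {
        if (stream->decoder && !decoder_push(stream->decoder, packet)) {
            return false;
        }
        if (stream->recorder && !recorder_push(stream->recorder, packet)) {
            return false;
        }
        return true;
    }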
The cleanup order was not the reverse of the initialization order. As
a consequence, recorder_destroy() could theoretically be called even if
recorder_init() failed.
Implement recording to Matroska files.
The format to use is determined by the option -F/--record-format if set,
or by the file extension (".mp4" or ".mkv").
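Usage examples (assuming the existing -r/--record option for the
output file):

    scrcpy -r file.mkv       # Matroska, inferred from the extension
    scrcpy -r file -F mkv    # Matroska, forced by --record-format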
It is very convenient when I play a mobile game and watch a video at
the same time.
Tested on Linux Mint Cinnamon as well as Windows 10.
Signed-off-by: Yu-Chen Lin <npes87184@gmail.com>
This partially reverts commit f00c6c5b13.
On Ctrl+C, we need to execute cleanup code. For instance, if recording
is enabled, we need to write the MP4 file trailer on exit.
Custom SDL signal handlers were disabled because they led to the
process hanging on Ctrl+C during network calls on initialization, but
now it seems to work correctly: the network calls return immediately on
a signal.
In practice, proc_show_touches cannot be used uninitialized, since the
code checks the flag options->show_touches, but the compiler can't know
that, so initialize it to avoid the warning.
Enabling "show touches" involves the execution of an adb command, which
takes some time.
In order to parallelize, execute the command as soon as possible, but
reap the process only once everything is initialized.
If --show-touches is set, then the option must be disabled on quit.
Since this executes an adb command, it takes some time, so close the
window beforehand to avoid making the close button seem unresponsive.
Request SDL not to replace the SIGINT and SIGTERM handlers, so that the
process is immediately terminated on Ctrl+C.
This avoids process hanging on Ctrl+C during network calls on
initialization.
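Concretely, this can be requested via an SDL hint before SDL_Init();
for example:

    // ask SDL not to install its own SIGINT/SIGTERM handlers, so that
    // the default handlers terminate the process immediately
    SDL_SetHint(SDL_HINT_NO_SIGNAL_HANDLERS, "1");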
Some of them accepted a timeout, but it has not been used since
commit 9b056f5091.
On Windows and macOS, resizing blocks the event loop, so resizing events
are not triggered:
- <https://bugzilla.libsdl.org/show_bug.cgi?id=2077>
- <https://stackoverflow.com/a/40693139/1987178>
As a workaround, register an event watcher to render the screen from
another thread.
Since the whole event loop is blocked during resizing, the screen
content is not refreshed (on Windows and macOS) until resizing ends.
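A sketch of the workaround (the struct and the rendering call are
illustrative):

    static int
    event_watcher(void *data, SDL_Event *event) {
        struct screen *screen = data; // hypothetical
        if (event->type == SDL_WINDOWEVENT
                && event->window.event == SDL_WINDOWEVENT_SIZE_CHANGED) {
            // SDL calls the watcher as soon as the event is generated,
            // even while the event loop is blocked by the resize
            screen_render(screen); // hypothetical rendering call
        }
        return 0;
    }

    // registered once during initialization:
    //   SDL_AddEventWatch(event_watcher, screen);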
The serial is needed for many server actions, but this is an
implementation detail, so the caller should not have to provide it on
every call.
Instead, store the serial in the server instance on server_start().
This paves the way to implement the "adb forward" fallback properly.
SDL_net is not very suitable for scrcpy.
For example, SDLNet_TCP_Accept() is non-blocking, so we have to wrap it
by calling many SDL_Net-specific functions to make it blocking.
But above all, SDLNet_TCP_Open() creates a server socket only when no
IP is provided; otherwise, it creates a client socket. Therefore, it is
not possible
to create a server socket bound to localhost, so it accepts connections
from anywhere.
This is a problem for scrcpy, because on start, the application listens
for nearly 1 second until it accepts the first connection, supposedly
from the device. If someone on the local network manages to connect to
the server socket first, then they can stream arbitrary H.264 video.
This may be troublesome, for example during a public presentation ;-)
Provide our own simplified API (net.h) instead, implemented for the
different platforms.
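A sketch of what such a simplified interface may look like (names and
signatures approximate), essentially a thin portable wrapper over BSD
sockets:

    // net.h (sketch)
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>

    #ifdef _WIN32
    # include <winsock2.h>
      typedef SOCKET socket_t;
    #else
      typedef int socket_t;
    #endif

    // unlike SDLNet_TCP_Open(), a server socket can be bound to a
    // specific address (for example localhost only)
    socket_t net_listen(uint32_t addr, uint16_t port, int backlog);
    socket_t net_accept(socket_t server_socket);
    socket_t net_connect(uint32_t addr, uint16_t port);
    ssize_t net_recv(socket_t socket, void *buf, size_t len);
    ssize_t net_send(socket_t socket, const void *buf, size_t len);
    bool net_close(socket_t socket);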
The "screen control" handled user input, which happened to be only
used to control the screen.
The controller and screen were passed to every function. Instead, group
them in a struct input_manager.
The purpose is to add a new shortcut to enable/disable FPS counter. This
feature is not related to "screen control", and will require access to
the "frames" instance.
On startup, the client has to:
1. listen on a port
2. push and start the server to the device
3. wait for the server to connect (accept)
4. read device name and size
5. initialize SDL
6. initialize the window and renderer
7. show the window
From the execution of the app_process command to start the server on the
device, to the execution of the java main method, it takes ~800ms. As a
consequence, step 3 also takes ~800ms on the client.
Once complete, the client initializes SDL, which takes ~500ms.
These two expensive actions are executed sequentially:
                     HOST              DEVICE
       listen on port  |                 |
push/start the server  |---------------->||  app_process loads the jar
accept the connection  .   ^             ||
                       .   |             ||
                       .   |  WASTE      ||
                       .   |    OF       ||
                       .   |   TIME      ||
                       .   |             ||
                       .   |             ||
                       .   v              X  execution of our java main
  connection accepted  |<----------------|   connect to the host
             init SDL  ||                |
                       ||  ,-------------|   send frames
                       ||  |,------------|
                       ||  ||,-----------|
                       ||  |||,----------|
                       ||  ||||,---------|
 init window/renderer  |   |||||,--------|
       display frames  |<++++++----------|
                          (many frames skipped)
The rationale for step 3 occurring before step 5 is that initializing
SDL replaces the SIGTERM handler to receive the event in the event loop,
so pressing Ctrl+C during step 5 would not work (since it blocks the
event loop).
But this is not so important; let's parallelize the SDL initialization
with the app_process execution (we'll just add a timeout to the
connection):
                     HOST              DEVICE
       listen on port  |                 |
push/start the server  |---------------->||  app_process loads the jar
             init SDL  ||                ||
                       ||                ||
                       ||                ||
                       ||                ||
                       ||                ||
                       ||                ||
accept the connection  .                 ||
                       .                  X  execution of our java main
  connection accepted  |<----------------|   connect to the host
 init window/renderer  |                 |
       display frames  |<----------------|   send frames
                       |<----------------|
In addition, show the window only once the first frame is available to
avoid flickering (opening a black window for 100~200ms).
Note: the window and renderer are initialized after the connection is
accepted because they use the device information received from the
device.
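Regarding showing the window only once the first frame is available, a
minimal sketch with plain SDL calls (width and height come from the
device information):

    // create the window hidden...
    SDL_Window *window = SDL_CreateWindow("scrcpy",
                                          SDL_WINDOWPOS_UNDEFINED,
                                          SDL_WINDOWPOS_UNDEFINED,
                                          width, height,
                                          SDL_WINDOW_HIDDEN
                                              | SDL_WINDOW_RESIZABLE);

    // ...then, on the first decoded frame only:
    SDL_ShowWindow(window);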
SDLNet_TCP_Close() not only closes the socket, but also releases its
resources.
Therefore, we must not close the socket if another thread attempts to
read it.
For that purpose, move socket closing from server_stop() to
server_destroy().
Replace screen_update() by a higher-level screen_update_frame()
handling the whole frame update, so that scrcpy.c just calls it without
managing implementation details.
Expose frames_offer_decoded_frame() and frames_consume_rendered_frame()
so that callers are not exposed to the details of frame swapping
(between the decoding and rendering frames).
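A sketch of how such a pair of functions may hide the swap
(illustrative and simplified; synchronization and the "frame consumed"
flag are omitted):

    #include <libavutil/frame.h>

    struct frames {
        AVFrame *decoding_frame;   // written by the decoder
        AVFrame *rendering_frame;  // read by the renderer
    };

    void
    frames_offer_decoded_frame(struct frames *frames) {
        // the decoder completed decoding_frame: swap the two frames
        AVFrame *tmp = frames->decoding_frame;
        frames->decoding_frame = frames->rendering_frame;
        frames->rendering_frame = tmp;
    }

    const AVFrame *
    frames_consume_rendered_frame(struct frames *frames) {
        // the renderer retrieves the most recent complete frame
        return frames->rendering_frame;
    }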