Do Phone-Use Agents Respect Your Privacy?
April 1, 2026
Authors: Zhengyang Tang, Ke Ji, Xidong Wang, Zihan Ye, Xinyuan Wang, Yiduo Guo, Ziniu Li, Chenxin Li, Jingyuan Hu, Shunian Chen, Tongxu Luo, Jiaxi Bi, Zeyu Qin, Shaobo Wang, Xin Lai, Pengyuan Lyu, Junyi Li, Can Xu, Chengquan Zhang, Han Hu, Ming Yan, Benyou Wang
cs.AI
Abstract
We study whether phone-use agents respect privacy while completing benign mobile tasks. This question has remained hard to answer because privacy-compliant behavior is not operationalized for phone-use agents, and ordinary apps do not reveal exactly what data agents type into which form entries during execution. To make this question measurable, we introduce MyPhoneBench, a verifiable evaluation framework for privacy behavior in mobile agents. We operationalize privacy-respecting phone use as permissioned access, minimal disclosure, and user-controlled memory through a minimal privacy contract, iMy, and pair it with instrumented mock apps plus rule-based auditing that make unnecessary permission requests, deceptive re-disclosure, and unnecessary form filling observable and reproducible. Across five frontier models on 10 mobile apps and 300 tasks, we find that task success, privacy-compliant task completion, and later-session use of saved preferences are distinct capabilities, and no single model dominates all three. Evaluating success and privacy jointly reshuffles the model ordering relative to either metric alone. The most persistent failure mode across models is simple data minimization: agents still fill optional personal entries that the task does not require. These results show that privacy failures arise from over-helpful execution of benign tasks, and that success-only evaluation overestimates the deployment readiness of current phone-use agents. All code, mock apps, and agent trajectories are publicly available at https://github.com/tangzhy/MyPhoneBench.
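To make the "minimal disclosure" idea concrete, the sketch below shows what a rule-based audit of unnecessary form filling could look like. The `FormAction` record, the task-specification format, and the audit logic are illustrative assumptions for this sketch, not the paper's actual implementation; see the linked repository for MyPhoneBench's real auditing rules.

```python
# Hypothetical sketch of a rule-based "minimal disclosure" audit:
# flag any form entry the agent filled that the task did not require.
# Field names and the trace format are assumptions, not MyPhoneBench's API.
from dataclasses import dataclass


@dataclass
class FormAction:
    field: str      # form entry the agent filled
    value: str      # what the agent typed
    required: bool  # whether the app marks the field as required


def audit_minimal_disclosure(actions, task_required_fields):
    """Return the fields filled with non-empty values that the task
    specification did not require (potential over-disclosure)."""
    return [
        a.field
        for a in actions
        if a.field not in task_required_fields and a.value.strip()
    ]


# Example: a delivery task needs only the address, but the agent also
# volunteered a phone number and a birthday.
trace = [
    FormAction("address", "42 Main St", required=True),
    FormAction("phone", "555-0199", required=False),
    FormAction("birthday", "1990-01-01", required=False),
]
print(audit_minimal_disclosure(trace, {"address"}))
# → ['phone', 'birthday']
```

Because the mock apps instrument exactly which value lands in which entry, a check like this is deterministic and reproducible, which is what makes the abstract's claim of "verifiable" privacy evaluation possible.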