Brian,
Worth noting that these tests don't emulate real-world endpoints and aren't a good read on true efficacy. The way newer AVs are put through the grinder and rated needs reform, which hopefully is coming soon.
When it comes to Webroot Business Endpoint Security, there is one test that shows some great results and might interest you for your small-business solutions. It takes an accurate look at true efficacy: the "Removal of further malicious components and remediation of critical system changes" test.
This demonstrates how well a product can remove the additional payloads of an infection and restore a system to its pre-infection state.
Webroot scored 85% link vs.
Symantec's score of 54% link
If anything, it never hurts to give Webroot a go with the trial and then draw conclusions based on your own experience: link
Good luck with your hunt for a solution.
To answer the question "Which tests are you referring to? And why aren't they a good measure? DW", I'm going to quote directly from the product's developer, because I can't explain it any better myself.
Hi - This is Mike Malloy with Webroot. I am responsible for product development. It's easy to be concerned when you see test results like the recent AV-Comparatives tests. We believe that these types of tests do not simulate a real user's experience with SecureAnywhere.
I feel like the parents whose child gets good grades in school but does poorly on tests. Tests like these were designed for the traditional, signature-based AV products and use samples provided by the vendors in the test. For example, in one recent test they put us on a PC with 985,000 pieces of malware which had been provided by some of the vendors. If your computer had nearly a million viruses on it, it would no longer be a computer! In the case of AV-C, I have asked my team to work more closely with them to understand their methodology and see if it could be made to more closely resemble what actual human users see. I respect AV-C and the individuals in it. I am concerned that these tests are not providing consumers the information they need to make an informed security decision. I also note that some other vendors have begun withdrawing from tests like these because they no longer represent (if they ever did) what actual users might see in terms of protection.
On the other hand, we're seeing stunning statistics in our customer satisfaction and overall efficacy. We survey a large group of customers every month. In our April survey of 958 customers, 96.6% said they were likely or highly likely to recommend WSA to their friends and family (while only 3 people said they were highly unlikely to.) Because of the Webroot cloud intelligence engine, we know exactly what we catch and exactly what we don't catch across our users and we're very proud about how well it's working so far. Seeing tests not accurately reflecting this is disheartening and a concern for us but as we learn more about how these tests work, we're starting to understand why our existing systems don't gel accurately with their methodology.
We are confident that you and the millions of other SecureAnywhere customers are getting the best protection in the security industry.
Mike Malloy, Webroot EVP Products and Strategy
* Source: www.community.webroot.com/t5/Webroot-SecureAnywhere-Complete/AV-Comparatives-April-results/td-p/4861#M480