Top 7 PhantomJS Alternatives Every Developer Must Know
WebscrapingAPI on Oct 31 2022
Introduced to the world in January 2011, PhantomJS quickly changed how developers worked with websites.
Vitaly Slobodin was the developer and maintainer of PhantomJS. Sadly, the project was discontinued in April 2017 when he decided to step down. He cited several reasons for his decision, which we'll cover later in this blog.
Now that PhantomJS is no longer maintained, you need to know about its alternatives. We'll get to that, too. But first, you should understand what PhantomJS was all about.
What is PhantomJS?
PhantomJS was a headless browser generally used for web automation, i.e., automating manual tasks on the web.
Now, what is a headless browser?
A headless browser is a browser that doesn’t have a Graphical User Interface (GUI). Simply put, it is unlike Google Chrome, Safari, and Mozilla Firefox. It is controlled programmatically, without you having to open a web page you want to work on.
The reasons developers preferred headless browsers were:
- Less load on the system.
- Scraping data from websites.
- Unit testing.
Many people have wondered why PhantomJS had to be discontinued if it was this good. The answer lies in Vitaly Slobodin's farewell email.
In that email, he mentions that Chrome's headless mode is faster and more stable than PhantomJS. He also highlights how difficult it was to keep working on PhantomJS alone.
These are some of the key reasons he stepped down.
7 Fantastic PhantomJS Alternatives
Now that you know why PhantomJS died, it is time to learn about some of its alternatives, so you can keep using headless browsers. Moreover, they have developed a lot in the last five years and offer even more functionality.
Here is our list of the 7 fantastic PhantomJS alternatives you can start using from today:
- Headless Chrome
- WebScraping API

1. Headless Chrome
Headless Chrome is the number one alternative on our list because Vitaly Slobodin himself highlighted it.
This headless browser is being used by hundreds of thousands of developers regularly. The features and capabilities of PhantomJS are found in Headless Chrome.
We all know Google Chrome is at the forefront of web browsers. Many browsers, such as Opera and Vivaldi, as well as Google Chrome itself, were built on Chromium. For those who don't know, Chromium is an open-source browser project created by Google.
Headless Chrome launched around the same time PhantomJS was discontinued. It was first introduced as a part of Chrome in the 59th version. After that, every version of Chrome has built-in Headless Chrome. Presently, Chrome is running on its 105th version, so we know it has been a while since they’ve been experimenting and improving Headless Chrome.
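If you have Chrome installed, you can try headless mode straight from the command line. A minimal sketch (the binary name varies by platform: chrome, google-chrome, or chromium):

```shell
# Dump the rendered DOM of a page without opening a window
google-chrome --headless --disable-gpu --dump-dom https://example.com

# Capture a screenshot of the page
google-chrome --headless --disable-gpu --screenshot=page.png https://example.com
```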
- Supports lots of features.
- Uses less memory.
- Debugging is easy because it is a headless browser.
- Installation is relatively quick and easy.
- Better speed and stability.
- 24x7 support.
- Regular updates.
Headless Chrome is almost perfect, and many developers prefer it over the rest.

2. Selenium
Selenium was introduced to the world back in 2004, almost two decades ago. It is similar to PhantomJS in that it also automates web applications and helps test the various parts of a web page.
When you open the Selenium website, you see a green and white themed website with "Selenium automates browsers" written at the top. The website makes it clear from the start that the primary purpose of this browser is to automate.
When you scroll down a bit, you see three ways Selenium can help you. They are:
- Browser-based regression automation.
- Creating bug reproduction and automation scripts.
- Running tests on multiple machines simultaneously.
Selenium covers these three purposes through three different services: Selenium WebDriver, Selenium IDE, and Selenium Grid. Honestly, every developer will have different reasons for using headless browsers, and the website does an excellent job of highlighting them up front.
Mind you, Selenium comes with its own pros and cons.
- Automates browsers.
- Offers multiple services, each with its purpose.
- It's open-source, which means it is constantly improved.
- Setting up is easy.
- No dedicated support in case you need help.
- It doesn't support mobile applications.
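As a taste of what driving a browser with Selenium looks like, here is a minimal Ruby sketch using the selenium-webdriver gem and headless Chrome (it assumes Chrome and ChromeDriver are installed):

```ruby
require "selenium-webdriver"

# Configure Chrome to run without a visible window
options = Selenium::WebDriver::Chrome::Options.new
options.add_argument("--headless")

driver = Selenium::WebDriver.for(:chrome, options: options)
driver.navigate.to "https://example.com"
puts driver.title # the page title, printed without any GUI appearing
driver.quit
```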
3. CasperJS

CasperJS is another headless browser. Its primary purpose is to navigate, script, and test web pages. CasperJS is generally used for UI testing, while other headless browsers are used for unit testing. It automates tasks such as filling forms, clicking links, taking screenshots, and downloading resources.
- High-level third-party integration
- Learning how to use CasperJS is easy.
- Not for unit testing.
- At times, screenshots are not accurate.
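For flavor, a minimal CasperJS script that opens a page, prints its title, and takes a screenshot looks like this:

```javascript
var casper = require('casper').create();

casper.start('https://example.com/', function () {
  this.echo(this.getTitle());   // print the page title
  this.capture('example.png');  // save a screenshot of the page
});

casper.run();
```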
- Integration is easy, as it runs on Node.js.
- Adding it to your framework is also pretty easy.
- It is blazing fast.
- Lightweight. It puts a negligible load on your machine.
- Can't take screenshots
- Documentation isn't complete.
- No support is available.
- It doesn't load many sites.
5. Browsersync

Browsersync is a headless browser and, at the same time, not quite one. Let me explain: you can use it either way, testing web pages and extracting data from the command line, or with a GUI if you want visual assistance. Browsersync gets more than 2 million downloads a month. That's a significant number, and if so many developers trust it, the project must be doing something right. Big names like Google and Adobe also use Browsersync.
- It is swift and free.
- URLs are saved.
- Option to choose between Graphical User Interface (GUI) or Command Line (CL).
- It runs smoothly on Windows, Mac OS & Linux.
- Open-source, so it is constantly updated.
- It does not need a browser plugin.
- Flawlessly works on Desktop and Mobile devices.
- Setting Browsersync in Windows can be a bit challenging.
6. HtmlUnit

HtmlUnit is also said to work well with complex AJAX libraries, and it supports both the HTTP and HTTPS protocols.
- Free and easy set up.
- Handles complex libraries effectively.
- Testing can be done using HtmlUnit.
- Information can also be retrieved from websites.
- It also works on Android.
- It offers limited features, so it is not a good option for people who want many features.
7. WebScraping API

Most of the PhantomJS alternatives in today's blog can also be used to extract data from websites. While they do an average job of it, a tool like WebScraping API takes things to the next level.
WebScraping API is not just any web scraping tool. It is easily one of the best, offering a great deal for a starting price of $49 per month. You can choose the pricing plan that gives you the best ROI.
Generally, the more you pay for web scraping tools, the more features and API calls you get. Many competing tools differ only slightly in features, yet charge almost double what WebScraping API does.
10,000+ established companies rely on this tool, getting everything done without distracting busy business owners from their primary goal. Deloitte, Perrigo, and InfraWare are just some of the many names that choose WebScraping API as their go-to tool for extracting data that adds value.
The way WebScraping API works is simple. It collects HTML from any web page using a simple API and displays it to you in an easy-to-understand manner because we know that not everyone is an expert at deciphering complex data.
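As an illustration, a request typically boils down to a single API call; the endpoint and parameter names below are an assumption, so check the official docs before relying on them:

```shell
# Hypothetical request shape: pass your API key and the target URL,
# and the API responds with the page's HTML.
curl "https://api.webscrapingapi.com/v1?api_key=YOUR_API_KEY&url=https://example.com"
```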
Many web scraper tools often get the job done initially but then get blocked from the website. This problem is taken care of when you choose WebScraping API. IP blocks and CAPTCHAs become a thing of the past when this fantastic tool is at your disposal.
- 99.99% uptime means you never have to wait to extract essential data from the website of your choice.
- Enterprise customers benefit significantly from Geotargeting, as they access more than 195 locations.
- You get constant support from the WebScrapingAPI team, meaning you never have to worry about any issues.
- Any business size can benefit from the four different plans.
- We couldn't pick out a single disadvantage of using WebScraping API.

WebScraping API is our top alternative
Now that you've read the blog, we know you may still be confused, because choosing among so many good options is not easy. But don't worry: we have picked the best option, so you don't have to spend more of your time and money.
Our WebScrapingAPI tool can help you retrieve data from a web page without any fuss. You can quickly and easily obtain raw HTML from any online page using our user-friendly API.
- Google Search Engine Results Scraper
You can scrape SERPs using WebScrapingAPI to find information on advertisements, organic results, maps, photos, shopping data, reviews, knowledge graphs, and more. Search results can also be converted into structured JSON, CSV, or HTML data. This makes it simple to obtain the data you require, so you can concentrate on using it to improve your business.
For companies and people who want to get the most out of their data, WebScrapingAPI is a great tool. Its user-friendly interface and robust functionality make it ideal for extracting data from SERPs.
- Amazon Product Scraper
WebScrapingAPI is the ideal tool for anyone looking to gather Amazon product data. With it, you can obtain complete product details in JSON, CSV, or HTML format from all categories and countries, including reviews, prices, descriptions, ASIN data, best sellers, new releases, and deals.
- 360-degree web scraping: All web scraping tasks and use cases, such as market analysis, price monitoring, information on transportation expenses, real estate, financial data, and many more, are fully supported by the Web Scraper API.
- Getting Formatted Data Out: With the help of our custom extraction rule capabilities, you can get structured JSON data based on your individual needs with just one API call. Having rapid data flow will provide your business a competitive advantage.
- Security: To find potentially dangerous information or compromised data, automated data extraction flows can be built from any website.
- Data images: Incorporate high-resolution screenshots of the pages or sections of the target website in your tools or applications. The Web Scraper API may provide screenshots, structured JSON, and raw HTML.
- Scaling for businesses: We cut unnecessary costs by sparing you the hardware and software infrastructure. Our cloud infrastructure makes collecting precise data at a big scale simple.
Depending on your needs, WebScrapingAPI offers a variety of pricing options. The starter plan begins at $49 per month, while the enterprise plan, which includes custom volume API credits, the Amazon search API, the product extraction API, priority email support, and a dedicated account manager, starts at $299 per month.
When compared to other options, WebScrapingAPI wins. Why? Because it is packed with features that people really use. It is a platform that automates the extraction of both structured and unstructured data from a web page, and it can be crucial for data management.
WebScrapingAPI provides mass web crawling, clean code, 99.99% uptime, the latest architecture to boost performance, a range of value-loaded plans, and the trust of 10,000+ businesses globally.
Alternatives to PhantomJS
- Selenium: Selenium automates browsers. That's it! What you do with that power is entirely up to you. Primarily, it is for automating web applications for testing purposes, but it is certainly not limited to just that. Boring web-based administration tasks can (and should!) also be automated. ...
- Protractor: Protractor is an end-to-end test framework for Angular and AngularJS applications. Protractor runs tests against your application running in a real browser, interacting with it as a user would. ...
- wkhtmltopdf: wkhtmltopdf and wkhtmltoimage are command line tools to render HTML into PDF and various image formats using the Qt WebKit rendering engine. They run entirely "headless" and do not require a display or display service. ...
- Puppeteer: Puppeteer is a Node library which provides a high-level API to control headless Chrome over the DevTools Protocol. It can also be configured to use full (non-headless) Chrome. ...
- Node.js: Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. ...
Replacing PhantomJS with headless Chrome
By Navaneeth PK
on January 22, 2019
We recently replaced PhantomJS with ChromeDriver for system tests in a project, since PhantomJS is no longer maintained. Many modern browser features required workarounds and hacks to work on PhantomJS. For example, the Element.trigger('click') method does not actually click an element but simulates a DOM click event. These workarounds meant that code was not being tested as it would behave in a real production environment.
ChromeDriver Installation & Configuration
ChromeDriver is needed to use Chrome as the browser for system tests. It can be installed on macOS using Homebrew.
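For example, a typical install looks like this (newer Homebrew versions package ChromeDriver as a cask):

```shell
brew install chromedriver          # older Homebrew versions
brew install --cask chromedriver   # newer Homebrew versions
```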
Remove poltergeist from the Gemfile and add selenium-webdriver.
Configure Capybara to use ChromeDriver by adding the following snippet.
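The original snippet isn't reproduced here, but a typical Capybara/ChromeDriver configuration along these lines looks like:

```ruby
# spec/rails_helper.rb (or spec_helper.rb)
Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new(args: %w[headless disable-gpu])
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

Capybara.javascript_driver = :headless_chrome
```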
The above code runs tests in headless mode by default. For debugging purposes, we sometimes want to see the actual browser; that can easily be done by executing the following command.
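One common convention (an assumption here, not necessarily what this project used) is to pick the driver from an environment variable:

```ruby
# Run as: HEADLESS=false bundle exec rspec  to watch the real browser
Capybara.javascript_driver =
  ENV["HEADLESS"] == "false" ? :chrome : :headless_chrome
```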
After switching from PhantomJS to headless Chrome, we ran into many test failures due to differences in how Capybara's API behaves when driven by ChromeDriver. Here are solutions to some of the issues we faced.
1. Element.trigger('click') does not exist
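The fix is to use Capybara's standard click method instead; the selector below is illustrative:

```ruby
# Before: Poltergeist-only API, raises NoMethodError under Selenium
# find(".submit-button").trigger("click")

# After: standard Capybara API, performs a real click via ChromeDriver
find(".submit-button").click

# If you truly need to fire a DOM click event, fall back to JavaScript:
page.execute_script("document.querySelector('.submit-button').click()")
```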
2. Element is not visible to click
When we switched to Element.click, some tests were failing because the element was not visible, as it was behind another element. The easiest fix for these failing tests was Element.send_keys(:return), but the purpose of the test is to simulate a real user clicking the element, so we had to make sure the element was visible. We fixed the UI issues that prevented the element from being visible.
3. Setting the value of hidden fields does not work
When we try to set the value of a hidden input field using the set method of an element, Capybara throws an element not interactable error.
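A common workaround (the element ID here is illustrative) is to set the value with JavaScript instead:

```ruby
# Raises Selenium::WebDriver::Error::ElementNotInteractableError:
# find("#auth-token", visible: false).set("abc123")

# Works: bypass the interactability check by setting the value directly
page.execute_script("document.getElementById('auth-token').value = 'abc123'")
```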
4. Element.visible? returns false if the element is empty
The ignore_hidden_elements option of Capybara is false by default. When ignore_hidden_elements is true, Capybara only finds elements that are visible on the page. Let's say we have <div class="empty-element"></div> on our page. find(".empty-element").visible? returns false because Selenium considers empty elements invisible. This issue can be resolved by using visible: :any.
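In code, that looks like:

```ruby
# Selenium reports the empty element as not displayed
find(".empty-element").visible?

# Passing visible: :any matches the element regardless of visibility
find(".empty-element", visible: :any)
```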
Replacing PhantomJS with Headless Chrome in your Selenium testing stack
PhantomJS had one great advantage: it could burrow through authenticated proxies. That made it possible to combine it with WonderProxy and see the screen rendering in Saudi Arabia, the Netherlands, Montreal, or 252 other cities in 87 other countries.
Strictly speaking, PhantomJS is the browser simulator, using WebDriver to drive the browser. Today that functionality exists in Headless Chrome, a faster, better maintained, and higher fidelity browser engine. The only advantage PhantomJS still has is its native support for authenticated proxies.
In this article, we'll replace PhantomJS with Headless Chrome. The examples will be in Node.js, showing how to install Selenium with Chrome and its dependencies. Projects in other languages that run WebDriver will need to install WebDriver for Chrome in their own language, then pay close attention to the configuration sections, writing code in the local language. WonderProxy is developing Selenium HOWTOs by language; the Ruby-Selenium example is already up today.
Set up the Node.js project
The script below shows the version of Node.js. The selenium-webdriver npm package supports the stable and LTS releases of Node.js, so as of September 2021, that's versions 14.17 and 16.9. When npm init runs, select "Mocha" as the test runner, as the examples are in Mocha. If you're adding Selenium to an existing Node.js project, just continue testing with whatever you were using before, substituting out the driver name and adding the proxy code. In the code below, the npx gitignore command creates a .gitignore file for temporary files, while npm i -D installs WebDriver, the test runner, the assertion framework, and proxy-chain. proxy-chain is the package we'll use to support the authenticated proxies that WonderProxy provides, since Headless Chrome doesn't support authentication itself. (Do not cut and paste this code; run each command separately, as npm init will ask for input and may take the call to npx as that input.)
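A sketch of that setup follows (chai as the assertion framework is an assumption; substitute your own):

```shell
node --version        # confirm a supported Node.js release (14.17+ or 16.9+)
npm init              # choose Mocha when prompted for a test command
npx gitignore node    # generate a .gitignore for Node.js projects
npm i -D selenium-webdriver mocha chai proxy-chain
```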
Configure WonderProxy credentials
Before getting started, you'll need a WonderProxy account, which is available as a free trial . Once you have a username and password, store those in environment variables.
Your tests will reference those environment variables when they connect to WonderProxy servers. The easiest approach, of course, would be to hard-code the username and password right into the source code. We do not recommend this, since it gives any person or process that can read your code access to your credentials. Most Continuous Integration tools, such as CircleCI, have methods to track environment variables and keep them secure.
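For example, in your shell profile (the variable names are illustrative; use whatever names your tests read):

```shell
export WONDERPROXY_USER="your-username"
export WONDERPROXY_PASS="your-proxy-token"
```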
Install Headless Chrome
ChromeDriver is a tool that accepts connections running over the W3C WebDriver Protocol and runs a local browser that follows those commands. This is incredibly powerful, as it makes moving to the cloud or another server as easy as changing the connection string from localhost to a remote machine. Check your browser's version with Chrome → About, then download the appropriate ChromeDriver from the download page. Install it in a location known to your PATH environment variable, or put it in the same place the test runner will run from.
Note that ChromeDriver is an executable file you downloaded from the internet. As of September 2021, the file is unsigned, and macOS may refuse to let you run it. This support post describes how to approve the program on macOS.
Now we're ready to change the code.
Replace WebDriver and configure proxy
First, import the Chrome WebDriver, which is referenced on line 6.
Second, configure Selenium to use Chrome with the forBrowser() method.
Third, provide the options to run through a proxy. The function runWithProxiedDriver() does this. The example below uses ChromeDriver's Options class, which is available in every implementation, including Java, Python, and Ruby. Earlier we did an npm install for proxy-chain; that is the technology used to push Chrome through the proxy.
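The article's exact code isn't shown here, but a sketch consistent with that description might look like the following (the function signature and proxy port are assumptions):

```javascript
const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');
const proxyChain = require('proxy-chain');

// Wrap the authenticated WonderProxy endpoint in a local, unauthenticated
// proxy (Headless Chrome can't speak to authenticated proxies directly),
// then point Chrome at that local proxy.
async function runWithProxiedDriver(proxyHost, callback) {
  const upstream =
    `http://${process.env.WONDERPROXY_USER}:${process.env.WONDERPROXY_PASS}@${proxyHost}:8080`;
  const localProxy = await proxyChain.anonymizeProxy(upstream);

  const options = new chrome.Options().addArguments(
    '--headless',
    `--proxy-server=${localProxy}`,
  );

  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();

  try {
    await callback(driver);
  } finally {
    await driver.quit();
    await proxyChain.closeAnonymizedProxy(localProxy, true);
  }
}
```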
Everything just works!
Here's the sample code running as a test from the command line.
This example worked through replacing PhantomJS/Selenium with Headless Chrome/Selenium, but it also provided a real working example in Node. Whether you are converting old code or starting new, WonderProxy, Selenium, and Headless Chrome have you covered.
Written along with David Hoppe.
The managing director of Excelon Development, Matt Heusser writes and consults on software delivery with a focus on quality.
Posted on Feb 23, 2020
Moving your Rails test suite from PhantomJS to Headless Chrome
This post was originally written in 2018. Some of the details may have changed since then, but I didn't want this information to go to waste, so take from it what you will.
That's why it was super exciting to hear that Chrome was shipping a fully-functional headless mode, because as great as PhantomJS was, it was severely lacking in many modern browser features and standards*. With Chrome we could now test to more accurately reflect what our users were actually seeing, and not have to worry about polyfills and vendor prefixes just to satisfy the test suite.
But the fact that you're here probably means you already knew that. So I'm going to break down for you how this Rails app migrated its test suite from PhantomJS to headless Chrome.
* But we still greatly appreciate the effort and dedication that went into maintaining PhantomJS. Thank you, Vitaly and others!
Before you jump in I'd recommend getting familiar with Chrome from the command line. Here's a good primer.
It goes without saying that, like any app upgrade, you should have a green test suite before starting. If you know you've got some particularly intricate tests, make a note to ensure they're still behaving the way they should afterward.
If you're using CI, ensure it supports ChromeDriver. This Rails app uses Semaphore, which provides support out of the box.
You'll also need ChromeDriver on your machine to run tests locally. Installation comes in a couple flavours:
Finally, your Rails app is going to need Selenium and the ChromeDriver helper. Update your Gemfile's test group accordingly:
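A minimal sketch of that test group (gem choice reflects the tooling common at the time; chromedriver-helper was later superseded by webdrivers):

```ruby
# Gemfile
group :test do
  gem "capybara"
  gem "selenium-webdriver"
  gem "chromedriver-helper"
end
```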
Getting behind the wheel
Now that you have the prerequisites you can start to swap out your Poltergeist drivers with Selenium ones. Selenium with Chrome accepts quite a number of arguments, so I'll break down a few key ones:
- headless - this is an obvious one, in that it enables Chrome to operate in headless mode. However, since we have a full-blown Chrome browser at our disposal it might be wise to create a driver that doesn't include this argument, for cases where you want to watch things happen.
- window-size - this might also seem obvious, but I would always advise explicitly setting the window dimensions. Otherwise you might do what I did and assume it defaulted to some large desktop size and spend hours wondering why certain elements weren't visible.
- blink-settings=imagesEnabled=false - indeed, this takes us into Chrome's rendering engine, Blink, and disables image rendering, which helps with page speed.
- disable-gpu - while this argument was once required across the board, it is now only required to fix bugs present on Windows systems.
With these in mind, let's register a driver:
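Here's a sketch of such a registration — the driver name, the argument list, and the loggingPrefs capability (used later for reading console logs) are illustrative, not the post's original code:

```ruby
# A shared set of arguments that every registered driver inherits:
STANDARD_CHROME_ARGS = %w[
  window-size=1920,1080
  blink-settings=imagesEnabled=false
  disable-gpu
  remote-debugging-port=9222
].freeze

Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new(
    args: STANDARD_CHROME_ARGS + %w[headless]
  )
  # loggingPrefs lets us read the browser console output later on
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    loggingPrefs: { browser: "ALL" }
  )
  Capybara::Selenium::Driver.new(
    app,
    browser: :chrome,
    options: options,
    desired_capabilities: capabilities
  )
end
```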
You'll notice that we've started with an array of standard arguments, and then added a few more inside the driver registration. That's because our app actually registers a handful of other drivers and we want them all to inherit the same standard set of options. By no means do you need to use any or all of these arguments; go ahead and experiment with what works best for your setup!
I'll note one other neat feature Chrome introduces: device emulation. Previously if you wanted to imitate a mobile device, you might do something like:
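A sketch of the old-school approach — faking the device by hand with a user agent string and window dimensions (all values illustrative):

```ruby
Capybara.register_driver :fake_iphone do |app|
  options = Selenium::WebDriver::Chrome::Options.new(
    args: [
      "headless",
      "window-size=375,667",
      "user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X) AppleWebKit/603.1.30"
    ]
  )
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end
```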
Now you can simply register a driver using the add_emulation option:
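A sketch using Selenium's Chrome options; the device name must match an entry from Chrome's emulated-device list:

```ruby
Capybara.register_driver :iphone do |app|
  options = Selenium::WebDriver::Chrome::Options.new(args: %w[headless])
  options.add_emulation(device_name: "iPhone 6")
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end
```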
Here's the [gross looking] full list of devices.
You can even get more specific about the characteristics of your emulation, with arguments like pixelRatio and touch. Read more about it here.
Lastly, make sure your tests use the drivers:
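For instance, in your test helper (assuming a registered driver named :headless_chrome as above):

```ruby
# e.g. rails_helper.rb
Capybara.default_driver    = :headless_chrome
Capybara.javascript_driver = :headless_chrome
```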
What's different in Chrome?
el.trigger does not exist
We've all been there. Your test can't find an element that you're trying to .click.
Maybe it's something funky with your CSS, or maybe PhantomJS isn't rendering it correctly. A common workaround was to trigger(:click). Not anymore! Selenium doesn't implement the .trigger method.
Thankfully, since Chrome is much more up to date and closer to rendering what the actual experience will look like, you should be able to reliably .click your elements. If you're struggling to figure out why something isn't receiving an event this is a good time to bust out your GUI Chrome driver to see exactly what's going on.
You'll need to accept alerts yourself
While before you could rely on PhantomJS to automatically accept confirm dialogues, with Chrome you need to do it yourself. It's as simple as:
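Two ways to do it with Capybara's Selenium driver (button label is illustrative):

```ruby
# Capybara's modal API wraps Selenium's alert handling:
accept_confirm do
  click_button "Delete account"
end

# Or drive Selenium directly:
page.driver.browser.switch_to.alert.accept
```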
Resizing the window has changed
If you're resizing the window you're going to need to update how it's called:
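A sketch of the change (dimensions illustrative):

```ruby
# Poltergeist style (no longer works under Selenium):
# page.driver.resize(1024, 768)

# Selenium/Chrome style:
page.driver.browser.manage.window.resize_to(1024, 768)
```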
Keyboard events have changed
If you're sending keyboard events to an element using el.native.send_keys , the main differences in Chrome are that
- The element you're sending keys to needs to be focusable
- If you're sending a symbol, things are slightly but annoyingly different. For example, :Right is now :right
fill_in doesn't fire a change event
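Two common workarounds, sketched here with Capybara (the field name and element id are illustrative):

```ruby
# Workaround 1: blur the field after filling, so Chrome fires `change`:
fill_in "Email", with: "user@example.com"
find("#email").send_keys(:tab)

# Workaround 2: dispatch the event yourself:
fill_in "Email", with: "user@example.com"
page.execute_script(<<~JS)
  document.getElementById('email').dispatchEvent(new Event('change'))
JS
```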
Workarounds like these are ripe to be turned into helper methods as well.
You'll need to dig deeper for logs
PhantomJS would automatically print console logs to the terminal. With Chrome you're going to need to do a little more work.
First, when setting up your driver make sure you're setting up the loggingPrefs capability like we did above.
Then, if you need to see a console output you can access it like so:
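A sketch of reading the console output (requires the loggingPrefs capability on the driver):

```ruby
page.driver.browser.manage.logs.get(:browser).each do |entry|
  puts "#{entry.level}: #{entry.message}"
end
```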
But wait! There's more:
We're using Chrome, so let's take full advantage of it and just use DevTools!
You may have seen the driver argument --remote-debugging-port=9222 above. With that in place, any time your tests are running you can head over to http://localhost:9222 and inspect elements, view the console, and do pretty much everything else you would expect from DevTools. 😍
- Ever used save_and_open_page? The cool kids are just using the GUI Chrome driver. If you need to pause execution, just throw a byebug in there and you'll then be able to play around in the Chrome browser.
- If you need to clear your browser cache on the fly, here's a handy method:
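A hypothetical helper along those lines — clearing cookies is straightforward; a full cache clear typically goes through Chrome's DevTools protocol, depending on your selenium-webdriver version:

```ruby
def clear_browser_data
  page.driver.browser.manage.delete_all_cookies
  # On newer selenium-webdriver versions you may also be able to do:
  # page.driver.browser.execute_cdp("Network.clearBrowserCache")
end
```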
Your mileage may vary
While these steps should get you most of the way there, I'm willing to bet that you're still going to have some test failures. That's okay. Spend some time with it, because it's absolutely worth it.
I'll note that most of what I uncovered in migrating this Rails app's test suite to Chrome was cobbled together from a variety of resources, so I thank these pioneers greatly, and I highly recommend checking out their posts for further reading:
- How GitLab switched to Headless Chrome for testing
- Gist: translating old capybara selenium/chrome preferences and switches
- Moving From PhantomJS to Headless Chrome in Rails: Fewer Hacks, Simpler Debugging and More Consistent Tests
- Switching to Headless Chrome for Rails System Tests
- Class: Selenium::WebDriver::Chrome::Options
I hope that this breakdown helps you on your way to better client-side testing, because it's certainly helped us. If you've got tips or feedback based on your own experience, please do leave a comment below.
PhantomJS is a headless browser that works hand in hand with Selenium to help developers more efficiently test their sites and apps.
Alternatives to PhantomJS
iMacros allow you to record the most tedious and common actions you take on your browser and automate them to simplify the routine of your life.
Selenium automates browsers, saving developers and designers painstaking time and money when they're looking to test out the compatibility of their websites.
A new way of writing native applications using web technologies: HTML5, CSS3, and WebGL.
Built on the top of Selenium and Appium, Katalon Studio is a free and powerful automated testing tool for web testing, mobile testing, and API testing.
CloudQA offers web automation tools and an automated web application monitoring solution, positioning itself as a QA automation tool that improves on other application testing tools.
Ghostlab allows you to test out a newly developed website on a variety of browsers and mobile devices at the same time. To get started, simply drag the web address to the Ghostlab system and press ...
We have 1 review for PhantomJS. The average overall rating is 4.0 / 5 stars.
Pros: High level of compatibility with a number of debugging tools; supported by an enthusiastic development community.
Cons: Requires an exhaustive setup process; browser sometimes shuts down unexpectedly.
About This Article
This page was composed and published by Alternative.me. It was created at 2018-04-28 19:12:28 and last edited at 2020-03-06 07:51:06. This page has been viewed 8332 times.
PhantomJS - Scriptable Headless WebKit
Important: PhantomJS development is suspended until further notice (see #15344 for more details).
- Headless web testing. Lightning-fast testing without the browser is now possible!
- Page automation. Access and manipulate web pages with the standard DOM API, or with usual libraries like jQuery.
- Screen capture. Programmatically capture web contents, including CSS, SVG and Canvas. Build server-side web graphics apps, from a screenshot service to a vector chart rasterizer.
- Network monitoring. Automate performance analysis, track page loading and export as standard HAR format.
- Multiplatform, available on major operating systems: Windows, Mac OS X, Linux, and other Unices.
- Pure headless (no X11) on Linux, ideal for continuous integration systems. Also runs on Amazon EC2, Heroku, and Iron.io.
- Easy to install: Download, unpack, and start having fun in just 5 minutes.
- Explore the complete documentation.
- Read tons of user articles on using PhantomJS.
- Join the mailing-list and discuss with other PhantomJS fans.
PhantomJS is free software/open source, and is distributed under the BSD license. It contains third-party code; see the included third-party.txt file for the license information on third-party code.
PhantomJS is created and maintained by @ariyahidayat, with the help of many contributors.
PhantomJS is dead, long live headless browsers
In April 2017, Vitaly Slobodin announced that he's stepping down as developer and maintainer of PhantomJS, the headless WebKit browser. This is mainly because Google introduced Headless Chrome with Chrome 59, and since version 55, Firefox also provides a headless mode.
There are several reasons to favor headless Chrome/Firefox over PhantomJS:
- They are real browsers with broad feature support (PhantomJS uses a very old version of WebKit – and Chrome has meanwhile switched to Blink anyway)
- They are faster and more stable (PhantomJS has a lot of open issues)
- They use less memory
- They can be started non-headless, which allows easier debugging
- No more goofy PhantomJS binary installation with NPM
In the next sections I’m going to suggest a few alternatives to a PhantomJS setup and elaborate on their advantages and disadvantages.
Alternative 1: Don’t use a browser at all
Many unit tests don't need a browser at all: they can run directly in Node against a simulated DOM (jsdom is a common choice). The advantage is that these tests run way faster and can be executed completely within Node, which also means no special setup on the CI server is needed. The downside is that they are not executed in a real browser and you have to mock browser APIs. Additionally, if you have end-to-end tests, you are going to need a real browser setup anyway.
Alternative 2: Use headless Chrome (or Firefox)
In a more conventional setup with Karma, the switch from PhantomJS to Chrome is quite easy. Instead of the karma-phantomjs-launcher, you install the karma-chrome-launcher and configure Karma accordingly in your karma.conf.js:
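A minimal sketch of such a karma.conf.js (framework and file patterns are illustrative):

```javascript
// karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ["jasmine"],
    files: ["src/**/*.spec.js"],
    // was: browsers: ["PhantomJS"] with karma-phantomjs-launcher
    browsers: ["Chrome"],
  });
};
```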
This will open a Chrome window and execute the tests within the browser. Chances are, you are already using this setup for local debugging.
The karma-chrome-launcher also supports a headless preset which makes working with Headless Chrome dead simple. You only have to change the preset:
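Only the browsers entry changes:

```javascript
// karma.conf.js
module.exports = function (config) {
  config.set({
    browsers: ["ChromeHeadless"],
  });
};
```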
The launcher assumes that the Chrome binary is available on the system (if in an exotic location, you can provide a CHROME_BIN environment variable). The launcher supports Chromium as well with the Chromium and ChromiumHeadless presets (for the latter, make sure you have version >= 2.2.0).
So far so good, but what about running the tests on a CI server? For Travis, there is a Chrome addon that can be included. And Jenkins? You probably don't want to install Chrome/Chromium (and its dependencies) on every slave. Furthermore, you cannot just install Chrome/Chromium via NPM 1 or download and unpack it 2 since you'd still need to install all the libraries it is dynamically linked to.
1 Yes, there are some shady packages you shouldn't trust.
2 Although puppeteer does exactly this.
Alternative 3: Use a cloud service like Sauce Labs
With the karma-sauce-launcher, running tests with various browsers is easy (locally as well as on the CI server). You configure custom launchers for each browser type and toss in the connection credentials as environment variables. Et voilà.
Sauce Labs is a paid service.
Alternative 4: Launch Chrome in a Docker container
A very naive approach is to run Chrome in a Docker container. For this, we create a Dockerfile that installs Chromium and exposes its remote debugging port:
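A sketch of such a Dockerfile (base image and package names are assumptions):

```dockerfile
FROM alpine:3.6
RUN apk add --no-cache chromium
EXPOSE 9222
ENTRYPOINT ["chromium-browser", \
  "--headless", \
  "--disable-gpu", \
  "--no-sandbox", \
  "--remote-debugging-address=0.0.0.0", \
  "--remote-debugging-port=9222"]
```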
You can then build this image and start the container. I’ve created a script that does this, taking a URL as argument:
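The script might look like this (name and image tag are hypothetical):

```shell
#!/bin/sh
# chromium.sh — build the image, then open the given URL in the container
set -e
URL="$1"
docker build -t headless-chromium .
docker run --rm -p 9222:9222 headless-chromium "$URL"
```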
By using the karma-script-launcher , we can configure Karma to use this script to start Chromium. It then executes the tests with Chromium running in a Docker container:
While it is pretty promising to be able to use the same image with the exact same browser version locally, there are some issues with this method:
- Your test setup has to know about the Docker setup and has to be adapted accordingly
- On the CI server, Docker has to be installed and it must be allowed to do a docker build and docker run within the environment of the job.
- How do you ensure the image is rebuilt regularly to update to new browser versions?
- How do you handle concurrent test jobs (container name, debugging port)?
- How do you clean up containers?
Alternative 5: Dynamic Jenkins slave with the Docker Slave plugin
So when adopting Docker, why not go all the way and manage the whole Jenkins slave with Docker? This is possible with the Docker Slaves plugin. The plugin enables you to set up build agents using Docker containers by placing a Dockerfile in your source repository and setting up the job to use it (any image is supported). You can also define side containers (for the database etc.), similar to docker-compose.
The advantage of this option is that your frontend test/build setup has to know nothing about Docker.
Alternative 6: Dynamic Jenkins slave on OpenShift
When working with a Kubernetes/OpenShift cluster, the Jenkins Kubernetes plugin is an interesting option.
OpenShift offers a bunch of preconfigured images that work with the Kubernetes plugin (e.g. openshift/jenkins-slave-base-centos7). You can use them as a base image to build an image containing Chromium. Then create an OpenShift build from your Dockerfile with the oc new-build command.
Furthermore, a new pod template has to be created (Manage Jenkins > Cloud > Kubernetes), where the URL to the Docker image(-stream) is configured. The pod configuration options are described here.
Now create a Jenkins (Multi-)Pipeline project for your Git repository and configure the label of the template you defined above in the project's Jenkinsfile:
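A minimal sketch of such a Jenkinsfile, where `chromium` stands in for whatever label you gave the pod template:

```groovy
pipeline {
  agent { label 'chromium' }
  stages {
    stage('test') {
      steps {
        sh 'npm ci'
        sh 'npm test'
      }
    }
  }
}
```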
What about artifacts? They have to be archived to survive a pod shutdown. Jenkins plugins like JUnit or Cobertura already pull the concerned files out of the container and copy them onto the Jenkins master. Any other artifacts can be archived with archiveArtifacts.
As you may have noticed, the custom ChromiumHeadlessNoSandbox preset is used in this example. This is because Chrome's sandboxing feature doesn't work in a Docker container as-is. For our testing context we can live with disabling the sandbox with a custom launcher in karma.conf.js:
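A sketch of that custom launcher:

```javascript
// karma.conf.js
module.exports = function (config) {
  config.set({
    browsers: ["ChromiumHeadlessNoSandbox"],
    customLaunchers: {
      ChromiumHeadlessNoSandbox: {
        base: "ChromiumHeadless",
        flags: ["--no-sandbox"],
      },
    },
  });
};
```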
Let’s run the job! When analyzing the output, we can observe that the tests are executed in a container using headless Chromium:
Last but not least, the browser has to be kept up-to-date. This can be achieved by periodically rebuilding the image with another Jenkins job like this:
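One way to sketch such a maintenance job — the build name `chromium-slave` and the weekly schedule are assumptions:

```groovy
pipeline {
  agent any
  triggers { cron('H H * * 0') } // rebuild roughly once a week
  stages {
    stage('rebuild chromium image') {
      steps {
        sh 'oc start-build chromium-slave --follow'
      }
    }
  }
}
```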
PhantomJS is a thing of the past, but the good news is that the headless modes of Chrome and Firefox are compelling alternatives, although the overall complexity may rise, especially when Docker comes into play.
Please contact us if you have questions regarding a similar scenario.
What are your experiences on the journey replacing PhantomJS?
Image credit: „Valparaíso Puerto“ by Mathis Hofer, 2010, CC BY-SA 3.0
This package has been deprecated
An NPM installer for PhantomJS, headless WebKit with a JS API.
Pre-2.0, this package was published to NPM as phantomjs. We changed the name to phantomjs-prebuilt at the request of the PhantomJS team.
Please update your package references from phantomjs to phantomjs-prebuilt
Building and Installing
Or grab the source and run the install script yourself.
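The standard route is the npm invocation; the source-checkout step below reflects the package's install script and is included as an assumption:

```shell
npm install phantomjs

# or, from a source checkout:
node install.js
```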
What this installer is really doing is just grabbing a particular "blessed" (by this module) version of Phantom. As new versions of Phantom are released and vetted, this module will be updated accordingly.
And npm will install a link to the binary in node_modules/.bin as it is wont to do.
Running via node
The package exports a path string that contains the path to the phantomjs binary/executable.
Below is an example of using this package via node.
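A sketch of the documented pattern — the module exports the binary path, which you then run in a child process (the script name is illustrative):

```javascript
var path = require("path");
var childProcess = require("child_process");
var phantomjs = require("phantomjs"); // "phantomjs-prebuilt" after the rename
var binPath = phantomjs.path;

// Arguments for the phantom binary: a standalone PhantomJS script to run
var childArgs = [path.join(__dirname, "phantomjs-script.js")];

childProcess.execFile(binPath, childArgs, function (err, stdout, stderr) {
  if (err) throw err;
  console.log(stdout);
});
```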
The major and minor number tracks the version of PhantomJS that will be installed. The patch number is incremented when there is either an installer update or a patch build of the phantom binary.
Deciding Where To Get PhantomJS
By default, this package will download phantomjs from our releases. This should work fine for most people.
Downloading from a custom URL
If github is down, or the Great Firewall is blocking github, you may need to use a different download mirror. To set a mirror, set the npm config property phantomjs_cdnurl.
Alternatives include https://bitbucket.org/ariya/phantomjs/downloads (the official download site) and http://cnpmjs.org/downloads.
Or add the property to your .npmrc file (https://www.npmjs.org/doc/files/npmrc.html):
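For example (mirror URL is one of the alternatives mentioned above):

```
phantomjs_cdnurl=https://bitbucket.org/ariya/phantomjs/downloads
```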
Another option is to use the environment variable PHANTOMJS_CDNURL.
Using PhantomJS from disk
If you plan to install phantomjs many times on a single machine, you can install the phantomjs binary on PATH. The installer will automatically detect and use that for non-global installs.
PhantomJS needs to be compiled separately for each platform. This installer finds a prebuilt binary for your operating system, and downloads it.
If you check your dependencies into git, and work on a cross-platform team, then you need to tell NPM to rebuild any platform-specific dependencies. Run
as part of your build process. This problem is not specific to PhantomJS, and this solution will work for any NodeJS package with native or platform-specific code.
If you know in advance that you want to install PhantomJS for a specific architecture, you can set the environment variables: PHANTOMJS_PLATFORM (to set target platform) and PHANTOMJS_ARCH (to set target arch), where platform and arch are valid values for process.platform and process.arch .
A Note on PhantomJS
PhantomJS is not a library for NodeJS. It's a separate environment and code written for node is unlikely to be compatible. In particular PhantomJS does not expose a Common JS package loader.
This is an NPM wrapper and can be used to conveniently make Phantom available. It is not a Node.js wrapper.
I have had reasonable experiences writing standalone Phantom scripts which I then drive from within a node program by spawning phantom in a child process.
Read the PhantomJS FAQ for more details: http://phantomjs.org/faq.html
An extra note on Linux usage, from the PhantomJS download page:
There is no requirement to install Qt, WebKit, or any other libraries. It however still relies on Fontconfig (the package fontconfig or libfontconfig, depending on the distribution).
Installation fails with spawn ENOENT
This is NPM's way of telling you that it was not able to start a process. It usually means:
- node is not on your PATH, or otherwise not correctly installed.
- tar is not on your PATH. This package expects tar on your PATH on Linux-based platforms.
Check your specific error message for more information.
Installation fails with Error: EPERM or operation not permitted or permission denied
This error means that NPM was not able to install phantomjs to the file system. There are three major reasons why this could happen:
- You don't have write access to the installation directory.
- The permissions in the NPM cache got messed up, and you need to run npm cache clean to fix them.
- You have over-zealous anti-virus software installed, and it's blocking file system writes.
Installation fails with Error: read ECONNRESET or Error: connect ETIMEDOUT
This error means that something went wrong with your internet connection, and the installer was not able to download the PhantomJS binary for your platform. Please try again.
I tried again, but I get ECONNRESET or ETIMEDOUT consistently.
Do you live in China, or a country with an authoritarian government? We've seen problems where the GFW or local ISP blocks github, preventing the installer from downloading the binary.
Try visiting the download page manually. If that page is blocked, you can try using a different CDN with the PHANTOMJS_CDNURL env variable described above.
I am behind a corporate proxy that uses self-signed SSL certificates to intercept encrypted traffic.
You can tell NPM and the PhantomJS installer to skip validation of ssl keys with NPM's strict-ssl setting:
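For example:

```shell
npm config set strict-ssl false

# remember to turn it back on afterwards:
npm config set strict-ssl true
```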
WARNING: Turning off strict-ssl leaves you vulnerable to attackers reading your encrypted traffic, so run this at your own risk!
I tried everything, but my network is b0rked. What do I do?
If you install PhantomJS manually, and put it on PATH, the installer will try to use the manually-installed binaries.
I'm on Debian or Ubuntu, and the installer failed because it couldn't find node
Some Linux distros tried to rename node to nodejs due to a package conflict. This is a non-portable change, and we do not try to support this. The official documentation recommends that you run apt-get install nodejs-legacy to symlink node to nodejs on those platforms, or many NodeJS programs won't work properly.
Questions, comments, bug reports, and pull requests are all welcome. Submit them at the project on GitHub. If you haven't contributed to a Medium project before, please head over to the Open Source Project and fill out an OCLA (it should be pretty painless).
Bug reports that include steps-to-reproduce (including code) are the best. Even better, make them in the form of pull requests.
Dan Pupius (personal website) and Nick Santos, supported by A Medium Corporation.
Copyright 2012 A Medium Corporation.
Licensed under the Apache License, Version 2.0. See the top-level file LICENSE.txt and http://www.apache.org/licenses/LICENSE-2.0.
New to PhantomJS? Read and study the Quick Start guide.
Download phantomjs-2.1.1-windows.zip (17.4 MB) and extract (unzip) the content.
The executable phantomjs.exe is ready to use.
Note: For this static build, the binary is self-contained with no external dependency. It will run on a fresh install of Windows Vista or later versions. There is no requirement to install Qt, WebKit, or any other libraries.
Download phantomjs-2.1.1-macosx.zip (16.4 MB) and extract (unzip) the content.
Note: For this static build, the binary is self-contained with no external dependency. It will run on a fresh install of OS X 10.7 (Lion) or later versions. There is no requirement to install Qt or any other libraries.
Download phantomjs-2.1.1-linux-x86_64.tar.bz2 (22.3 MB) and extract the content.
Note: For this static build, the binary is self-contained. There is no requirement to install Qt, WebKit, or any other libraries. It however still relies on Fontconfig (the package fontconfig or libfontconfig, depending on the distribution). The system must have GLIBCXX_3.4.9 and GLIBC_2.7.
Download phantomjs-2.1.1-linux-i686.tar.bz2 (23.0 MB) and extract the content.
Binary packages are available via pkg:
To get the source code, check the official git repository: github.com/ariya/phantomjs .
To compile PhantomJS from source (not recommended, unless it is absolutely necessary), follow the build instructions.
To verify the integrity of the downloaded files, use the following checksums.
SHA-256 checksums
Download service is kindly provided by BitBucket and previously by Google Code Project Hosting.