Scrape Images from a Web Page
Usage
images_scrap(link, imgpath = getwd(), extn, askRobot = FALSE)
Arguments
- link
the URL of the web page to scrape
- imgpath
the directory where the images will be saved. Defaults to the current working directory
- extn
the file extension of the images to download: "png", "jpeg", ...
- askRobot
logical. Should the function consult the site's robots.txt file to check whether scraping is allowed? Defaults to FALSE.
Value
Called for its side effect of downloading images; no useful value is returned.
Examples
\dontrun{
images_scrap(link = "https://posit.co/", extn = "jpg")
}
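A fuller sketch of a typical call, using the remaining arguments (the target URL and output directory are illustrative, and the example is not run because it requires network access):

```r
\dontrun{
# Download every PNG image from the page into a local "imgs" folder,
# consulting robots.txt first before scraping
images_scrap(
  link     = "https://posit.co/",
  imgpath  = file.path(getwd(), "imgs"),  # illustrative output directory
  extn     = "png",
  askRobot = TRUE
)
}
```

Note that `imgpath` must point to an existing directory; create it first with `dir.create()` if needed.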