Description
Sitemap Generator written in Go
Sitemap Generator alternatives and similar packages
Based on the "Command Line" category.
Alternatively, view Sitemap Generator alternatives based on common mentions on social networks and blogs.
- Rich Interactive Widgets for Terminal UIs: Terminal UI library with rich, interactive widgets, written in Golang.
- survey: DISCONTINUED. A Golang library for building interactive and accessible prompts with full support for Windows and POSIX terminals.
- tcell: Tcell is an alternate terminal package, similar in some ways to termbox, but better in others.
- pterm: ✨ PTerm is a modern Go module to easily beautify console output. Featuring charts, progress bars, tables, trees, text input, select menus and much more 🚀. It's completely configurable and 100% cross-platform compatible.
- cointop: DISCONTINUED. A fast and lightweight interactive terminal-based UI application for tracking cryptocurrencies 🚀.
- The Platinum Searcher: A code search tool similar to ack and the_silver_searcher (ag). It supports multiple platforms and encodings.
- asciigraph: Go package to make lightweight ASCII line graphs ╭┈╯ in command-line apps with no other dependencies.
- CLI Color: 🎨 Terminal color rendering library supporting 8/16 colors, 256 colors, and RGB color output, with Print/Sprintf-style methods, compatible with Windows environments.
- go-size-analyzer: A tool for analyzing the size of compiled Go binaries, offering cross-platform support, detailed breakdowns, and multiple output formats.
README
Scrape 
Scrape is a minimalistic, depth-controlled web scraping project. It can be used as a command-line tool or integrated into your own project. Scrape also supports sitemap generation as an output.
Scrape Response
Once scraping of the given URL is done, the API returns the following structure.
package scrape

import (
	"net/url"
	"regexp"
)

// Response holds the scraped response.
type Response struct {
	BaseURL      *url.URL            // starting URL at maxDepth 0
	UniqueURLs   map[string]int      // unique URLs crawled and how many times each URL was seen
	URLsPerDepth map[int][]*url.URL  // URLs found at each depth
	SkippedURLs  map[string][]string // URLs extracted from source URLs that failed domainRegex (if given) or are invalid
	ErrorURLs    map[string]error    // details on why a URL was not crawled
	DomainRegex  *regexp.Regexp      // restricts crawling to URLs matching the given domain
	MaxDepth     int                 // max depth of the crawl; -1 means no limit
	Interrupted  bool                // true if the scraping was interrupted
}
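For example, a minimal sketch of reading these fields after a crawl, using the Start API described further below (the import path is assumed from the go get path in the next section, and the URL is a placeholder):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/vedhavyas/scrape"
)

func main() {
	// Crawl with no depth limit, restricted to the base URL's domain.
	resp, err := scrape.Start(context.Background(), "https://example.com")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("base URL:", resp.BaseURL)
	fmt.Println("unique URLs crawled:", len(resp.UniqueURLs))
	for depth, urls := range resp.URLsPerDepth {
		fmt.Printf("depth %d: %d URLs\n", depth, len(urls))
	}
	fmt.Println("skipped URLs:", len(resp.SkippedURLs))
	fmt.Println("errored URLs:", len(resp.ErrorURLs))
	if resp.Interrupted {
		fmt.Println("the crawl was interrupted before completing")
	}
}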
Command line:
Installation:
go get github.com/vedhavyas/scrape/cmd/scrape/
Available command line options:
Usage of ./scrape:
  -domain-regex string (optional)
        Domain regex to limit crawls to. Defaults to the base URL's domain
  -max-depth int (optional)
        Max depth to crawl (default -1)
  -sitemap string (optional)
        File location to write the sitemap to
  -url string (required)
        Starting URL (default "https://vedhavyas.com")
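For example, a crawl limited to two levels of depth that also writes a sitemap could be invoked as follows (the URL, regex, and file name are placeholders; the binary is assumed to be on your PATH after go get):

scrape -url https://example.com -max-depth 2 -domain-regex "example\.com" -sitemap sitemap.xml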
Output
Scrape supports 2 types of output.
- Printing all of the above collected data to stdout from the Response.
- Generating a sitemap XML file (if a file path is passed) from the Response.
As a Package
Scrape can be integrated into any Go project through the given APIs.
As a package, you will have access to the above-mentioned Response and all the data in it.
At this point, the following APIs are available.
Start
func Start(ctx context.Context, url string) (resp *Response, err error)
Start begins scraping with no depth limit (-1), restricted to the base URL's domain.
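Since Start takes a context, a caller can bound an otherwise unlimited crawl. A sketch, assuming the same imports as the earlier example plus "time" (the 30-second timeout is arbitrary):

// Stop the crawl after 30 seconds even if pages remain.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

resp, err := scrape.Start(ctx, "https://example.com")
if err != nil {
	log.Fatal(err)
}
// Presumably Interrupted reports whether the timeout cut the crawl short
// (an assumption based on the field's description above).
fmt.Println("interrupted:", resp.Interrupted)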
StartWithDepth
func StartWithDepth(ctx context.Context, url string, maxDepth int) (resp *Response, err error)
StartWithDepth begins scraping with the given max depth, restricted to the base URL's domain.
StartWithDepthAndDomainRegex
func StartWithDepthAndDomainRegex(ctx context.Context, url string, maxDepth int, domainRegex string) (resp *Response, err error)
StartWithDepthAndDomainRegex begins scraping with the given max depth and domain regex.
StartWithDomainRegex
func StartWithDomainRegex(ctx context.Context, url, domainRegex string) (resp *Response, err error)
StartWithDomainRegex begins scraping with no depth limit (-1) and the given domain regex.
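A sketch of a regex-restricted crawl, reusing the imports and context from the earlier examples (the pattern is only illustrative; whether it is matched against the host or the full URL is not specified above):

// Hypothetical pattern: keep the crawl on example.com and its www subdomain.
resp, err := scrape.StartWithDomainRegex(ctx, "https://example.com", `(www\.)?example\.com`)
if err != nil {
	log.Fatal(err)
}
// URLs that failed the regex (or were invalid) end up in SkippedURLs.
fmt.Println("skipped URLs:", len(resp.SkippedURLs))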
Sitemap
func Sitemap(resp *Response, file string) error
Sitemap generates a sitemap from the given response and writes it to the given file.
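Putting it together, a sketch of crawling a site to a fixed depth and writing the sitemap to disk, assuming the same imports as the earlier examples (the URL, depth, and file name are placeholders):

resp, err := scrape.StartWithDepth(context.Background(), "https://example.com", 3)
if err != nil {
	log.Fatal(err)
}
if err := scrape.Sitemap(resp, "sitemap.xml"); err != nil {
	log.Fatal(err)
}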
Feedback and Contributions
- If you think something is missing, please feel free to raise an issue.
- If you would like to work on an open issue, feel free to announce yourself in the issue's comments.