.conf 2013 is around the corner!
Some important details:
- We’ve been working on the big party. We’re not going to spoil the surprise, but it is going to be AWESOME!
- Answers will be represented again this year by Splunk Support at the Answers desk. Stop by to get in-person support and answers to your Splunk questions!
- Your Answers karma can take you to .conf 2013…
- …and your app can win you up to $10k in cash!
- Got a cool use case? Nominate yourself for a Splunk Revolution Award!
…And in the dorkness binds them
Our very own dbCooper has made a nifty Splunk docs/blogs/wiki/Answers search aggregator widget!
<dbCooper> pie|home: I JUST literally created http://www.google.com/cse/home?cx=016212844476379046035:slkbhudrfhu
<@Splunky> dbCooper’s URL: “Google Custom Search – splunk>Search”
<dbCooper> it sucks them all, err, searches them all.
Are you a ponydocs fan?
#splunk denizen ekristen has forked our docs system codebase and made a great, easy-to-deploy package. Check it out!
Mealtime is real-time
<@piebob> mmm, coffffeeeee
<NickK> Coffee? I’m already on my lunch!
<N00BZ> Lunch! What about 2nd breakfast?
<zxcvbnm> We haven’t even had elevenzies yet
* puercomal throws an apple
<jtrucks> so, I have finally gotten the habit of using splunk to look at my own non-work servers’ logs as a first instinct instead of rooting around in /var/log
<Baconesqu> heh, “non-work server”
<Baconesqu> I cut back on my recreational IT work a couple of years ago.
<Baconesqu> I took up cooking instead.
<Baconesqu> I probably make the best risotto of any splunk admin you know.
Ducky explains the true meaning of No Limits:
<tmichael> is there a way to temporarily override the max search results? been asked to give max count of api calls per second over past 90 days. can’t figure out how to do this cuz i keep running up against 50,000 result limit. appreciate the assist.
<Dutchy> make a file splunkhome/etc/system/local/limits.conf
<Dutchy> and add
<Dutchy> maxresultrows = 100000
<Nerf> tmichael: Do you just want a count?
<tmichael> Nerf: yeah
<tmichael> The specified span would result in too many (>50000) rows.
<tmichael> has to be more efficient way to get top second over past 90 days without hitting this limit
<duckfez> tmichael: sometimes you can beat this with clever searches. if all you care about is the heaviest second each day over the past 90 days, something like this may work … foo bar baz | bucket span=1s _time | stats count by _time | timechart span=1h max(count)
<tmichael> basically, can’t do more than 13 hours given that my buckets are 1 second each
<duckfez> it (basically) buckets twice .. once to get a per-second count, a second time to get the max per-second-count for an hour (or day)
<duckfez> the part you lose is which second in that hour was the one with the max
<tmichael> damn duckfez – that appears to do it
<tmichael> cuz requestor doesn’t really care what time/second it occured, just what the max was over past 90 days
<duckfez> tmichael: I call that the “yo, dawg, I heard you like aggregates … so I put an aggregate in your aggregate so you can summarize while you summarize”
<tmichael> might add that in the email response!
<tmichael> duckfez: you never fail me. really appreciate the help here
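One footnote for anyone following Dutchy’s advice at home: in limits.conf, maxresultrows lives under the [searchresults] stanza, so a bare key on its own line won’t take effect. A minimal sketch of the file Dutchy is describing (100000 is just his example value; the default is 50000, and raising it increases the memory a search can consume):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
# maxresultrows must sit under the [searchresults] stanza.
# Default is 50000 -- raise with care, since bigger result
# sets mean more memory per search.
[searchresults]
maxresultrows = 100000
```

Restart splunkd after editing so the new limit is picked up. Or skip the whole thing and use duckfez’s double-aggregation trick above, which stays under the limit by summarizing before it counts.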
Splunk love, redux
We’re not happy until you’re happy
<automine> it is days like today
<automine> when I am stuck with a piece of crappy software
<automine> with horrible documentation
<automine> and support
<automine> that I am very thankful that I spend the rest of my time dealing with Splunk
<jtrucks> every time someone comes to my office (a cubicle in a room of 5 cubes) and asks to add more stuff to splunk, at least one other person in the room shouts “SPLUNK ALL THE THINGS”
<BennyWy> jtrucks: sounds like my kind of office