Big numbers like that certainly show that training (of employees to not get phished) is worth it.
Somewhat lucky for everyone that in this case it was “only” money, not a security or data breach, which would have more lasting damage.
I sometimes do things (cleanup, refactorings) off-ticket / not part of the ticket. It can be a light alternative when other stuff is complicated and demotivating. Depending on your environment and team/contract setup, simply doing it could be more difficult though.
If it serves your satisfaction and productivity, and is good for the product, then it’s not wasted. Not everything has to be - or even can be - preplanned.
Or Modified Viable/Valuable Product
I don’t have experience with that in particular. I’ll share my more general, tangential thoughts.
MVP is minimal. Extending the scope like that makes me very skeptical (towards scoping and the processes).
Everything you are concerned with would be important topics for retrospectives, or even meetings with management. But of course those don’t exist, or aren’t open, in all environments. In my team I could openly raise such concerns.
If you’re always rushing to a deadline, or feel like that, think of what you can do and influence to improve that. Retrospectives? Team discussions? Partly tuning out of management-given focus and doing what you deem important and right? Looking for a different team or employer?
Remember, when we do that, we are getting a binary file. But in my imaginary example, we are getting a ‘video description’ in the form of PostScript.
The example is a reference/link with handling instructions.
When I think of video metadata, we already have and use formats for that, in a more general and versatile manner.
Maybe I’m missing the point, but sending instructions is a very different and restricted approach.
I like that we describe data and how it shall render. That way, the data is accessible for various interpretations and uses.
Where’s the source?
The post link is to the general website, with no indication.
The linked dotnet.social account describes itself with
Bot parrot for https://devblogs.microsoft.com/
If you like to bring #Microsoft #DevBlogs officially to the #Fediverse,
So I have to take away that it’s not federated after all. There’s only a bot sharing links.
Federated would mean more than a bot, right?
The title question is very broad, varied, and difficult. It depends.
For anything that is not a small script or a small, obvious, and obviously scoped task, you can’t assess it at first glance.
So for a project/task/extension like you described, it’s a matter of:
Are there docs or a guide specifically for what I want to do/add? If yes, the expectation is that it’s viable and reasonably doable with low risk.
If there is no guide, the assessment is already an exploration and analysis: How is the project structured? Are there docs for it or for my concerns? Where do I expect the change to go, and what does it have to touch? What are the risks of issues, unknown difficulties, and effort? The next step would already be prototyping, and then implementing. Both can be started with a “let’s see” timebox approach, where you remain mindful of when known or invested effort or risk increases, and you can stop or rethink.
Regarding visual client: I’ve been using TortoiseGit since early on and no other client I’ve tried came close.
I use the log view and have an overview, and an entry point to all common operations I need. Other tools often fail on good overview, blaming through earlier revisions, filterable views of commits or files, or interactive rebase.
I’ve never found pressing modifier keys to be an issue. I’ll be mindful of my use today.
I guess the hold-to-repeat input (of letters) is not used much, so it’s not a significant or noticeable loss when replaced. I’d certainly see false positives and having to type slower as deal breakers.
If you’re looking for collaboration or an audience, I’d stay with GitHub. It’s too prevalent to skip for a niche alternative, where account signup and everything being elsewhere become a barrier.
If that’s no concern to you, it’s viable.
how is this much different from have I been pwnd?
haveibeenpwned does not publish data. They provide a service of checking whether you are part of breached data. They operate as a trustworthy middleman without disclosure or sharing of data to third parties.
If you mean they also collect data like that, then yes. But what they do with it is very different from a leak.
You copied that function without understanding why it does what it does, and as a result your code IS GARBAGE.
AGAIN.
[…]
Debate continued for some time, in a cooler tone, with Torvalds offering suggestions on what he felt would be a better approach to the issues Rostedt hoped to address.
Harsh tone (in only two instances?), but he still invested in offering suggestions 🤷
I expected more behind the verb “flaming”.
Alternatives
“Like any secret, SAS tokens need to be created and handled appropriately,” said the MSRC team. “As always, we highly encourage customers to follow our best practices when using SAS tokens to minimise the risk of unintended access or abuse.”
lol - follow our best practices - ironic. Of course documented best practices don’t mean everyone follows them, even internally, but that statement still makes for humorous irony. Ambiguous; it almost implies “follow how we did it here” in my reading.
Among other things, their access levels can be easily and readily customised by the user, as can the expiry time, which could in theory allow a token to be created that never expired (the compromised one was in fact valid through to 2051, 28 years from now).
[…] it’s not easy to revoke the token either […]
Reading this, the drive to managed cloud and centralization feels like an effort to replace memory management issues as the top vulnerability cause. We - as an industry - are more aware of those than ever, and have interesting efforts like Rust adoption. And at the same time, hierarchical access tokens you can’t easily revoke, with arbitrarily configured lifetimes and access, that are hard to monitor, track, and trace (going by this article), are introduced as an entirely new set of risks and attack surface.
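The expiry concern is at least auditable: SAS tokens carry their signed expiry in the `se` query parameter. A small sketch for flagging long-lived tokens, assuming a standard SAS URL layout; the URL, function names, and threshold below are made up for illustration:

```javascript
// Flag SAS URLs whose `se` (signed expiry) parameter lies
// unreasonably far in the future. Example URL is fabricated.
function sasExpiry(sasUrl) {
  const se = new URL(sasUrl).searchParams.get("se");
  return se ? new Date(se) : null;
}

function isSuspiciouslyLongLived(sasUrl, maxDays = 365, now = new Date()) {
  const expiry = sasExpiry(sasUrl);
  if (!expiry) return false; // no expiry parameter found; review manually
  return (expiry - now) / (1000 * 60 * 60 * 24) > maxDays;
}

const exampleSasUrl =
  "https://example.blob.core.windows.net/c/f.txt?sv=2021-06-08&se=2051-10-01T00%3A00%3A00Z&sp=r&sig=abc";
```

A check like this only catches the lifetime problem, of course; it does nothing about the revocation and tracing gaps the article describes.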
Of course - cutting scope is a good call to keep it manageable and fun, and not end up with creep and what you wanted to evade in the first place. :)
Index categories are blog, docs, magazines. Have you considered indexing source code websites?
I thought I would remember a second one, but I can’t recall right now.
Subpaths on GitHub and GitLab would be a similar fashion but would require more specific filters - unless they are projects hosted on dedicated instances.
Project issue tickets may also be very relevant to developer searches!?
I have also a different problem, dev.to has a lot of good resources but also tons of SEO spam and low quality content. It’s also freaking huge and while it was for some time in the index I had to remove it and think about it some more.
Yeah, a public platform is unlikely to provide consistent content. If curation is not an explicit goal and practice there, I would not include them for the reasons you mentioned.
If indexing could happen not per domain but with more granular filters - URL base paths - that may be viable, e.g. indexing specific authors on dev.to.
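The base-path filtering could be as simple as prefix matching on the indexed URLs. A hypothetical sketch; the allow-list entries and function name are made up:

```javascript
// Hypothetical index filter: allow a whole site, or only specific
// base paths (e.g. individual authors on a shared platform).
const allowPrefixes = [
  "https://dev.to/someauthor/",  // made-up author path, for illustration
  "https://docs.example.org/",   // a whole docs site
];

function isIndexable(url) {
  return allowPrefixes.some((prefix) => url.startsWith(prefix));
}
```

A real crawler would normalize URLs first (scheme, trailing slash, tracking parameters), but the granularity idea is the same.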
I think the main issue as well as my main question is around scope.
You say it targets web developers, but the current index is quite narrow. So will you accept significant expansion of that, as long as it may be relevant to web developers? Where would you draw lines on mixed content or technologies?
The ASP.NET docs are definitely docs for web developers. But maybe not what you had in mind. Would they apply? Those docs are hosted on a platform with a lot of other docs from the dotnet space. Some may be relevant to “web developers”, others not. And the line is subjective and dynamic.
My website has some technological development resources and blog posts. But also very different things. Would that fit into scope or not?
How narrow or broad would you make the index?
I guess it’s an index for search, so noise shouldn’t be a problem as long as quality content brings gains.
JavaScript itself provides the functionality jQuery became popular for. So no. Check the standard lib first before considering helper libs.
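A few of jQuery’s helpers next to their standard-library equivalents. The DOM and AJAX cases ($(selector) → document.querySelectorAll(selector), $.ajax → fetch) are noted in comments only; the runnable lines stick to the plain utility helpers:

```javascript
// jQuery utility helpers vs. what the language now ships with.

// $.extend({}, a, b)  ->  Object.assign({}, a, b)
const merged = Object.assign({}, { a: 1 }, { b: 2 });

// $.inArray(v, arr) !== -1  ->  arr.includes(v)
const found = [1, 2, 3].includes(2);

// $.map(arr, fn)  ->  arr.map(fn)
const doubled = [1, 2, 3].map((n) => n * 2);

// $.trim(str)  ->  str.trim()
const trimmed = "  hi  ".trim();

// In the browser: $(selector) -> document.querySelectorAll(selector),
// and $.ajax(...) -> fetch(...) (not executed here).
```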
I can’t get what it describes through all the Jedi stuff and jokes.
One mentor and one apprentice? And everything else remains unspecified and open?
For FOSS, as it is described, it makes me think it’s a big investment with unclear risk (will they even stay as a contributor?). Which of course can be contextualized - but then what is left here?