How to deal with fear of taking dependencies
The team I'm in creates components that can be used by the company's partners to integrate with our platform.
As such, I agree we should take extreme care when introducing (third-party) dependencies. Currently we have no third-party dependencies at all, and we have to stay on the lowest API level of the framework.
Some examples:
- We are forced to stay on the lowest API level of the framework (.NET Standard). The reasoning behind this is that a new platform could one day arrive that only supports that very low API level.
- We have implemented our own components for (de)serializing JSON and are in the process of doing the same for JWT. This functionality is available at a higher level of the framework API.
- We have implemented a wrapper around the HTTP framework of the standard library, because we don't want to take a dependency on the HTTP implementation of the standard library.
- All of the code for mapping to/from XML is written "by hand", again for the same reason.
I feel we are taking it too far. I'm wondering how to deal with this, since I think it greatly impacts our velocity.
architecture .net dependencies third-party-libraries code-ownership
Is there a justification for this (e.g., external requirement) or is it being done out of ignorance?
– Blrfl
2 days ago
Do an experiment with a small part of the codebase: create an isolation layer that doesn't try to be a generic library, but defines an abstract interface that models your needs; then put both your own implementation and a 3rd party dependency behind it, and compare how the two versions work/perform. Weigh the pros and cons, assess how easy (or how hard) it would be to swap implementations, then make a decision. In short, test things out in a relatively low-risk way, see what happens, then decide.
– Filip Milovanović
2 days ago
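The experiment described in this comment can be sketched like this. All of the type and member names here are illustrative, not from the question's codebase; the third-party adapter assumes the Newtonsoft.Json NuGet package is available:

```csharp
// An abstract interface that models only what this codebase needs from JSON;
// callers depend on this, never on a concrete serializer.
public interface IJsonSerializer
{
    string Serialize<T>(T value);
    T Deserialize<T>(string json);
}

// Adapter over the team's existing hand-written implementation.
public class HomeGrownJsonSerializer : IJsonSerializer
{
    public string Serialize<T>(T value)
    {
        // Wire up the existing hand-written serializer here.
        throw new System.NotImplementedException();
    }

    public T Deserialize<T>(string json)
    {
        throw new System.NotImplementedException();
    }
}

// Adapter over a third-party library (Json.NET in this sketch).
public class NewtonsoftJsonSerializer : IJsonSerializer
{
    public string Serialize<T>(T value) =>
        Newtonsoft.Json.JsonConvert.SerializeObject(value);

    public T Deserialize<T>(string json) =>
        Newtonsoft.Json.JsonConvert.DeserializeObject<T>(json);
}
```

Because calling code sees only `IJsonSerializer`, the two implementations can be benchmarked against each other and swapped later without touching the callers.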
"Currently we have no third-party dependencies" This always makes me laugh when people claim this. Of course you do. You've not written your own compiler, IDE, or implementation of any standard libraries. You've not written any of the shared object libs that you use indirectly (or directly). When you realise how much 3rd party software you depend on, you can drop the "dependencies are bad" idea and just enjoy not re-inventing the wheel. I would flag the dependencies that you do have and ask why they're acceptable but JSON parsing isn't.
– UKMonkey
2 days ago
@UKMonkey: allow me to rephrase: we don't link with any third party libraries. :-p
– Bertus
2 days ago
That said, the alternative has its own drawbacks, like never finishing projects. But it does promote software jobs and employment :)
– marshal craft
2 days ago
6 Answers
... We are forced to stay on the lowest API level of the framework (.NET Standard) …
To me this highlights that you are not only potentially restricting yourselves too much, but may also be heading for a nasty fall with your approach.
.NET Standard is not, and never will be "the lowest API level of the framework". The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight.
Depending on which version of .NET Standard you are targeting, you can end up with a very rich set of APIs that are compatible with .NET Framework, .NET Core, Mono, and Xamarin. And there are many third-party libraries that are .NET Standard compatible that will therefore work on all these platforms.
Then there is .NET Standard 2.1, likely to be released in the Autumn of 2019. It will be supported by .NET Core, Mono and Xamarin. It will not be supported by any version of the .NET Framework, at least for the foreseeable future, and quite possibly ever. So in the near future, far from being "the lowest API level of the framework", .NET Standard will supersede the framework and have APIs that the latter doesn't support.
So be very careful with "The reasoning behind this is that a new platform could one day arrive that only supports that very low API level" as it's quite likely that new platforms will in fact support a higher level API than the old framework does.
Then there's the issue of third-party libraries. JSON.NET, for example, is compatible with .NET Standard. Any library compatible with .NET Standard is guaranteed - API-wise - to work with all .NET implementations that are compatible with that version of .NET Standard. So you gain no additional compatibility by avoiding it and creating your own JSON library. You simply create more work for yourselves and incur unnecessary costs for your company.
So yes, you definitely are taking this too far in my view.
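For illustration, here is a minimal round trip with Json.NET. The `PartnerEvent` type is made up for the example; the point is that the same package and the same calls work everywhere the targeted .NET Standard version is implemented:

```csharp
using Newtonsoft.Json;

// Illustrative DTO; not a type from the question.
public class PartnerEvent
{
    public string Name { get; set; }
    public int Retries { get; set; }
}

public static class JsonRoundTrip
{
    public static void Main()
    {
        var evt = new PartnerEvent { Name = "sync", Retries = 3 };

        // These calls behave identically on .NET Framework, .NET Core,
        // Mono and Xamarin, because Json.NET targets .NET Standard.
        string json = JsonConvert.SerializeObject(evt);
        var back = JsonConvert.DeserializeObject<PartnerEvent>(json);

        System.Console.WriteLine(json);         // {"Name":"sync","Retries":3}
        System.Console.WriteLine(back.Retries); // 3
    }
}
```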
"You simply create more work for yourselves and incur unnecessary costs for your company." - and security liabilities. Does your JSON encoder crash with a stack overflow if you give it a recursive object? Does your parser handle escaped characters correctly? Does it reject unescaped characters that it should? How about unpaired surrogate characters? Does it overflow when the JSON encodes a number larger than 2^64? Or is it just a tiny eval wrapper with some sanity checks that are easily bypassed?
– John Dvorak
yesterday
"The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight." I'll go out on a limb and claim that there are at least some APIs in that subset that are not supported by all possible implementations that ever existed (and nobody cares about Windows Phone or Silverlight any more, not even Microsoft). Using .NET Standard 2.0 as a target for a modern framework seems very prudent and not particularly limiting. Updating to 2.1 is a different story, but there's no indication that they'd do so.
– Voo
yesterday
We are forced to stay on the lowest API level of the framework (.NET Standard). The reasoning behind this is that a new platform could one day arrive that only supports that very low API level.
The reasoning here is rather backwards. Older, lower API levels are more likely to become obsolete and unsupported than newer ones. While I agree that staying a comfortable way behind the "cutting edge" is sensible to ensure a reasonable level of compatibility in the scenario you mention, never moving forward is beyond extreme.
We have implemented our own components for (de)serializing JSON, and are in the process of doing the same for JWT. This is available in a higher level of the framework API.
We have implemented a wrapper around the HTTP framework of the standard library because we don't want to take a dependency on the HTTP implementation of the standard library.
All of the code for mapping to/from XML is written "by hand", again for the same reason.
This is madness. Even if you don't want to use standard library functions for whatever reason, open source libraries exist with commercially compatible licenses that do all of the above. They've already been written, extensively tested from a functionality, security and API design point of view, and used extensively in many other projects.
If the worst happens and that project goes away or stops being maintained, then you've got the code to build the library anyway, and you can assign someone to maintain it. And you're likely still in a much better position than if you'd rolled your own, since in reality you'll have more tested, cleaner, more maintainable code to look after.
In the much more likely scenario that the project is maintained, and bugs or exploits are found in those libraries, you'll know about them so can do something about it - such as upgrading to a newer version free of charge, or patching your version with the fix if you've taken a copy.
And even if you can't, switching to another library is still easier and better than rolling your own.
– Lightness Races in Orbit
yesterday
Excellent point that lower level stuff dies faster. That's the whole point of establishing abstractions.
– Lightness Races in Orbit
yesterday
"Older, lower API levels are more likely to become obsolete and unsupported than newer ones". Huh? The .NET Standards are built on top of each other as far as I know (meaning 2.0 is 1.3 + X). Also, the Standards are simply that: standards, not implementations. It makes no sense to talk about a standard becoming unsupported; at most, specific implementations of that standard might in the future (but see the earlier point for why that's also not a concern). If your library doesn't need anything outside of .NET Standard 1.3, there's absolutely no reason to change it to 2.0.
– Voo
16 hours ago
On the whole, these constraints are good for your customers: even a popular open source library might be impossible for them to use for some reason.
For example, they may have signed a contract with their customers promising not to use open source products.
However, as you point out, these features are not without cost.
- Time to market
- Size of package
- Performance
I would raise these downsides and talk with customers to find out if they really need the uber levels of compatibility you are offering.
If all the customers already use Json.NET, for example, then using it in your product rather than your own deserialisation code reduces your product's size and improves it.
If you introduce a second version of your product, one which uses third-party libraries, alongside the 'compatible' one, you could judge the uptake of both. Will customers use the third-party version to get the latest features a bit earlier, or stick with the 'compatible' version?
Yes I obviously agree, and I would add "security" to your list. There's some potential that you might introduce a vulnerability in your code, especially with things like JSON/JWT, compared to well tested frameworks and definitely the standard library.
– Bertus
2 days ago
Yes, it's hard to make the list because obviously things like security and performance could go both ways. But there is an obvious conflict of interest between finishing features and ensuring internal components are fully featured/understood.
– Ewan
2 days ago
"they may have signed a contract with their customers promising not to use open source products" - they're using .NET Standard, which is open source. It's a bad idea to sign that contract when you're basing your entire product on an open source framework.
– Stephen
2 days ago
And still people do it.
– Ewan
2 days ago
Short answer is that you should start introducing third-party dependencies. During your next stand-up meeting, tell everyone that the next week at work will be the most fun they have had in years: they'll replace the JSON and XML components with open-source, standard-library solutions. Tell everyone that they have three days to replace the JSON component. Celebrate after it's done. Have a party. This is worth celebrating.
This may be tongue in cheek but it's not unrealistic. I joined a company where a "senior" dev (senior by education only) had tasked a junior dev with writing a state machine library. It had five developer-months in it and it was still buggy, so I ripped it out and replaced it with a turnkey solution in a matter of a couple days.
– TKK
yesterday
Basically it all comes down to effort vs. risk.
By adding a dependency, updating your framework, or using a higher-level API, you lower your effort but take on risk. So I would suggest doing a SWOT analysis.
- Strengths: Less effort, because you don't have to code it yourself.
- Weaknesses: It's not as custom designed for your special needs as a handcrafted solution.
- Opportunities: Time to market is shorter. You might profit from external developments.
- Threats: You might upset customers with additional dependencies.
As you can see the additional effort to develop a handcrafted solution is an investment into lowering your threats. Now you can make a strategic decision.
Split your component libraries into a "Core" set, that have no dependencies (essentially what you are doing now) and a "Common" set, that have dependencies on your "Core" and 3rd party libraries.
That way if someone only wants "Core" functionality, they can have it.
If someone wants "Common" functionality, they can have it.
And you can manage what is "Core" versus "Common". You can add functionality more quickly to "Common", and move it to your own "Core" implementation if/when it makes sense to provide your own implementation.
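One way to express that split is in the project files themselves. This is a sketch; the `Acme.*` project names and the package choice are illustrative:

```xml
<!-- Acme.Core.csproj: no third-party PackageReference at all -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>

<!-- Acme.Common.csproj: depends on Core plus vetted third-party packages -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\Acme.Core\Acme.Core.csproj" />
    <PackageReference Include="Newtonsoft.Json" Version="12.0.1" />
  </ItemGroup>
</Project>
```

Partners who need the dependency-free guarantee reference only `Acme.Core`; everyone else gets the richer `Acme.Common`.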
New contributor
add a comment |
protected by gnat 6 hours ago
Thank you for your interest in this question.
Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).
Would you like to answer one of these unanswered questions instead?
6 Answers
6
active
oldest
votes
6 Answers
6
active
oldest
votes
active
oldest
votes
active
oldest
votes
... We are forced to stay on the lowest API level of the framework (.NET Standard) …
This to me highlights the fact that, not only are you potentially restricting yourselves too much, you may also be heading for a nasty fall with your approach.
.NET Standard is not, and never will be "the lowest API level of the framework". The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight.
Depending on which version of .NET Standard you are targeting, you can end up with a very rich set of APIs that are compatible with .NET Framework, .NET Core, Mono, and Xamarin. And there are many third-party libraries that are .NET Standard compatible that will therefore work on all these platforms.
Then there is .NET Standard 2.1, likely to be released in the Autumn of 2019. It will be supported by .NET Core, Mono and Xamarin. It will not be supported by any version of the .NET Framework, at least for the foreseeable future, and quite likely always. So in the near future, far from being "the lowest API level of the framework", .NET Standard will supersede the framework and have APIs that aren't supported by the latter.
So be very careful with "The reasoning behind this is that a new platform could one day arrive that only supports that very low API level" as it's quite likely that new platforms will in fact support a higher level API than the old framework does.
Then there's the issue of third-party libraries. JSON.NET for example is compatible with .NET Standard. Any library compatible with .NET Standard is guaranteed - API-wise - to work with all .NET implementations that are compatible with that version of .NET Standard. So you achieve no additional compatibility by not using it and creating your JSON library. You simply create more work for yourselves and incur unnecessary costs for your company.
So yes, you definitely are taking this too far in my view.
12
"You simply create more work for yourselves and incur unnecessary costs for your company." - and security liabilities. Does your JSON encoder crash with a stack overflow if you give it a recursive object? Does your parser handle escaped characters correctly? Does it reject unescaped characters that it should? How about unpaired surrogate characters? Does it overflow when the JSON encodes a number larger than 2^64? Or is it just a tinyeval
wrapper with some sanity checks that are easily bypassed?
– John Dvorak
yesterday
4
"The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight." I'll go out on a limb and claim that there's at least some APIs in that subset that are not supported by all possible implementations that ever existed (and nobody cares about WinPhone or Silvernight any more, not even microsoft). Using .NetStandard 2.0 as a target for a modern framework seems very prudent and not particularly limiting. Updating to 2.1 is a different story but there's no indication that they'd do so.
– Voo
yesterday
add a comment |
... We are forced to stay on the lowest API level of the framework (.NET Standard) …
This to me highlights the fact that, not only are you potentially restricting yourselves too much, you may also be heading for a nasty fall with your approach.
.NET Standard is not, and never will be "the lowest API level of the framework". The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight.
Depending on which version of .NET Standard you are targeting, you can end up with a very rich set of APIs that are compatible with .NET Framework, .NET Core, Mono, and Xamarin. And there are many third-party libraries that are .NET Standard compatible that will therefore work on all these platforms.
Then there is .NET Standard 2.1, likely to be released in the Autumn of 2019. It will be supported by .NET Core, Mono and Xamarin. It will not be supported by any version of the .NET Framework, at least for the foreseeable future, and quite likely always. So in the near future, far from being "the lowest API level of the framework", .NET Standard will supersede the framework and have APIs that aren't supported by the latter.
So be very careful with "The reasoning behind this is that a new platform could one day arrive that only supports that very low API level" as it's quite likely that new platforms will in fact support a higher level API than the old framework does.
Then there's the issue of third-party libraries. JSON.NET for example is compatible with .NET Standard. Any library compatible with .NET Standard is guaranteed - API-wise - to work with all .NET implementations that are compatible with that version of .NET Standard. So you achieve no additional compatibility by not using it and creating your JSON library. You simply create more work for yourselves and incur unnecessary costs for your company.
So yes, you definitely are taking this too far in my view.
12
"You simply create more work for yourselves and incur unnecessary costs for your company." - and security liabilities. Does your JSON encoder crash with a stack overflow if you give it a recursive object? Does your parser handle escaped characters correctly? Does it reject unescaped characters that it should? How about unpaired surrogate characters? Does it overflow when the JSON encodes a number larger than 2^64? Or is it just a tinyeval
wrapper with some sanity checks that are easily bypassed?
– John Dvorak
yesterday
4
"The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight." I'll go out on a limb and claim that there's at least some APIs in that subset that are not supported by all possible implementations that ever existed (and nobody cares about WinPhone or Silvernight any more, not even microsoft). Using .NetStandard 2.0 as a target for a modern framework seems very prudent and not particularly limiting. Updating to 2.1 is a different story but there's no indication that they'd do so.
– Voo
yesterday
add a comment |
... We are forced to stay on the lowest API level of the framework (.NET Standard) …
This to me highlights the fact that, not only are you potentially restricting yourselves too much, you may also be heading for a nasty fall with your approach.
.NET Standard is not, and never will be "the lowest API level of the framework". The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight.
Depending on which version of .NET Standard you are targeting, you can end up with a very rich set of APIs that are compatible with .NET Framework, .NET Core, Mono, and Xamarin. And there are many third-party libraries that are .NET Standard compatible that will therefore work on all these platforms.
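To make the targeting point concrete, here is a minimal sketch of an SDK-style project file for a library that targets .NET Standard 2.0 and consumes a .NET Standard-compatible package (the version number is illustrative):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Json.NET targets .NET Standard, so this one library works on
         .NET Framework, .NET Core, Mono and Xamarin alike -->
    <PackageReference Include="Newtonsoft.Json" Version="12.0.1" />
  </ItemGroup>
</Project>
```

Any application on any of those platforms can reference this library without platform-specific builds; that is exactly the compatibility guarantee .NET Standard provides.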
Then there is .NET Standard 2.1, likely to be released in the autumn of 2019. It will be supported by .NET Core, Mono and Xamarin. It will not be supported by any version of the .NET Framework, at least for the foreseeable future, and quite possibly ever. So in the near future, far from being "the lowest API level of the framework", .NET Standard will supersede the framework and offer APIs that the framework lacks.
So be very careful with "The reasoning behind this is that a new platform could one day arrive that only supports that very low API level" as it's quite likely that new platforms will in fact support a higher level API than the old framework does.
Then there's the issue of third-party libraries. JSON.NET, for example, is compatible with .NET Standard. Any library compatible with .NET Standard is guaranteed - API-wise - to work with all .NET implementations that are compatible with that version of .NET Standard. So you achieve no additional compatibility by shunning it and creating your own JSON library. You simply create more work for yourselves and incur unnecessary costs for your company.
So yes, you definitely are taking this too far in my view.
edited yesterday by Peter Mortensen
answered 2 days ago by David Arno
12
"You simply create more work for yourselves and incur unnecessary costs for your company." - and security liabilities. Does your JSON encoder crash with a stack overflow if you give it a recursive object? Does your parser handle escaped characters correctly? Does it reject unescaped characters that it should? How about unpaired surrogate characters? Does it overflow when the JSON encodes a number larger than 2^64? Or is it just a tiny eval wrapper with some sanity checks that are easily bypassed?
– John Dvorak
yesterday
4
"The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight." I'll go out on a limb and claim that there are at least some APIs in that subset that are not supported by all possible implementations that ever existed (and nobody cares about Windows Phone or Silverlight any more, not even Microsoft). Using .NET Standard 2.0 as a target for a modern framework seems very prudent and not particularly limiting. Updating to 2.1 is a different story, but there's no indication that they'd do so.
– Voo
yesterday
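The comment above about hand-rolled JSON encoders is easy to demonstrate. A mature library already handles the edge cases a home-grown one tends to miss; here is a sketch of two of them using Python's standard-library json module (the pitfalls themselves are language-agnostic):

```python
import json

# 1. Recursive objects: a naive encoder recurses until the stack
#    overflows; a mature library detects the cycle and raises an error.
a = []
a.append(a)  # the list now contains itself
try:
    json.dumps(a)
    cycle_caught = False
except ValueError:  # "Circular reference detected"
    cycle_caught = True
print(cycle_caught)  # True

# 2. Numbers larger than 2^64: fixed-width integer buffers silently
#    overflow, but a correct encoder round-trips the value exactly.
big = 2**64 + 1
print(json.loads(json.dumps(big)) == big)  # True
```

Escaping rules, rejected control characters, and unpaired surrogates are similar: each is a small, well-tested branch in an established library and a latent bug in a rewrite.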
We are forced to stay on the lowest API level of the framework (.net standard). The reasoning behind this is that a new platform could one day arrive that only supports that very low API level.
The reasoning here is rather backwards. Older, lower API levels are more likely to become obsolete and unsupported than newer ones. While I agree that staying a comfortable way behind the "cutting edge" is sensible to ensure a reasonable level of compatibility in the scenario you mention, never moving forward is beyond extreme.
We have implemented our own components for (de)serializing JSON, and are in the process of doing the same for JWT. This is available in a higher level of the framework API.
We have implemented a wrapper around the HTTP framework of the standard library because we don't want to take a dependency on the HTTP implementation of the standard library.
All of the code for mapping to/from XML is written "by hand", again for the same reason.
This is madness. Even if you don't want to use standard library functions for whatever reason, open source libraries exist with commercially compatible licenses that do all of the above. They've already been written, extensively tested from a functionality, security and API design point of view, and used extensively in many other projects.
If the worst happens and that project goes away or stops being maintained, you've still got the code to build the library, and you can assign someone to maintain it. You're likely still in a much better position than if you'd rolled your own, since in reality you'll have more tested, cleaner, more maintainable code to look after.
In the much more likely scenario that the project is maintained, and bugs or exploits are found in those libraries, you'll know about them so can do something about it - such as upgrading to a newer version free of charge, or patching your version with the fix if you've taken a copy.
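To make the "already written and extensively tested" point concrete: mapping a small XML document is a few lines with any standard-library parser. A Python sketch (the equivalent in .NET would lean on System.Xml, with similar brevity):

```python
import xml.etree.ElementTree as ET

# Parse a small document and map it to a plain dictionary.
doc = "<user><name>Ada</name><id>7</id></user>"
root = ET.fromstring(doc)
user = {"name": root.findtext("name"), "id": int(root.findtext("id"))}
print(user)  # {'name': 'Ada', 'id': 7}
```

Writing, testing, and hardening this "by hand" for real-world XML (namespaces, entities, encodings) is months of work that the standard library has already absorbed.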
answered 2 days ago by berry120
3
And even if you can't, switching to another library is still easier and better than rolling your own.
– Lightness Races in Orbit
yesterday
4
Excellent point that lower level stuff dies faster. That's the whole point of establishing abstractions.
– Lightness Races in Orbit
yesterday
"Older, lower API levels are more likely to become obsolete and unsupported than newer ones". Huh? The .NET Standards are built on top of each other as far as I know (meaning 2.0 is 1.3 + X). Also, the Standards are simply that: standards, not implementations. It makes no sense to talk about a standard becoming unsupported; at most, specific implementations of that standard might be in the future (but see the earlier point on why that's also not a concern). If your library doesn't need anything outside of .NET Standard 1.3, there's absolutely no reason to change it to 2.0.
– Voo
16 hours ago
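The abstraction point made in the comments can be sketched as a thin adapter: the codebase depends only on your own seam, so which library sits behind it is a one-file decision. A Python illustration with hypothetical names (the same pattern works as a C# interface):

```python
import json


class JsonSerializer:
    """Our own seam: the rest of the codebase depends only on this class,
    so the library behind it can be swapped in exactly one place."""

    def serialize(self, obj) -> str:
        # Today this delegates to the standard library; tomorrow it
        # could delegate to a third-party library with no caller changes.
        return json.dumps(obj)

    def deserialize(self, text: str):
        return json.loads(text)


s = JsonSerializer()
print(s.deserialize(s.serialize({"ok": True})))  # {'ok': True}
```

This buys the compatibility insurance the OP's team wants at the cost of a few dozen lines of wrapper, rather than a full reimplementation.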
On the whole these things are good for your customers. Even a popular open source library might be impossible for them to use for some reason.
For example, they may have signed a contract with their customers promising not to use open source products.
However, as you point out, these features are not without cost.
- Time to market
- Size of package
- Performance
I would raise these downsides and talk with customers to find out if they really need the uber levels of compatibility you are offering.
If all the customers already use Json.NET, for example, then using it in your product rather than your own deserialisation code reduces the product's size and improves it.
If you introduce a second version of your product, one which uses third-party libraries alongside the 'compatible' one, you could judge the uptake of both. Will customers use the third-party version to get the latest features a bit earlier, or stick with the 'compatible' version?
edited yesterday by Peter Mortensen
answered 2 days ago by Ewan
10
Yes I obviously agree, and I would add "security" to your list. There's some potential that you might introduce a vulnerability in your code, especially with things like JSON/JWT, compared to well tested frameworks and definitely the standard library.
– Bertus
2 days ago
Yes, it's hard to make the list because obviously things like security and performance could go both ways. But there is an obvious conflict of interest between finishing features and ensuring internal components are fully featured/understood.
– Ewan
2 days ago
10
"they may have signed a contract with their customers promising not to use open source products" - they're using .NET Standard, which is open source. It's a bad idea to sign that contract when you're basing your entire product on an open source framework.
– Stephen
2 days ago
And still people do it
– Ewan
2 days ago
The short answer is that you should start introducing third-party dependencies. At your next stand-up meeting, tell everyone that the next week at work will be the most fun they have had in years: they'll be replacing the hand-rolled JSON and XML components with open-source and standard-library solutions. Tell everyone they have three days to replace the JSON component. Celebrate after it's done. Have a party. This is worth celebrating.
New contributor
answered 2 days ago
Double Vision Stout Fat Heavy
692
Basically it all comes down to effort vs. risk.
By adding a dependency, updating your framework, or using a higher-level API, you lower your effort but take on risk. So I would suggest doing a SWOT analysis.
- Strengths: Less effort, because you don't have to code it yourself.
- Weaknesses: It's not as custom-designed for your special needs as a handcrafted solution.
- Opportunities: Time to market is shorter. You might profit from external developments.
- Threats: You might upset customers with additional dependencies.
As you can see, the additional effort of developing a handcrafted solution is an investment in lowering your threats. Now you can make a strategic decision.
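The trade-off can even be made explicit with a toy scorecard; all factor names, weights, and ratings below are invented purely for illustration, not a prescribed method:

```python
# Toy build-vs-buy scorecard: rate each SWOT-style factor (0-5,
# higher is better for us) for both options, weight by how much
# the factor matters, and compare totals.
FACTORS = {                 # factor: weight
    "effort_saved": 3,
    "fit_to_needs": 2,
    "time_to_market": 3,
    "customer_acceptance": 2,
}

def score(option):
    """Weighted sum of the per-factor ratings."""
    return sum(FACTORS[f] * rating for f, rating in option.items())

use_library = {"effort_saved": 5, "fit_to_needs": 3,
               "time_to_market": 5, "customer_acceptance": 2}
hand_rolled = {"effort_saved": 1, "fit_to_needs": 5,
               "time_to_market": 1, "customer_acceptance": 5}

print("library:", score(use_library), "hand-rolled:", score(hand_rolled))
```

The numbers themselves are arbitrary; the exercise forces the team to state which factors actually matter before the decision is made.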
answered 2 days ago
Dominic Hofer
1243
Split your component libraries into a "Core" set that has no dependencies (essentially what you are doing now) and a "Common" set that depends on your "Core" and on third-party libraries.
That way, if someone only wants "Core" functionality, they can have it.
If someone wants "Common" functionality, they can have it too.
And you can manage what is "Core" versus "Common": you can add functionality to "Common" more quickly, and move it to your own "Core" implementation if and when it makes sense to provide one.
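A minimal sketch of that layering in Python; the `JsonCodec` interface and class names are invented for illustration. "Core" defines the interface with no dependencies, while "Common" plugs in a standard-library (or third-party) backend:

```python
import json
from abc import ABC, abstractmethod

# --- "Core": no dependencies beyond the language itself -------------
class JsonCodec(ABC):
    """Interface that any backend, Core or Common, must satisfy."""
    @abstractmethod
    def loads(self, text: str): ...
    @abstractmethod
    def dumps(self, obj) -> str: ...

# --- "Common": depends on Core plus an external/stdlib backend ------
class StdlibJsonCodec(JsonCodec):
    """Common implementation backed by the standard library."""
    def loads(self, text):
        return json.loads(text)
    def dumps(self, obj):
        return json.dumps(obj)

def process(codec: JsonCodec, text: str) -> str:
    """Callers program against the Core interface only, so the
    backend can be swapped without touching this code."""
    data = codec.loads(text)
    data["seen"] = True
    return codec.dumps(data)

out = process(StdlibJsonCodec(), '{"id": 7}')
```

Consumers who only want "Core" never import the "Common" module, so they never pick up its dependencies.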
New contributor
answered 10 hours ago
Turtle1363
1012
protected by gnat 6 hours ago
16
Is there a justification for this (e.g., external requirement) or is it being done out of ignorance?
– Blrfl
2 days ago
6
Do an experiment with some small part of codebase, create an isolation layer that doesn't try to be a generic library, but defines an abstract interface that models your needs; then put both your own implementation and a 3rd party dependency behind it, and compare how the two versions work/perform. Weigh out the pros and cons, assess how easy (or how hard) it would be to swap implementations, then make a decision. In short, test things out in a relatively low-risk way, see what happens, then decide.
– Filip Milovanović
2 days ago
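The experiment described in this comment can be sketched as follows; the `FlatStore` interface and both backends are invented names, and the "third-party" class is a stand-in for a wrapper around a real external library:

```python
from abc import ABC, abstractmethod

# One narrow interface that models only what the application needs,
# with two interchangeable backends behind it.
class FlatStore(ABC):
    @abstractmethod
    def put(self, key, value): ...
    @abstractmethod
    def get(self, key): ...

class InHouseStore(FlatStore):
    """Your own implementation."""
    def __init__(self):
        self._d = {}
    def put(self, key, value):
        self._d[key] = value
    def get(self, key):
        return self._d.get(key)

class ThirdPartyStore(FlatStore):
    """Imagine this wrapping an external library instead of a dict."""
    def __init__(self):
        self._d = {}
    def put(self, key, value):
        self._d[key] = value
    def get(self, key):
        return self._d.get(key)

def exercise(store: FlatStore) -> bool:
    """The same checks run against either backend, so swapping
    implementations is a one-line change at the call site."""
    store.put("a", 1)
    return store.get("a") == 1 and store.get("missing") is None

results = [exercise(InHouseStore()), exercise(ThirdPartyStore())]
```

Because both versions sit behind the same interface, the comparison (performance, bugs, maintenance cost) can be made on real usage rather than on speculation.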
69
"Currently we have no third-party dependencies" This always makes me laugh when people claim this. Of course you do. You've not written your own compiler, IDE, or implementation of any standard libraries. You've not written any of the shared object libs that you use indirectly (or directly). When you realise how much 3rd-party software and how many libraries you depend on, you can drop the "dependencies are bad" idea, and just enjoy not re-inventing the wheel. I would just flag the dependencies that you have, and then ask why they're acceptable, but JSON parsing isn't.
– UKMonkey
2 days ago
4
@UKMonkey: allow me to rephrase: we don't link with any third party libraries. :-p
– Bertus
2 days ago
7
That said, there are drawbacks to the alternative, like never finishing projects. But it does promote software jobs and employment :)
– marshal craft
2 days ago